AI's Impact on BIPOC Communities and their Health
w/ Data Science for Health Equity (DSxHE)
The Drawbacks
Introduction
Communicate for Health Justice (CFHJ) and Data Science for Health Equity (DSxHE) have united to address artificial intelligence's impact on BIPOC (Black, Indigenous, and People of Color) communities. Our first launch is a week-long digital awareness campaign.
This first infographic of our theme with Data Science for Health Equity focuses on the drawbacks of AI. Dive into the complexities of this powerful technology and its implications for BIPOC communities through the blog summary below.
What the research says…
Risk Prediction (United States)
A 2019 study published in Science examined roughly 50,000 patients (6,079 of whom self-identified as Black and 43,539 as white) and revealed that a widely used risk-prediction algorithm led to Black patients receiving lower-quality care than their white counterparts. The algorithm used a patient's previous healthcare spending to predict future need for extra care (Obermeyer et al., 2019).
Fairness in Screening (India)
A 2021 article in Frontiers in Artificial Intelligence examined a study that applied machine learning to pulmonary disease screening (Fletcher, Nakeshimana, & Olubeko, 2021). The purpose of the study was to develop and test a set of algorithms to predict several pulmonary diseases, including asthma, COPD, and allergic rhinitis. In India, the burden of pulmonary disease is very high: chronic diseases such as COPD are the second leading cause of death, and asthma is a major cause of disability.
Attending to Intersectional identities (European Union)
A 2022 PubMed-indexed article examined the ways algorithms are likely to reinforce intersectional discrimination, which has already been described as "a blind spot" in EU law (Wójcik, 2022). An intersectional identity might be, for example, a person who identifies as both Indigenous and Queer. Through algorithmic profiling, very precise identity data is used to classify subjects into distinct subgroups, making it difficult to account for individuals who belong to multiple demographic categories or who view themselves as intersectional minorities.
Why This Matters…
Because of economic and social disparities, Black patients often spend less on healthcare than white patients. However, lower spending does not necessarily mean lower health risk. If the algorithm interprets these lower expenditure levels as evidence that Black patients are low risk, they may be systematically disqualified from accessing necessary care (Obermeyer et al., 2019).
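The proxy problem described above can be illustrated with a toy sketch. The function name, threshold, and patient figures below are hypothetical, not taken from the study: a model that flags patients for extra care based only on prior spending will overlook an equally sick patient whose spending is low because of access barriers.

```python
# Toy illustration (not the actual algorithm from the study): when a model
# uses prior healthcare spending as a proxy for health need, patients who
# spend less -- often due to access barriers -- look "low risk" even when
# they are equally sick.

def spending_based_risk(prior_spending, threshold=5000):
    """Flag a patient for extra care only if past spending is high."""
    return prior_spending >= threshold

# Two hypothetical patients with the same illness burden
# (three chronic conditions each) but different access to care:
patient_a = {"chronic_conditions": 3, "prior_spending": 8000}
patient_b = {"chronic_conditions": 3, "prior_spending": 3000}

print(spending_based_risk(patient_a["prior_spending"]))  # True  -> gets extra care
print(spending_based_risk(patient_b["prior_spending"]))  # False -> overlooked
```

Both patients need the same level of care, but only the higher-spending one is flagged, which is the mechanism the study identified.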
Despite the researchers' efforts to establish parameters within the algorithm to mitigate bias (e.g., between genders, smokers and non-smokers), the underlying bias in this scenario stems from inherent patient characteristics (e.g., smoking behavior) and thus remains unmitigated (Fletcher, Nakeshimana, & Olubeko, 2021). To address fairness amidst bias, the researchers had to define a notion of fairness enforceable within the algorithm. Failure to rectify these biases could introduce new risks of misuse and exacerbate health disparities within AI-driven electronic health records (EHRs) and similar healthcare platforms.
Addressing AI's shortcomings in accurately representing intersectional minorities within datasets is crucial to mitigating bias (Wójcik, 2022). Failure to do so could exacerbate intersectional discrimination, particularly in healthcare, should such technology be employed for resource allocation. Given the existing prevalence of intersectional discrimination within healthcare systems, proactive measures to rectify AI biases are imperative.
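One reason intersectional minorities are hard to represent in datasets is simple arithmetic: crossing even a few demographic attributes fragments the data into many small subgroups. The sketch below uses synthetic, unevenly drawn records with hypothetical category labels to show how small the rarest intersectional subgroups become:

```python
# Toy illustration: crossing demographic attributes fragments a dataset
# into many subgroups, so intersectional minorities may be too sparsely
# represented for an algorithm to model them reliably.
from collections import Counter
import random

random.seed(0)
ethnicities = ["A", "B", "C", "D"]          # hypothetical categories
genders = ["woman", "man", "nonbinary"]
orientations = ["straight", "queer"]

# 1,000 synthetic records drawn unevenly, as real-world data often is
records = [
    (random.choices(ethnicities, weights=[70, 15, 10, 5])[0],
     random.choices(genders, weights=[49, 49, 2])[0],
     random.choices(orientations, weights=[90, 10])[0])
    for _ in range(1000)
]

counts = Counter(records)
print(f"{len(counts)} subgroups; smallest has {min(counts.values())} of 1000 records")
```

Even with only three attributes, the rarest intersections are represented by a handful of records, far too few for a model to learn their needs accurately.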
References
Obermeyer, Z., et al. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. DOI: 10.1126/science.aax2342
Fletcher, R. R., Nakeshimana, A., & Olubeko, O. (2021). Addressing fairness, bias, and appropriate use of artificial intelligence and machine learning in global health. Frontiers in Artificial Intelligence, 3, 561802.
Wójcik, M. A. (2022). Algorithmic discrimination in health care: An EU law perspective. Health and Human Rights, 24(1), 93-103. PMID: 35747275; PMCID: PMC9212826.