Understanding Bias in Artificial Intelligence Algorithms Intended for Clinical Care
Abstract
Artificial Intelligence (AI) has facilitated improvement across various domains within healthcare, including drug development, invasive surgery, patient-care assistance, and diagnosis. The use of AI algorithms to predict healthcare outcomes is intended to serve as a vital tool for revitalizing the current healthcare ecosystem by minimizing personal biases and clinician errors, thereby leading to improved patient outcomes and reduced health inequalities. However, despite these intended benefits, the possibility of inherent biases in such models, whether introduced directly through algorithmic bias or through model calibration with biased data, remains a contentious issue that threatens to impede the continued use of AI in clinical care. Additionally, AI models are known to exhibit poor predictive performance when applied to diverse populations, a consequence of datasets that lack demographic diversity and that inherently reflect disparities in outcomes.
Likewise, studies have demonstrated that individuals of low socioeconomic status, Black and Indigenous populations, and other minority and historically under-represented groups have limited access to healthcare resources, resulting in insufficient data with which to accurately calibrate AI models. This failure to account for racial and other demographic disparities establishes a concerning precedent of biased models that interpret risk through the narrow lens of race, thereby limiting treatment and exacerbating health disparities, contrary to the well-intentioned purpose for which the models were created. This paper discusses the inherent demographic biases within AI algorithms currently in use for clinical care. Furthermore, it addresses the dangers such biases pose when evaluating future healthcare outcomes, costs, and decisions of life and death.