risks automating and worsening that bias as it continues to learn through subsequent cycles, thereby perpetuating it. 488 Algorithmic bias may also occur when an AI model, trained on a given dataset, produces results wholly unintended by the model’s creators; because AI tools, particularly those applying machine learning, rely on vast amounts of data, such bias can be encoded in the AI’s modeling choices or in the data itself. 489 The AI system then acts unfairly, making decisions that favor some of the populations represented in the input data distribution over others. 490

Ideally, to prevent bias, AI tools would have access to exhaustive sources of population electronic health record data with which to build representative models for diagnosing diseases, predicting adverse effects, and recommending ongoing treatments. In developing countries, however, such comprehensive data sources are not always available, owing to various socio-economic constraints, typically financial and infrastructure deficits and other technical problems. 491 Bias may affect the decisions of AI systems in various ways, for instance where biased information such as an individual’s gender, place of birth, socio-economic background, or skills is relied upon to determine that individual’s treatment outcomes. Bias in datasets and algorithms may also produce unequal access to healthcare for different groups of individuals, resulting in unfair treatment and discrimination perpetuated through AI systems. 492

Unfortunately, bias in healthcare applications of AI is not uncommon. A study of an AI system widely applied in the US healthcare sector documented racial bias perpetuated by a tool whose stated goal was to identify patients who needed extra attention for their complex health needs. The unintended outcome was that the tool used health costs as a proxy for health needs, thereby reproducing a real-world racial inequity: for historical and socio-economic reasons, less money is typically spent on the healthcare of Black patients than on White patients requiring the same level of care. The algorithm therefore falsely concluded that Black patients were healthier than equally sick White patients, with the result that considerably sicker Black patients received the same level of care as healthier White patients. 493 The bias absorbed by the AI tool thus contributed to worse outcomes for Black patients by reducing their likelihood of receiving the appropriate level of care. A simplified simulation of this proxy mechanism is sketched after the notes below.

488 Agarwal R et al, ‘Addressing Algorithmic Bias and the Perpetuation of Health Inequities: An AI Bias Aware Framework’ (2023) 12 Health Policy and Technology 100702.
489 Mittelstadt BD et al, ‘The Ethics of Algorithms: Mapping the Debate’ (2016) 3(2) Big Data & Society 1.
490 Zou J, Schiebinger L, ‘AI can be sexist and racist – it’s time to make it fair’ (2018) 559(7714) Nature 324.
491 Chen I, Szolovits P, Ghassemi M, ‘Can AI Help Reduce Disparities in General Medical and Mental Health Care?’ (2019) 21(2) The AMA Journal of Ethics 167.
492 Mehrabi N et al, ‘A Survey on Bias and Fairness in Machine Learning’ (2021) 54(6) ACM Computing Surveys 1.
493 Obermeyer Z et al, ‘Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations’ (2019) 366 Science 447.
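To make the proxy mechanism concrete, the following short Python sketch simulates it on entirely synthetic data. Everything in it is an illustrative assumption rather than a figure from the Obermeyer study: the group labels "A" and "B", the 40% spending gap at equal need, and the $4,000-5,000 cost band are invented solely to show how a cost-trained score can rate equally sick patients differently.

import random

random.seed(0)

def patient(group):
    # One synthetic patient: (group, true_need, observed_cost).
    # true_need is the unobserved severity of illness; observed_cost is the
    # label a cost-trained risk model learns to predict. For group "B", 40%
    # less is spent at equal need (an illustrative assumption, not a figure
    # from the study).
    need = random.uniform(0, 10)
    cost = need * 1000 + random.gauss(0, 300)  # spending tracks need, noisily
    if group == "B":
        cost *= 0.6                            # equal sickness, lower spending
    return group, need, max(cost, 0.0)

patients = [patient(g) for g in ("A", "B") for _ in range(50000)]

# A model trained to predict cost gives the same score to patients with the
# same expected cost. Comparing true need inside one observed-cost band shows
# what that equal score hides.
band = [p for p in patients if 4000 <= p[2] < 5000]
for g in ("A", "B"):
    needs = [need for grp, need, _ in band if grp == g]
    print(f"group {g}: mean true need at a cost score of $4,000-5,000 "
          f"= {sum(needs) / len(needs):.1f}")
# Typical output: about 4.5 for group A versus 7.5 for group B; the same
# score, yet group B patients are considerably sicker.

On this toy data, patients from the lower-spending group must be markedly sicker to receive the same score, which mirrors the pattern described above: a cost-based cutoff systematically under-serves the group on which less money is spent at equal need.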