black-box model further makes it difficult to verify the accuracy of the AI algorithm’s decision-making mechanism and prevents healthcare professionals from disclosing the inner workings of the AI model. 469 The black-box model also leaves users of AI systems, including physicians and patients, with little opportunity to interrogate and challenge the operation of AI algorithms and their outcomes. This makes it difficult to guarantee a transparent, explainable decision-making process that allows for informed consent by patients, thereby affecting their ability to choose or refuse particular treatments. 470 The black-box model likewise prevents the decisions of AI algorithms from being audited by competent authorities and makes harm untraceable when it occurs. 471

The ability to interpret and explain the decisions of an AI algorithm enables those applying AI to healthcare services to examine its decision-making process, fostering confidence in understanding how the AI model reaches its results and increasing patient safety by providing information essential to interpreting the algorithm’s underlying functioning. Explaining the decision-making process gives healthcare operators insight into the AI’s reasoning and builds trust that the algorithm is making correct, unbiased decisions based on the facts pertinent to the treatment of the patient to whom the AI system is applied. Explainability is vital to decision-making about treatment and disease prevention, particularly in cases concerning specific patients, as patients must understand that their treatment decisions are meaningfully made. 472 When the decision-making process of an AI system is thoroughly understood, the system becomes transparent, which promotes patient safety and trust in it.

Developing countries that wish to take up AI to improve access to healthcare should encourage transparency in the decision-making processes of AI algorithms. They can do this by shifting their focus from matters of trust in AI systems towards promoting the development and application of responsible AI systems, for example by introducing post-market surveillance and audits of medical care delivery and outcomes. 473 Governments of developing nations should also encourage the design of AI systems that take account of local particularities, including the multidimensionality of health (its physical, mental, emotional, social, spiritual, vocational, and other dimensions), in accordance with the principles of fairness and justice.

469 Gunning D et al, ‘XAI—Explainable Artificial Intelligence’ (2019) 4(37) Science Robotics 7120.
470 Ploug T and Holm S, ‘The Right to Refuse Diagnostics and Treatment Planning by Artificial Intelligence’ (2019) 23 Medicine, Health Care and Philosophy 107.
471 Wachter R, The Digital Doctor: Hope, Hype, and Harm at the Dawn of Medicine’s Computer Age (McGraw Hill Education, 2017).
472 Wachter S, Mittelstadt B and Floridi L, ‘Transparent, Explainable, and Accountable AI for Robotics’ (2017) 2(6) Science Robotics 6080.
473 La Fors K, Custers B and Keymolen E, ‘Reassessing Values for Emerging Big Data Technologies: Integrating Design-Based and Application-Based Approaches’ (2019) 21 Ethics and Information Technology 209.
