
national regulators for failing to assess the product appropriately before it was deployed in healthcare settings in that country? Does this responsibility for harm also extend to the developers of the AI tool, for example, for inaccurate coding or poor-quality training data? 498 It must also be determined who bears liability where AI tools exhibit technical autonomy, that is, where they act independently of human intervention because of their capacity to learn and adapt, producing outputs unforeseen by any of the parties mentioned above, including the developers themselves. 499 From the preceding, it is evident that responsibility is distributed across actors in various parts of the healthcare delivery process in which AI is applied, making it difficult to ascribe blame for harm. This diffusion of responsibility makes it difficult to determine whom to hold accountable for poor outcomes, which poses a significant risk to those seeking healthcare services where AI tools are adopted.

To answer the questions posed, developing nations need to adopt solid regulatory frameworks that replicate or emulate the approach taken under international instruments, such as the EU's administrative, regulatory approach to AI, which proposes an AI Act and a novel AI Liability Directive (AILD) in conjunction with a revised EU Product Liability Directive (PLD). 500 These instruments constitute a proposed cornerstone of AI regulation and employ complementary approaches, regulating AI directly (via specific rules in the AI Act) and indirectly (via incentives generated by the liability framework). The proposed AILD and PLD seek to integrate the AI Act into civil (product) liability so as to align the law with the new risks and realities this emerging technology poses. 501 The proposed AI Act outlines a regulatory and oversight framework for AI systems, mainly those considered high-risk, a category to which AI tools developed and applied in healthcare settings belong, instituting obligations for creating and using them and banning specific harmful AI systems. 502 The proposed AI Act imposes strict liability on all operators and developers of these AI tools based on causation, which limits liability gaps. 503 Strict liability is distinguished from fault-based liability, under which the trigger for liability is the tortfeasor's intent or negligence. 504 Product defectiveness is the crucial requirement that triggers the producer's liability under the proposed PLD, which deals mainly with physical harm, including death,

498 Morley J et al, 'The Ethics of AI in Health Care: A Mapping Review' (2020) 260 Social Science & Medicine 113172.
499 Tessier C, 'Robots autonomy: Some technical issues', Autonomy and Artificial Intelligence: A Threat or Savior? (Springer, 2017), p. 180.
500 European Commission, Proposal for a Directive of the European Parliament and of the Council on Liability for Defective Products, repealing Council Directive 85/374/EEC.
501 Hacker P, 'The European AI liability directives – Critique of a half-hearted approach and lessons for the future' (Cornell University, 25 November 2022) accessed 21 October 2023.
502 European Commission, 'Document 52021PC0206: Proposal for a Regulation of the European Parliament and of The Council laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts', 2021/206.
503 Ibid., Art. 6.
504 Deakin S, Adams Z, Markesinis and Deakin's Tort Law (8th edn, OUP, 2019), p. 87.

