It is notable that with both of the models in Figure 2 and Figure 3, the clinician retains the final decision, as recommended by NHS England. Are they still acting as a liability sink for the AI? We would argue that their role here is far more traditional: they are integrating a variety of data and opinions, in a manner of working that has become familiar with the advent of the multidisciplinary team.24 Clinicians should feel far more comfortable accepting liability for a decision over which they have genuine understanding and agency, and the socio-technical system as a whole will be more acceptable to both clinicians and patients, as it retains compatibility with patient-centred care.
The question remaining in this setup, however, is how liability should be assigned where the advice or information provided by the AI is defective. Because these models return the clinician to a more traditional role, it becomes more appropriate to treat the AI as a standard medical device. Defects could then be dealt with through product liability, suitably adjusted to address the known problems of applying such regimes to AI systems, such as proof of causation and the failure of the existing Consumer Protection Act 1987 (implementing the European Union’s Product Liability Directive (‘PLD’)25) to cover unembodied software. The need for such adjustments has been recognised by the European Union, which has published reform proposals for the PLD. If we do not want clinicians to become liability sinks, similar reforms may need to be considered in the United Kingdom.
In summary, AI systems developed using current models risk treating clinicians as “liability sinks” who absorb liability that could otherwise be shared across all those involved in the design, institution, running, and use of the system. Alternative models can return the patient to the centre of decision-making, and can also allow the clinician to do what they are best at, rather than simply acting as a final check on a machine.
Summary
· The benefits of AI in healthcare will only be realised if we consider the whole clinical context and the AI’s role in it.
· The current, standard model of AI-supported decision-making in healthcare risks reducing the clinician’s role to a mere ‘sense check’ on the AI, whilst at the same time leaving them legally accountable for decisions made using it.
· This model means that clinicians risk becoming “liability sinks”, unfairly absorbing liability for the consequences of an AI’s recommendations without having sufficient understanding of, or practical control over, how those recommendations were reached.
· Furthermore, this could worsen the “second victim” experience of clinicians.
· It also means that clinicians are less able to do what they are best at, specifically exercising sensitivity to patient preferences in a shared clinician-patient decision-making process.
· There are alternatives to this model that can have a more positive impact on clinicians and patients alike.
References
1. Elish MC. Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction. Engaging Science, Technology, and Society. 2019 Mar 23;5:40–60.
2. NHS England. Information Governance Guidance: Artificial Intelligence [Internet]. NHS England - Transformation Directorate; 2022 [cited 2022 Nov 3]. Available from: https://transform.england.nhs.uk/information-governance/guidance/artificial-intelligence/
3. Bainbridge L. Ironies of automation. In: Johannsen G, Rijnsdorp JE, editors. Analysis, Design and Evaluation of Man–Machine Systems [Internet]. Pergamon; 1983 [cited 2023 Feb 22]. p. 129–35. Available from: https://www.sciencedirect.com/science/article/pii/B9780080293486500269
4. Engineering Analysis 22-002 [Internet]. National Highway Traffic Safety Administration, Office of Defects Investigation; 2022 [cited 2022 Nov 3]. Available from: https://static.nhtsa.gov/odi/inv/2022/INOA-EA22002-3184.PDF
5. Habli I, Lawton T, Porter Z. Artificial intelligence in health care: accountability and safety. Bulletin of the World Health Organization. 2020 Feb;98(4):251–6.
6. Wu AW, Steckelberg RC. Medical error, incident investigation and the second victim: doing better but feeling worse? BMJ Qual Saf. 2012 Apr 1;21(4):267–70.
7. Sirriyeh R, Lawton R, Gardner P, Armitage G. Coping with medical error: a systematic review of papers to assess the effects of involvement in medical errors on healthcare professionals’ psychological well-being. Qual Saf Health Care. 2010 Dec 1;19(6):e43–e43.
8. Engel KG, Rosenthal M, Sutcliffe KM. Residents’ responses to medical error: coping, learning, and change. Acad Med. 2006 Jan;81(1):86–93.
9. Gianfrancesco MA, Tamang S, Yazdany J, Schmajuk G. Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data. JAMA Internal Medicine. 2018 Nov 1;178(11):1544–7.
10. McDermid JA, Jia Y, Porter Z, Habli I. Artificial intelligence explainability: the technical and ethical dimensions. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences. 2021 Aug 16;379(2207):20200363.
11. Chesterman S. Artificial intelligence and the limits of legal personality. ICLQ. 2020;69(4):819–44.
12. Smith H, Fotheringham K. Artificial intelligence in clinical decision-making: Rethinking liability. Medical Law International. 2020 Jun 1;20(2):131–54.
13. Wilsher v Essex Area Health Authority [1987] QB 730 (CA).
14. Junior v McNicol. The Times Law Reports, 26 March 1959.
15. Armitage M, editor. Chapter 10: Persons Professing Some Special Skill. In: Charlesworth & Percy on Negligence. 15th ed. London: Sweet & Maxwell; p. 10–147. (Common Law Library).
16. Burton S, Habli I, Lawton T, McDermid J, Morgan P, Porter Z. Mind the gaps: Assuring the safety of autonomous systems from an engineering, ethical, and legal perspective. Artificial Intelligence. 2020 Feb 1;279:103201.
17. Heywood R. Systemic Negligence and NHS Hospitals: An Underutilised Argument. King’s Law Journal. 2021 Sep 2;32(3):437–65.
18. Morgan P. Chapter 6: Tort Law and Artificial Intelligence – Vicarious Liability. In: Lim E, Morgan P, editors. The Cambridge Handbook of Private Law and Artificial Intelligence. Cambridge: Cambridge University Press.
19. Abbott R. The Reasonable Robot: Artificial Intelligence and the Law [Internet]. Cambridge: Cambridge University Press; 2020 [cited 2023 Feb 22]. Available from: https://www.cambridge.org/core/books/reasonable-robot/092E62F0087270F1ADD9F62160F23B5A
20. Bjerring JC, Busch J. Artificial Intelligence and Patient-Centered Decision-Making. Philos Technol. 2021 Jun 1;34(2):349–71.
21. Birch J, Creel KA, Jha AK, Plutynski A. Clinical decisions using AI must consider patient values. Nat Med [Internet]. 2022 Jan 31 [cited 2022 Feb 1]; Available from: https://www.nature.com/articles/s41591-021-01624-y
22. Jia Y, McDermid JA, Lawton T, Habli I. The Role of Explainability in Assuring Safety of Machine Learning in Healthcare. IEEE Transactions on Emerging Topics in Computing. 2022;1–1.
23. Mittelstadt B, Russell C, Wachter S. Explaining Explanations in AI. In: Proceedings of the Conference on Fairness, Accountability, and Transparency [Internet]. New York, NY, USA: Association for Computing Machinery; 2019 [cited 2023 Feb 22]. p. 279–88. (FAT* ’19). Available from: https://doi.org/10.1145/3287560.3287574
24. Epstein NE. Multidisciplinary in-hospital teams improve patient outcomes: A review. Surgical Neurology International. 2014;5(Suppl 7):S295.
25. Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products [Internet]. OJ L Jul 25, 1985. Available from: http://data.europa.eu/eli/dir/1985/374/oj/eng
Acknowledgements
This work was supported by The MPS Foundation Grant Programme. The MPS Foundation was established to undertake research, analysis, education and training to enable healthcare professionals to provide better care for their patients and improve their own wellbeing. To achieve this, it supports and funds research across the world that will make a difference and can be applied in the workplace. The work was also supported by the Engineering and Physical Sciences Research Council (EP/W011239/1).
Conflicts of interest
TL has received an honorarium for a lecture on this topic from Al Sultan United Medical Co; he is head of clinical artificial intelligence at Bradford Teaching Hospitals NHS Foundation Trust, and a potential liability sink.
All other authors report no conflicts of interest.
Authors’ contributions
TL, ZP, IH: conceptualisation, funding acquisition, writing (original draft, review & editing), analysis, visualisation
PM: writing (original draft, review & editing), analysis, visualisation
SH, AC, NH, JI, YJ, VS: analysis, visualisation, writing (review & editing)