With both models in Figure 2 and Figure 3, the clinician retains the final decision, as recommended by NHS England. Are they still a liability sink for the AI? The models may not remove liability, but we would argue that the clinician’s role here is far more traditional: they are integrating a variety of data and opinions, a way of working that has become familiar with the advent of the multidisciplinary team.28 Clinicians should feel much more comfortable accepting liability for a decision over which they have genuine understanding and agency, and the socio-technical system as a whole will be far more acceptable to both clinicians and patients, as it retains compatibility with patient-centred care.
 
The question remaining in this setup, however, is how to assign liability for defective AI advice or information. As these models return the clinician to a more traditional role, the current legal position becomes more appropriate: treating the AI as a standard medical device. This could be dealt with via product liability, suitably adjusted for the problems such regimes face when applied to AI systems, such as proof of causation and the failure, discussed above, of the PLD19 to cover unembodied software. The European Union has recognised this need and published reform proposals for the PLD. If we do not want clinicians to become liability sinks, similar reforms may be needed in the United Kingdom.
In summary, AI systems developed under current models risk treating clinicians as “liability sinks”, absorbing liability that could otherwise be shared across all those involved in the design, institution, running, and use of the system. Alternative models can return the patient to the centre of decision-making, and allow the clinician to do what they are best at, rather than simply acting as a final check on a machine.
 

Summary

·       The benefits of AI in healthcare will only be realised if we consider the whole clinical context and the AI’s role in it.
·       The current, standard model of AI-supported decision-making in healthcare risks reducing the clinician’s role to a mere ‘sense check’ on the AI, whilst still leaving them legally accountable for decisions made using AI.
·       Under this model, clinicians risk becoming “liability sinks”, unfairly absorbing liability for the consequences of an AI’s recommendations without sufficient understanding of, or practical control over, how those recommendations were reached.
·       Furthermore, absorbing such liability could worsen the “second victim” experience of clinicians.
·       It also leaves clinicians less able to do what they are best at, namely exercising sensitivity to patient preferences in a shared clinician-patient decision-making process.
·       There are alternatives to this model that can have a more positive impact on clinicians and patients alike.