

Medico-legal Challenges in the Age of AI-assisted Radiology

Michael Roberts considers whether AI will deliver the accuracy needed for diagnosis in radiology.

Posted on 08 November 2023

Radiology plays a crucial role in diagnosing diseases by using imaging techniques such as X-rays and MRI scans to look inside the body. These images have traditionally been reviewed by expert human radiologists, but it is becoming increasingly common to employ artificial intelligence to assist with reviewing them. The potential benefits of such technology mean it is undoubtedly here to stay, but how do these AI tools measure up to their human counterparts?

This was the focus of Dr Louis Plesner and his team, whose recent study compared the performance of human radiologists against AI diagnostic tools in identifying common lung diseases on X-ray scans. While the AI showed promise, it fell short of the standard of human radiologists.

The study involved over 2,000 chest X-rays, a group of 72 human radiologists, and four commercially available medical AI tools. Overall, the AI systems were quite good at identifying disease, but performed less well in confirming the absence of disease, producing a high false-positive rate. The human radiologists were able to consider patient history and previous imaging studies, which the AI systems could not.

Diagnostic AI tools in radiology are deliberately designed to have a high sensitivity (correctly identifying disease) rather than a high specificity (correctly ruling out disease), so results such as these are perhaps not unexpected. Another study showed that AI tools had a specificity rate of 28%, much lower than the standard of an expert human radiologist (around 92%).
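To illustrate the distinction, the short Python sketch below calculates sensitivity and specificity from a set of test results. The counts are invented purely for demonstration, chosen so that the specificity comes out at the 28% figure quoted above; they are not taken from either study.

# Illustrative sketch with invented counts; not data from the studies discussed above.
true_positives = 90    # patients with disease correctly flagged
false_negatives = 10   # patients with disease missed
true_negatives = 28    # healthy patients correctly cleared
false_positives = 72   # healthy patients wrongly flagged

# Sensitivity: proportion of diseased patients the tool correctly identifies.
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: proportion of healthy patients the tool correctly rules out.
specificity = true_negatives / (true_negatives + false_positives)

print(f"Sensitivity: {sensitivity:.0%}")  # 90%
print(f"Specificity: {specificity:.0%}")  # 28%

A tool tuned this way rarely misses disease, but it flags a large share of healthy patients – exactly the high false-positive rate described above.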

Whilst it is understandable that sensitivity is prioritised over specificity in AI tools, so that abnormalities are not missed, correctly ruling out abnormalities is also of real value, and human radiologists are trained to optimise their accuracy on both measures.

From a medico-legal standpoint, patients have a right to expect accuracy in both respects. Just as a patient can expect a high degree of accuracy to ensure disease is not missed, so they should also expect not to be diagnosed with a disease they do not have. The latter can result in unnecessary tests, investigations, and procedures, as well as, of course, anxiety for the patient. Unnecessary treatment also has time and cost consequences for the healthcare system. The upshot is that errors in either direction can have serious medico-legal consequences as well as drain medical resources.

AI tools have made remarkable strides and will no doubt continue to do so, but these studies provide a reminder that expert human oversight remains a crucial part of healthcare. For the time being, AI must coexist with – and remain subordinate to – human expertise. Medical professionals bring not only technical knowledge but also critical judgment, empathy, and the ability to consider the broader patient context, factors that AI currently lacks.

It is also important to acknowledge that AI diagnostic tools are only as good as the data they have been trained on. In essence, the performance and accuracy of these tools depend heavily on the datasets from which they derive their knowledge and on the diversity of cases within those datasets. An AI system can only provide accurate diagnoses to the extent that its training data reflects the breadth and depth of real-world medical scenarios. Any biases inherent in a dataset will inevitably be carried over into the AI.

Careful thought and regulation will be needed, as automated processes are increasingly integrated into the medical industry, to ensure that patient safety is not sacrificed in the name of efficiency. There is a danger that technology will race ahead of legislation and regulation. Proactive, multidisciplinary engagement is required to prepare for these challenges, with collaboration between experts in the medical, technological, and legal fields. We must be especially careful to avoid a transitional gap, in which AI has created a new world of medical practice whilst the legal and regulatory framework remains in the old one.

It is also crucial that errors and mistakes arising from AI are not swept under the carpet; otherwise they are doomed to be repeated. A culture of learning must be fostered, with a willingness to openly share ‘failures’ and so-called ‘negative’ data, in order to overcome these issues and work towards a brighter future for healthcare.

Profile
Michael Roberts

Michael Roberts is a senior associate solicitor in the medical negligence department.
