Artificial Intelligence Misdiagnosis: Who Is to Blame?

Posted: 28th February 2018 by Michael Carson, Greg McEwen
Last updated: 3rd July 2019

The news that an artificial intelligence (AI) system has been developed that can read scans to diagnose heart disease and lung cancer raises clear questions about accountability. Put simply, if an AI system makes a misdiagnosis, who is to blame: the hospital or the developer of the software?

Michael Carson, Senior Lawyer at Fletchers Solicitors, says: “Any software system capable of making life-changing recommendations will need to be developed in close collaboration with the end user. The output of the software will be the result of calculations made using parameters stipulated by the end user, not the developer. For instance, it’s been reported that the new AI system will be able to search scans for clumps of cells in the lungs and then advise whether the cells are harmless or potentially cancerous. It can only do this if radiographers and oncologists are heavily involved in the development process.

“Once the AI software is up and running, the developer will insist that the end user signs a contract confirming they are happy the software meets the specifications set out at the beginning of development. This ought to indemnify the developer against any claims for misdiagnosis.

“Interestingly, the Consumer Rights Act 2015 contains provisions relating to digital content, including software. It provides that a user may seek a remedy if the software a) is not of satisfactory quality, b) is not fit for a particular purpose, or c) does not match its description (or specification). It may well be, then, that the consumer, i.e. the NHS, could seek to recover damages paid out in a medical negligence claim via this route.

“It seems, therefore, that we should view AI as just another piece of hospital equipment. Any errors or misdiagnoses made by the AI should be dealt with as a medical negligence claim, with the AI merely being a tool used by the hospital.”

Rise of the Robo-Diagnosis

With rapid progress in medical technology and a greater global reliance on big data, the role of the doctor as sole diagnostician is changing dramatically. Greg McEwen, Partner at BLM, considers what this might mean for our trust in human decisions and the accuracy of diagnoses.

The word ‘diagnosis’ can be defined as “the act of identifying a disease from its signs and symptoms”. As a society, we have traditionally looked to our healthcare professionals to diagnose and treat our ailments, from minor aches and pains to major, life-threatening conditions.

The existence of lawyers who specialise in clinical negligence, acting for both claimants and defendants, is a reminder of the industry that has grown up around litigation in this area. In 2015-16, the NHS Litigation Authority received nearly 11,000 new claims for clinical negligence and nearly 1,000 referrals about the performance of doctors, dentists and pharmacists. Of course, not all claims relate to diagnostic error. Likewise, not every error in diagnosis results in a claim. Nor should it, since the mere fact of an incorrect diagnosis does not equate to negligence. But could advances in technology lead to earlier or more accurate diagnoses?

Technology has long played a part in the diagnostic process. From cancer screening to MRI scanning to optometry, computers have been employed with a view to informing and improving key decision making. The caveat is that the technology is operated and, most importantly, interpreted and acted upon by people exercising judgment. Diagnosis remains an art as much as a science, but that has not stopped the onward march of technology, with AI and big data chipping away at the role of diagnostician and decision maker. Whether through a wearable consumer device such as a Fitbit or AI trained to identify potentially cancerous tumours, the average patient today is exposed to technology that can do everything from monitoring heart rate, nutritional intake and sleep patterns all the way up to identifying serious, life-threatening conditions.

Some of this technology has the potential to reduce or replace human input, but will it lead to better outcomes? There certainly seems to be a belief that it will among major stakeholders, healthcare providers and technology companies alike. IBM’s Watson supercomputer is currently being used in the US to help produce tailored treatment plans for cancer patients. Here in the UK, Babylon Health is reported to have secured £50m to further develop its AI diagnostic tool, itself a development of its existing clinical triage app, trialled in the NHS.

Are we hurtling head first into futuristic healthcare, then? Does this threaten the role of doctor as sole diagnostician? And what happens if AI gets it wrong?

The obvious concern over AI diagnoses centres on the issue of liability for errors. Where would medical and legal responsibility fall if a patient incorrectly received the all-clear on the basis of an AI algorithm? It seems unlikely that this technology will be used to diagnose patients in isolation, for various reasons, not least that the lines of clinical responsibility and legal liability need to remain clear. Patients need to know who is ultimately responsible for their medical treatment and who they can look to for redress in the event that something goes wrong. We can therefore assume that primary responsibility is likely to remain with the healthcare provider.

Yet whether healthcare professionals will be able to measure the accuracy and reliability of AI output remains uncertain, given the complexity of the software and the protection of proprietary information. For insurers and healthcare organisations, this step into the unknown opens up the important issue of digital malpractice, lengthening the chain of responsibility to manufacturers and software developers. Increasingly, we have to consider whether mishaps and mistakes fall into the category of negligence, product liability or both, particularly as we move through a period in which doctors increasingly work in tandem with AI and big data.

There is cause for optimism too: AI brings great opportunity. People are not machines, and human error is as much a possibility in healthcare as in any other walk of life. The number of known human diseases has been put at anywhere between 10,000 and 30,000, depending on the criteria employed. Some estimates have suggested that as many as one in six diagnoses within the NHS turn out to be incorrect. Using AI as an assistive tool has the potential to improve accuracy and reduce diagnostic errors within an increasingly stretched Health Service. The use of AI to detect heart disease, for example, has been estimated to save the NHS over £300 million a year.

There is, however, a flip side when comparing machines with their human counterparts. Diagnoses and treatment plans are not simply a matter of logic and deduction; they affect real people. The fact that a computer-aided cancer diagnosis is accurate doesn’t make it any less devastating for the recipient. Machines cannot empathise. There will always be a need for healthcare professionals in the diagnostic process, however advanced the technology becomes.

What we can say is that the risks are broadening along with the benefits, for all involved in the delivery of healthcare in the digital age. As technology increasingly plays a part in the diagnostic process, we’re likely to see a host of new issues arising around the attribution of liability, arguably the price of progress.

Greg McEwen
Partner, BLM
www.blmlaw.com

About BLM

  • BLM is the UK and Ireland’s leading insurance and risk law firm. With a turnover of over £100 million, we advise insurers, Lloyd’s syndicates, MGAs, brokers, corporate policyholders, professional indemnifiers and other market organisations.
  • With more than 200 partners and 1,600 staff, BLM is instructed on a broad spectrum of legal issues and acts for customers in key sectors such as construction and property, corporate risks, healthcare, insurance and indemnity, leisure, public sector, retail, technology, media and telecoms, transport and the London Market.
  • BLM has 13 offices across the UK and Ireland in Belfast, Birmingham, Bristol, Cardiff, Dublin, Edinburgh, Glasgow, Leeds, Liverpool, London, Derry, Manchester and Southampton.
  • BLM presently advises 12 of the top 15 insurance companies. We aim to be the firm of choice for customers seeking concise advice, stellar service, value for money and long-term partnership.
  • The firm was formed from the combination of two leading practices, Berrymans Lace Mawer and HBM Sayers, in May 2014.
  • On 1 December 2014 BLM combined with leading Northern Ireland based risk and insurance law firm Campbell Fitzpatrick Solicitors (CFS).
  • On 10 March 2015 BLM relocated its London operations to a central flagship office at Plantation Place in EC3.
