The use of artificial intelligence in healthcare could create a legally complex blame game when it comes to establishing liability for medical failings, experts have warned.
The development of AI for clinical use has boomed, with researchers creating a host of tools, from algorithms to help interpret scans to systems that can aid with diagnoses. AI is also being developed to help manage hospitals, from optimising bed capacity to tackling supply chains.
But while experts say the technology could bring myriad benefits for healthcare, they also see cause for concern, from a lack of testing of AI tools' effectiveness to questions over who is responsible should a patient have a negative outcome.
Prof Derek Angus, of the University of Pittsburgh, said: “There’s definitely going to be instances where there’s the perception that something went wrong and people will look around to blame someone.”
The Jama Summit on Artificial Intelligence, hosted last year by the Journal of the American Medical Association, brought together a panoply of experts including clinicians, technology companies, regulatory bodies, insurers, ethicists, lawyers and economists.
The resulting report, of which Angus is first author, not only looks at the nature of AI tools and the areas of healthcare where they are being used, but also examines the challenges they present, including legal concerns.
Prof Glenn Cohen from Harvard law school, a co-author of the report, said patients could face difficulties showing fault in the use or design of an artificial intelligence product. There could be barriers to gaining information about its inner workings, while it could also be challenging to propose a reasonable alternative design for the product or prove a poor outcome was caused by the AI system.
He said: “The interplay between the parties may also present challenges for bringing a lawsuit – they may point to one another as the party at fault, and they may have existing agreements contractually reallocating liability or have indemnification lawsuits.”
Prof Michelle Mello, another author of the report, from Stanford law school, said courts were well equipped to resolve legal issues. “The problem is that it takes time and will involve inconsistencies in the early days, and this uncertainty elevates costs for everyone in the AI innovation and adoption ecosystem,” she said.
The report also raises concerns about how AI tools are evaluated, noting many are outside the oversight of regulators such as the US Food and Drug Administration (FDA).
Angus said: “For clinicians, effectiveness usually means improved health outcomes, but there’s no guarantee that the regulatory authority will require proof [of that]. Then once it’s out, AI tools can be deployed in so many unpredictable ways in different clinical settings, with different kinds of patients, by users who are of different levels of skills. There is very little guarantee that what seems to be a good idea in the pre-approval package is actually what you get in practice.”
The report outlines that there are currently many barriers to evaluating AI tools, including that they often need to be in clinical use before they can be fully assessed, and that current approaches to assessment are expensive and cumbersome.
Angus said it was important that funding be made available to properly assess the performance of AI tools in healthcare, with investment in digital infrastructure a key area. “One of the things that came up during the summit was [that] the tools that are best evaluated have been least adopted. The tools that are most adopted have been least evaluated.”