AI-enabled medical devices raise safety concerns

Problems with the use of AI-enabled medical devices are a concern for patient safety

April 2023

Macquarie University has published a world-first review into the safety of artificial intelligence (AI)-enabled medical devices (defined as AI systems used for the diagnosis, management, or prevention of disease) approved for use by the United States Food and Drug Administration (FDA).

The research, published in the Journal of the American Medical Informatics Association (JAMIA), reports that the algorithms behind AI-enabled medical devices—such as imaging for diagnosis and treatment, radiotherapy planning software, insulin dose calculators, clinical patient monitors, and cardiac arrhythmia detection apps—are not the only area of concern for patient safety.

Led by Dr David Lyell and Professor Farah Magrabi from the Australian Institute of Health Innovation at Macquarie University, Sydney, Australia, the research identified several kinds of safety problems that arise when AI-enabled medical devices are used in real-world settings.

“We identified safety problems across all stages of medical device use, including acquiring the data needed by AI and algorithmic errors. However, problems in the way AI-enabled medical devices were used, and what they were used for, were more likely to lead to patient harm,” Dr Lyell said.

“Most attention so far has been on the safety of the algorithms behind the AI—including biases arising from the data on which the AI is trained—and is based on theoretical investigations or simulations, but this research shows there’s more to consider when using AI in the real world,” he said.

The research analysed 266 safety events involving AI technology recorded in the FDA’s MAUDE (Manufacturer and User Facility Device Experience) database of safety reports submitted to the FDA over six years by manufacturers, healthcare professionals, patients and consumers.

Most safety events (82%) involved problems with acquiring the data needed by AI, and 11% related to device problems, including algorithm errors. However, human factors issues accounted for a proportionally greater share of the events involving patient harm than technical problems did.

Of the total reports, 16% were associated with patient harm, while 66% were hazards with the potential to cause harm. A further 4% were near-miss events in which users recognised the potential for harm and intervened.

“Reports about safety events show the need for a whole-of-system approach to ensure AI-enabled medical devices are safe, and that means also understanding how patients and health professionals interact with the systems.”

“For consumers, AI is a double-edged sword. While AI grants consumers access to capabilities exceeding their own, they may lack the expertise to fully understand what the AI is telling them and what it means for their health.”

Examples of patient harm during the use of AI-enabled medical devices identified in the review include the following:

  • Diagnostic ultrasound: Due to the input of incorrect settings, signals indicating mitral valve insufficiency were not observed on the cardiac Doppler ultrasound, delaying diagnosis. The patient later died.
  • Mammography: Incorrectly entered data resulted in biopsy markers being placed in the wrong location; the patient required surgery to remove them.
  • Radiotherapy planning: Due to data input errors by users, some patients were overdosed and some had radiation delivered to the wrong location.
  • Insulin dosing: A patient suffered hypoglycaemia when given insulin without the carbohydrates recommended by the insulin dosing software.
  • Consumer ECG: Consumers reported that their over-the-counter ECG devices indicated a ‘normal sinus rhythm’ while they were experiencing a heart attack, a condition the devices are neither indicated for nor capable of detecting. Some delayed seeking medical care based on the device result.

Dr David Lyell and Professor Farah Magrabi are available for interview; please contact chrissy.clay@mq.edu.au to arrange one.

Caption: Dr David Lyell and Professor Farah Magrabi from the Australian Institute of Health Innovation at Macquarie University.


Read the article here:

David Lyell, Ying Wang, Enrico Coiera, Farah Magrabi. More than algorithms: an analysis of safety events involving ML-enabled medical devices reported to the FDA. Journal of the American Medical Informatics Association. 2023; ocad065. https://doi.org/10.1093/jamia/ocad065


Read the Sydney Morning Herald article: A new breed of AI is changing healthcare


CENTRES RELATED TO THIS NEWS

Centre for Health Informatics

FOR FURTHER INFORMATION, PLEASE CONTACT

Chrissy Clay, Media and Research Outreach Coordinator

Follow us on Twitter @AIHI_MQ
