AI will enhance healthcare, but caution is needed

New research to improve patient safety

New research from Macquarie University shows artificial intelligence (AI) is being used in medical devices to support, not replace, clinicians, and for the first time classifies three levels of device autonomy to improve safe use.

While AI is sometimes hyped as taking over medicine, its use in medical devices is still limited in Australia, and the workforce is largely unaware that medical devices can have different levels of autonomy, says Dr David Lyell, an AI in healthcare expert from the Australian Institute of Health Innovation at Macquarie University.

Dr Lyell’s research was selected as Editor’s Choice by the leading international journal BMJ Health & Care Informatics for its unique classification of AI in medical devices. A medical device is software, or a machine or instrument, used to support the diagnosis and treatment of people in healthcare.

While AI has the potential to support clinicians in their decision making, more awareness is needed of its impact on patient safety and of how the Australian workforce will be trained to use new devices.

Patient safety is at risk if clinicians are not trained to understand a device’s level of autonomy and the specific circumstances for which it was designed. For instance, a device that supports decision making for the diagnosis of adults is often not suitable for use with children.

Dr Lyell draws an analogy with driverless cars: while they are promoted as self-driving, in reality they still require constant driver attention and a readiness to take over should problems arise. Likewise, decisions made using AI-supported medical devices should always remain the responsibility of the clinician, not the AI.

The research defined three levels of AI in medical devices, which importantly point to how the benefits of the technology can be leveraged while also keeping it safe for patients.

Assistive devices – these are characterized by an overlap between what the device and the clinician contribute. In breast cancer screening, for example, both identify possible cancers; however, clinicians are responsible for deciding what should be followed up and therefore must decide whether they agree with the cancers the AI has marked.

Autonomous information – this is characterized by a separation between what the device and the clinician contribute to the activity or decision. An example is an ECG device that monitors heart activity, interprets the results and provides information, such as a quantification of heart rhythm, which clinicians can use to inform decisions on diagnosis or treatment.

Autonomous decision – this is where the device makes the decision on a clinical task, which can then be enacted by the device or the clinician. An example is the IDx-DR diabetic retinopathy screening system in the US. General practitioners can act on positive findings and refer those patients to specialists for diagnosis and treatment, without having to interpret retinal photographs themselves.

The research was based on medical devices approved by the US Food and Drug Administration.

This peer-reviewed journal article is free to access here.

Dr David Lyell and Associate Professor Farah Magrabi are available for interview; please contact Chrissy Clay at chrissy.clay@mq.edu.au


CENTRES RELATED TO THIS NEWS

Centre for Health Informatics

FOR FURTHER INFORMATION, PLEASE CONTACT

Chrissy Clay, Media and Research Outreach Coordinator

Follow us on Twitter @AIHI_MQ
