How machine learning supports clinician decision making

Dr David Lyell

AI is changing the way clinicians work

Our research explores how clinicians work with and use artificial intelligence to improve the safety and effectiveness of healthcare provided to patients.

Project sponsor: The National Health and Medical Research Council Centre for Research Excellence in Digital Health (APP1134919)


About the project

Central to the clinician-AI relationship is how AI fits into clinical tasks such as diagnosing disease, what AI contributes to those tasks, and how AI changes the way clinicians work.

Patient safety is at risk if AI does not properly support the tasks clinicians perform and the way they work. Likewise, clinicians need training in how to work with AI safely and effectively, including an understanding of its strengths and limitations – for example, many AI systems are trained only on data from adults and are therefore not suitable for use with children.

Effective use of AI depends on knowing when its output can be relied on without review and when it requires human oversight. Regardless of AI input, clinicians remain accountable for diagnoses and treatments.

Project goals

This project aims to:

  • examine how and to what extent medical devices using machine learning support clinician decision making
  • develop a framework for classifying clinical AI by level of autonomy
  • characterise the types of technical problems and human factors issues that contribute to AI incidents
  • develop requirements for safe implementation and use of AI.

Based on analyses of AI-based medical devices approved by the US Food and Drug Administration, we propose a framework for classifying devices into three different levels of autonomy:

  • Assistive devices – characterised by an overlap between what the device and the clinician contribute. In breast cancer screening, for example, both identify possible cancers; however, clinicians are responsible for deciding what should be followed up and must therefore decide whether they agree with the cancers marked by the AI.
  • Autonomous information – characterised by a separation between what the device and the clinician contribute to the activity or decision. An example is an ECG that monitors heart activity, interprets the results and provides information, such as a quantified heart rhythm, which clinicians can rely on to inform decisions about diagnosis or treatment.
  • Autonomous decision – where the device provides the decision on a clinical task, which can then be enacted by the device or the clinician. An example is the IDx-DR system in the US, which detects diabetic retinopathy from retinal photographs. General practitioners can act on positive findings and refer patients to specialists for diagnosis and treatment without having to interpret the retinal photographs themselves.
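
As a rough illustration only – not part of the project's published framework – the sketch below shows one way the three autonomy levels could be encoded in software, for instance when labelling a dataset of approved devices. The enum names, labels and example device descriptions are assumptions drawn from the list above.

```python
from enum import Enum


class AutonomyLevel(Enum):
    """Illustrative encoding of the three proposed autonomy levels (assumed names)."""
    ASSISTIVE = "assistive"                  # device and clinician overlap on the same task
    AUTONOMOUS_INFORMATION = "information"   # device supplies information the clinician relies on
    AUTONOMOUS_DECISION = "decision"         # device supplies the clinical decision itself


# Hypothetical labelling of the example devices described above.
EXAMPLE_DEVICES = {
    "Breast cancer screening support": AutonomyLevel.ASSISTIVE,
    "ECG rhythm interpretation": AutonomyLevel.AUTONOMOUS_INFORMATION,
    "IDx-DR diabetic retinopathy screening": AutonomyLevel.AUTONOMOUS_DECISION,
}

for device, level in EXAMPLE_DEVICES.items():
    print(f"{device}: {level.value}")
```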

This framework will inform appropriate training, ensuring patient safety as AI-enabled devices become more commonplace in medical decision-making for diagnosis and treatment.

Project lead: Professor Farah Magrabi