Explainable AI in healthcare

Centre for Health Informatics

Research stream

Patient Safety Informatics

Image source:  Alexa Steinbrück / Better Images of AI / Explainable AI / CC-BY 4.0

Project members

Dr Ying Wang
Dr David Lyell
Professor Farah Magrabi
Professor Enrico Coiera

Project contact

Dr Ying Wang
E: ying.wang@mq.edu.au

Project main description

The current resurgence of artificial intelligence (AI) in healthcare is largely driven by developments in deep learning, methods that learn from vast amounts of data by loosely simulating the behaviour of the human brain. In healthcare, AI promises to transform clinical decision-making, as it has the potential to harness the vast amounts of genomic, biomarker and phenotype data being generated across the health system to improve the safety and quality of care decisions.

Today, AI models have been successfully incorporated into a variety of clinical decision support systems that detect findings in medical imaging, suggest diagnoses and recommend treatments in data-intensive specialties such as radiology, pathology and ophthalmology. However, models developed with deep learning methods are black boxes: their internal workings are not understandable by humans. This lack of explainability continues to attract criticism. Although explainable AI (XAI) raises a range of legal, ethical and clinical issues, technical feasibility is a fundamental challenge.
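
The kind of post-hoc, model-agnostic explanation the project investigates can be illustrated with a toy example. The sketch below is a deliberately simplified stand-in for methods such as LIME (reference 2 below): it treats a small text classifier as a black box and scores each word of a report by how much removing it changes the predicted probability. The classifier, the toy incident-report snippets and the occlusion scoring are illustrative assumptions, not the project's actual models or data.

# Minimal sketch of a post-hoc, model-agnostic explanation for a text classifier.
# Everything here (data, model, occlusion scoring) is illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: short free-text snippets labelled by incident type.
texts = [
    "patient given wrong dose of medication",
    "medication chart missing allergy information",
    "patient fell while transferring from bed",
    "unwitnessed fall in bathroom overnight",
]
labels = ["medication", "medication", "fall", "fall"]

# The classifier is treated as a black box: only its predict_proba interface is used.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def occlusion_explanation(text, model):
    """Score each word by how much removing it lowers the top predicted probability."""
    words = text.split()
    base = model.predict_proba([text])[0]
    top = int(np.argmax(base))
    scores = []
    for i, word in enumerate(words):
        reduced = " ".join(words[:i] + words[i + 1:])
        drop = base[top] - model.predict_proba([reduced])[0][top]
        scores.append((word, drop))
    return model.classes_[top], sorted(scores, key=lambda s: -s[1])

predicted, contributions = occlusion_explanation(
    "patient slipped and fell near the nurses station", model)
print(predicted)
for word, weight in contributions:
    print(f"{word:>12s}  {weight:+.3f}")

Words with the largest positive scores are those the model relied on most for its prediction. An independent explainer of this kind needs no access to the model's internal weights, which is what makes it applicable to deep learning black boxes.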

The goals of this project are:

  • To identify requirements for XAI in healthcare.
  • To develop tools for choosing and validating XAI techniques to make healthcare AI systems accountable (a simple example of such a validation check is sketched after this list).
  • To develop and evaluate independent explainers for deep learning models in healthcare.
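
One way to sanity-check an explanation technique is to apply it to a transparent model whose internal weights are known and measure how well the explainer recovers them. The sketch below is a minimal illustration of that idea, assuming scikit-learn and SciPy, with synthetic tabular data standing in for structured clinical features rather than any project dataset.

# Minimal sketch of validating an explanation method against a transparent model.
# The synthetic data and the choice of permutation importance are illustrative assumptions.
import numpy as np
from scipy.stats import spearmanr
from sklearn.datasets import make_classification
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic tabular data standing in for structured clinical features.
X, y = make_classification(n_samples=500, n_features=8, n_informative=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Glass-box model: the magnitude of its coefficients gives a reference importance ranking.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
reference_importance = np.abs(model.coef_[0])

# Explanation method under test: permutation importance treats the model as a black box.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
estimated_importance = result.importances_mean

# Rank agreement between the two is one simple fidelity measure for the explainer.
rho, _ = spearmanr(reference_importance, estimated_importance)
print(f"Spearman correlation between model weights and explainer scores: {rho:.2f}")

A high rank correlation suggests the explainer faithfully reflects what this model actually uses. Checks of this kind are harder for deep learning models, where no transparent reference is available, which is one motivation for the tools this project aims to develop.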

References

  1. Holzinger, A., Biemann, C., Pattichis, C.S. and Kell, D.B., 2017. What do we need to build explainable AI systems for the medical domain? arXiv preprint arXiv:1712.09923.
  2. Ribeiro, M.T., Singh, S. and Guestrin, C., 2016. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM.
  3. Wang, Y., Coiera, E. and Magrabi, F., 2019. Using convolutional neural networks to identify patient safety incident reports by type and severity. Journal of the American Medical Informatics Association, 26(12), pp.1600-1608.
  4. Wang, Y., Coiera, E., Runciman, W. and Magrabi, F., 2017. Using multiclass classification to automate the identification of patient safety incident reports by type and severity. BMC Medical Informatics and Decision Making, 17(1), pp.1-12.

Project sponsors

The Australian National Health and Medical Research Council Centre for Research Excellence in Digital Health (APP1134919)

Collaborative partners

Professor Bill Runciman, University of South Australia and Australian Patient Safety Foundation

Project status

Current

Centres related to this project

Centre for Health Informatics
