What do we need to build explainable AI systems for the medical domain?



Machine Learning, or ML, has vast applications in the medical domain, such as medical education, research, and clinical decision-making. One of the arguments made to promote ML in the medical domain is that all patients should have access to the "best Doctor in the world". Theoretically, ML could become this Doctor.


The fact that this 'Doctor' improves with every patient is all the more fascinating. However, the inability of ML algorithms to explain their results to human experts has limited their adoption in the medical domain.



UB researchers have leveraged the power of digital pathology and computational modeling to develop a new approach to detecting and quantifying podocytes (shown above), a specialized type of cell in the kidney that undergoes damaging changes during early-stage kidney disease. Image credit: NIH, National Institute of Diabetes and Digestive and Kidney Diseases.

In their research paper, Andreas Holzinger, Chris Biemann, Constantinos S. Pattichis, and Douglas B. Kell discuss explainable AI systems targeted at the medical domain. The paper, titled "What do we need to build explainable AI systems for the medical domain?", forms the basis of the following text.


Importance of this research

If medical professionals can understand why an ML model made a specific decision for a particular patient, it would improve the adoption of these algorithms in the medical domain.


Moreover, the new European General Data Protection Regulation (GDPR 2016/679 and ISO/IEC 27001) mandates that it must be possible to make results re-traceable on demand. Without explainable AI, existing ML approaches would be difficult to use in this setting.
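As a hedged illustration only (not something from the paper), "re-traceable on demand" could be supported by logging every prediction together with its inputs and the model version, so that any individual decision can be looked up and reconstructed later. All names below are hypothetical.

```python
import hashlib
import json
import time

def log_prediction(model_version, features, prediction, log_path="audit_log.jsonl"):
    """Append one prediction record so the decision can be re-traced on demand.

    A hypothetical audit-trail helper: it stores a hash of the inputs, the raw
    inputs, the model version, and the output, together with a timestamp.
    """
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "input_hash": hashlib.sha256(json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "features": features,
        "prediction": prediction,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Example: record a single (made-up) decision for a patient case
log_prediction("risk-model-v1", {"age": 63, "creatinine": 1.4}, {"risk": 0.27})
```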


What is explainable AI?

In the medical domain, explainable AI means that medical professionals should be able to understand how and why a machine decision has been made.


Challenge with explainable ML Models

Often, the best-performing methods (e.g., deep learning) are the least transparent, while the methods that provide clear explanations (e.g., decision trees) are the least accurate.
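To make this trade-off concrete, here is a minimal sketch, assuming scikit-learn and a synthetic dataset (not data from the paper): a shallow decision tree can be printed as explicit if/else rules, while a neural network of comparable or better accuracy cannot be read off in the same way.

```python
# A minimal sketch of the accuracy/transparency trade-off, assuming scikit-learn
# and a synthetic classification task (not data from the paper).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0).fit(X_tr, y_tr)

print("decision tree accuracy:", tree.score(X_te, y_te))
print("neural network accuracy:", mlp.score(X_te, y_te))

# The tree can be printed as explicit if/else rules; the MLP's weights cannot
# be inspected in the same human-readable way.
print(export_text(tree))
```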


Approaches to AI explainability can be broadly classified into the two categories below:

1. Post-hoc systems: These aim to provide local explanations for a specific decision and make it reproducible on demand (a sketch of a post-hoc explanation follows this list).

2. Ante-hoc systems: Here, explainability is built in before the event in question occurs; hence, these systems are also referred to as explainable by design.
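As an illustration of a post-hoc, local explanation, here is a simplified LIME-style sketch (an illustrative assumption, not the authors' method): perturb a single input, query the black-box model in that neighbourhood, and fit a small linear surrogate whose coefficients explain that one decision.

```python
# A simplified LIME-style local surrogate (an illustration, not the authors' method):
# perturb one input, query the black-box model, and fit a linear model locally.
import numpy as np
from sklearn.linear_model import Ridge

def local_explanation(black_box_predict, x, n_samples=500, scale=0.1, seed=0):
    """Return per-feature weights that approximate the model around the point x."""
    rng = np.random.default_rng(seed)
    perturbed = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    preds = black_box_predict(perturbed)   # black-box outputs on the neighbourhood
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds)
    return surrogate.coef_                 # local importance of each feature

# Hypothetical usage with a trained classifier `clf` and test set `X_test`:
# weights = local_explanation(lambda z: clf.predict_proba(z)[:, 1], X_test[0])
```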




In the research paper, the researchers explain how a neural network works and then walk through an example of interpreting a deep neural network (DNN). They also discuss explainable models for image data, for images combined with *omics data, and for text.
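One widely used way to interpret a trained DNN's decision on image data is a gradient-based saliency map. The sketch below assumes PyTorch and an already-trained image classifier named `model` (a placeholder, not a model from the paper).

```python
# A minimal gradient-saliency sketch, assuming PyTorch and a trained image
# classifier `model` (a placeholder, not a model from the paper).
import torch

def saliency_map(model, image, target_class):
    """Return |d score / d pixel| for one image, highlighting influential pixels."""
    model.eval()
    x = image.unsqueeze(0).requires_grad_(True)   # add batch dimension, track gradients
    score = model(x)[0, target_class]             # logit of the class to explain
    score.backward()                              # gradients w.r.t. the input pixels
    return x.grad.abs().squeeze(0).max(dim=0).values  # collapse colour channels

# Example (shapes only): saliency = saliency_map(model, torch.rand(3, 224, 224), target_class=1)
```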




In the words of the researchers: "In the medical domain, a large amount of knowledge is represented in textual form, and the written text of the medical reports is legally binding, unlike images or *omics data. Here it is crucial to underpin machine output with reasons that are human-verifiable and where high precision is imperative for supporting, not distracting, the medical experts. The only way forward seems to be the integration of both knowledge-based and neural approaches to combine the interpretability of the former with the high efficiency of the latter. Promising for explainable-AI in the medical domain seems to be the use of hybrid distributional models that combine sparse graph-based representations with dense vector representations and link them to lexical resources and knowledge bases. Last but not least we emphasize that successful explainable-AI systems need effective user interfaces, fostering new strategies for presenting human-understandable explanations."
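To give a rough feel for the "hybrid distributional" idea in the quote, here is a toy sketch (every component is hypothetical): interpretable, sparse indicators derived from a small medical lexicon are concatenated with a dense vector that stands in for a learned embedding.

```python
# A rough sketch of a hybrid text representation (all components hypothetical):
# sparse, lexicon-derived indicators concatenated with a dense embedding vector.
import numpy as np

MEDICAL_LEXICON = ["podocyte", "creatinine", "proteinuria", "biopsy"]  # toy stand-in for a knowledge base

def sparse_lexicon_features(text):
    """Interpretable 0/1 indicators: does the report mention each lexicon term?"""
    tokens = text.lower().split()
    return np.array([1.0 if term in tokens else 0.0 for term in MEDICAL_LEXICON])

def dense_embedding(text, dim=8):
    """Stand-in for a learned sentence embedding (random here, purely illustrative)."""
    rng = np.random.default_rng(len(text))
    return rng.normal(size=dim)

def hybrid_representation(text):
    # The sparse part stays human-readable; the dense part carries distributional signal.
    return np.concatenate([sparse_lexicon_features(text), dense_embedding(text)])

report = "biopsy shows reduced podocyte density and elevated creatinine"
print(hybrid_representation(report).shape)  # (12,) = 4 lexicon indicators + 8 dense dimensions
```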


Conclusion

ML offers immense possibilities for improving the quality of healthcare in the medical domain. However, the lack of explainability limits its application. In their research paper, the researchers discuss what we need to build explainable AI systems.