Explainable AI system enables more accurate diagnosis of fetal congenital heart disease

Researchers from the RIKEN Center for Advanced Intelligence Project (AIP) and their colleagues tested AI-enhanced diagnosis of fetal congenital heart disease in a clinical setting. Hospital residents and fellows made more accurate diagnoses when they used a graphical interface that represented AI analysis of fetal heart ultrasound screening videos. The new system could help train doctors and make diagnoses when specialists aren’t available. The report recently appeared in the scientific journal Biomedicines.

Congenital heart problems account for nearly 20% of newborn deaths. Although early diagnosis before birth is known to improve the chances of survival, it is extremely difficult because diagnoses must be based entirely on ultrasound videos, in which subtle abnormalities may be masked by fetal and probe movements. Experts can screen such images reliably, but in practice the vast majority of routine ultrasounds are reviewed only by the residents or fellows on hand. To address this problem, researchers led by Masaaki Komatsu at RIKEN AIP developed an AI that learns what a normal fetal heart looks like after being exposed to thousands of ultrasound images. It can then make diagnoses by classifying ultrasound videos as normal or abnormal.
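The article does not describe the model's architecture, so the following is only a hypothetical toy sketch of the general idea: fit a statistical profile of "normal" frames, score new frames by their distance from that profile, and aggregate per-frame scores into a video-level normal/abnormal call. The feature vectors, threshold, and scoring rule here are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for "learn what normal looks like": fit per-feature
# mean and spread on thousands of normal-frame embeddings (random here).
normal_frames = rng.normal(loc=0.0, scale=1.0, size=(5000, 16))
mu = normal_frames.mean(axis=0)
sigma = normal_frames.std(axis=0) + 1e-8

def frame_score(frame):
    """Anomaly score: mean squared z-score distance from the normal profile."""
    z = (frame - mu) / sigma
    return float(np.mean(z ** 2))

def classify_video(frames, threshold=2.0):
    """Average per-frame scores; call the video 'abnormal' above the threshold."""
    score = float(np.mean([frame_score(f) for f in frames]))
    return score, ("abnormal" if score > threshold else "normal")

# Synthetic demo videos: one drawn from the normal distribution, one shifted.
normal_video = rng.normal(0.0, 1.0, size=(30, 16))
abnormal_video = rng.normal(3.0, 1.0, size=(30, 16))

print(classify_video(normal_video))    # low score -> "normal"
print(classify_video(abnormal_video))  # high score -> "abnormal"
```

The numerical score returned here also mirrors a detail reported later in the article: the raw output of such a system is simply a number, which is part of why a separate explanatory layer is needed.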

The system worked well in the lab, but getting it to work in a real environment poses a whole new set of challenges.

“It is difficult to establish a relationship of trust with healthcare professionals when the decisions made by AI take place in a ‘black box’ and cannot be understood.”

Masaaki Komatsu, RIKEN AIP

The new study tested an improved explainable AI system that lets doctors view a graph representing the AI’s decisions. The graphs themselves are generated by a second round of deep learning, which improved the AI’s performance and allows doctors to see whether abnormalities are related to the heart, the blood vessels, or other features.
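How such a graph might convey a decision can be illustrated with a minimal sketch. The anatomical categories and scores below are invented, not taken from the paper, and a simple text bar chart stands in for the system's actual deep-learning-generated graphs.

```python
# Hypothetical per-structure anomaly contributions (made-up values), of the
# kind an explainable-AI layer might surface so an examiner can see which
# anatomy drives the overall decision.
scores = {
    "heart chambers": 0.82,
    "great vessels": 0.15,
    "other features": 0.08,
}

def render_chart(scores, width=40):
    """Render contribution scores as a text bar chart, largest first."""
    lines = []
    for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
        bar = "#" * round(s * width)
        lines.append(f"{name:<16} {bar} {s:.2f}")
    return "\n".join(lines)

print(render_chart(scores))
```

In this toy rendering, an examiner would immediately see that the flagged abnormality is attributed mainly to the heart chambers rather than the vessels.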

Experts, fellows, and residents received the same series of ultrasound videos and were asked to provide diagnoses twice: once without the explainable AI and once aided by the graphical representation of the AI’s decision. Examiners were not given the AI’s actual output, which is simply a numerical value. The researchers found that every group of physicians made more correct diagnoses when using the new AI-derived graphs. “This is the first demonstration in which examiners of all experience levels have been able to improve their ability to screen ultrasound videos for fetal heart abnormalities using explainable AI,” says Komatsu.

A closer examination of the results yielded some surprising conclusions. The less experienced examiners, fellows and residents, became 7% and 13% more accurate, respectively, with the help of the AI. But while experts and fellows were able to make good use of the AI, residents were still about 12% less accurate than the AI alone. In terms of clinical application, then, the AI was most useful for fellows, who happen to be the ones who usually perform fetal heart ultrasound screening in the hospital.

“Our study suggests that even with widespread use of AI assistance, an examiner’s expertise will still be a key factor in future medical examinations,” Komatsu says. “Beyond future clinical applications, our results show that this technology could be most beneficial when also used in residency training and education.”

Journal reference:

Sakai, A., et al. (2022). Medical Professional Enhancement Using Explainable Artificial Intelligence in Fetal Cardiac Ultrasound Screening. Biomedicines, 10(3), 551. doi.org/10.3390/biomedicines10030551