Diagnosis by voice: this new AI technology could work wonders

Medicine is a field that depends heavily on a good dialogue between doctor and patient. Understanding the patient’s problems is essential to diagnose diseases. The main diagnostic tools used to confirm the presence of problems are body fluid tests as well as imaging such as MRIs and X-rays. However, there is a possible new route that could be useful in diagnosing many diseases. Researchers are working to develop an artificial intelligence system that can use voice as a diagnostic tool to identify diseases.

How can voice be used as a diagnostic tool?

Diseases can affect organs such as the heart, lungs, brain, muscles or vocal cords, which can in turn alter an individual’s voice. Voice analysis using artificial intelligence therefore opens up new opportunities for healthcare, from the use of voice biomarkers for diagnosis and risk prediction to the remote monitoring of various clinical outcomes and symptoms. With this in mind, there are many possible health-related applications of the voice. This rapidly changing environment holds enormous potential from research, patient and clinical perspectives.

The voice, a complex network of sounds produced by our vocal cords, carries a wealth of information and plays a fundamental role in social interaction, allowing us to convey our emotions, fears, feelings and excitement by modulating tone or pitch. Virtual and voice assistants on smartphones and in smart home devices such as smart speakers are now commonplace and have paved the way for widespread voice-enabled search. The evolution of voice technology, audio signal analysis, and natural language processing and understanding methods has opened up many potential applications of voice, such as the identification of voice biomarkers for the diagnosis, classification or remote monitoring of patients, or for improving clinical practice.

Artificial intelligence to use voice as a diagnostic tool

Researchers are currently developing artificial intelligence-based tools that could eventually diagnose serious illnesses. They target everything from Alzheimer’s disease to cancer.

The National Institutes of Health-funded project, announced Tuesday, aims to turn the human voice into a biomarker of disease, much like blood or body temperature.

According to the National Institutes of Health website, the agency will invest $130 million over four years, pending the availability of funds, to accelerate the widespread use of artificial intelligence (AI) by the biomedical and behavioral research communities. The NIH Common Fund’s Bridge to Artificial Intelligence (Bridge2AI) program brings together team members from diverse disciplines and backgrounds to generate richly detailed tools, resources, and data that are amenable to AI approaches. At the same time, the program will work to ensure that its tools and data do not perpetuate inequities or ethical problems that may arise during data collection and analysis.

According to Olivier Elemento, a professor at the Institute for Computational Biomedicine at Weill Cornell Medicine and one of the project’s principal investigators, the great advantage of voice data is that it is one of the cheapest types of data that can be collected from people. It is also highly accessible and can be retrieved easily from any patient, which makes it well suited to building a large database.

According to Yaël Bensoussan, an otolaryngologist at USF Health and the project’s other principal investigator, while there have been similar efforts in the past, most were too small to be effective, and the lack of a proper database was a major limiting factor. Because this is a relatively new area of study, researchers have yet to determine best practices for collecting this kind of data; the project aims to establish those data collection standards.

The research team will start by creating an app to collect voice data from participants with conditions such as vocal cord paralysis, Alzheimer’s disease, Parkinson’s disease, depression, pneumonia and autism. All voice collection will be supervised by a clinician. “So, for example, someone who has Parkinson’s disease – their voice may be lower and their way of speaking is slower,” says Bensoussan. Participants will be asked to produce sounds, read sentences, and read full texts through the app.

According to some doctors, it is even possible to tell that a patient has brain metastases by the way they speak.

How will voice data for diagnosis be protected?

The research team is collaborating with medical AI company Owkin to build and train the project’s AI models. Under Owkin’s framework, the data collected from patients stays at the center where it was gathered, while the AI model moves between institutions. The model trains separately on each dataset, and the results of those trainings are then sent back to a central location, where they are merged. This gives voice data an extra layer of privacy protection. A team of bioethicists working on the project will study the ethical and legal implications of a voice database and of voice-based diagnostics. They will consider, for example, whether voice is protected by the Health Insurance Portability and Accountability Act (HIPAA) and whether patients own their own voice data.
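The federated setup described above — models training locally where the data lives, with only the resulting model updates merged centrally — can be sketched in a few lines. This is a minimal illustration of simple federated averaging with a toy logistic-regression model; it is not Owkin’s actual framework, and all function names, parameters, and the synthetic data are invented for the example.

```python
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=50):
    """One site: gradient steps on data that never leaves the institution."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1 / (1 + np.exp(-X @ w))      # logistic predictions
        grad = X.T @ (preds - y) / len(y)     # gradient on local data only
        w -= lr * grad
    return w

def federated_round(weights, site_datasets):
    """Central server: send the model out, average the returned updates."""
    updates = [local_train(weights, X, y) for X, y in site_datasets]
    return np.mean(updates, axis=0)

# Three hypothetical hospitals, each with its own private dataset.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
sites = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = (X @ true_w + rng.normal(scale=0.1, size=100) > 0).astype(float)
    sites.append((X, y))

# Only weights cross institutional boundaries, never raw patient data.
w = np.zeros(2)
for _ in range(10):
    w = federated_round(w, sites)
```

The key property is that `federated_round` only ever sees model weights: each site’s raw recordings (here, `X` and `y`) stay local, which is what provides the extra privacy layer the article describes.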

Can voice data already be used for diagnosis?

Voice data is already useful in the diagnosis and treatment of voice disorders. It also shows immense potential for mental health conditions such as depression, anxiety and PTSD. Collecting voice samples from veterans and analyzing vocal cues such as pitch, tone, rhythm, frequency and volume can help flag signs of invisible injuries like PTSD, traumatic brain injury (TBI) and depression. Using machine learning to exploit these voice characteristics, algorithms identify vocal patterns in people with these conditions and compare them with voice samples from healthy people.
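As a rough illustration of what extracting such vocal cues involves, the sketch below computes three simple features — pitch, volume, and zero-crossing rate — from a synthetic waveform using plain NumPy. Real diagnostic systems use dedicated audio libraries and far richer feature sets; the function names and the test signal here are invented for the example.

```python
import numpy as np

def pitch_autocorr(signal, sr, fmin=50, fmax=400):
    """Estimate fundamental frequency (pitch) via autocorrelation:
    the strongest repeat lag in the waveform gives the vocal period."""
    signal = signal - signal.mean()
    corr = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)   # restrict to a plausible voice range
    lag = lo + np.argmax(corr[lo:hi])
    return sr / lag

def voice_features(signal, sr):
    """A few of the vocal cues mentioned above, for one audio frame."""
    return {
        "pitch_hz": pitch_autocorr(signal, sr),
        "rms_volume": float(np.sqrt(np.mean(signal ** 2))),
        "zero_cross_rate": float(np.mean(np.abs(np.diff(np.sign(signal))) > 0)),
    }

# Synthetic "voice": half a second of a ~120 Hz tone (a low speaking pitch).
sr = 8000
t = np.arange(sr // 2) / sr
voiced = 0.5 * np.sin(2 * np.pi * 120 * t)
feats = voice_features(voiced, sr)
```

Feature vectors like this, computed per frame over a recording, are what a classifier would then compare between patient and healthy-control samples.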