February 16, 2022


Audiovisual data differentiate schizophrenia, bipolar disorders


(HealthDay)—Audiovisual data can differentiate between schizophrenia spectrum disorders and bipolar disorder, according to a study published in the January issue of JMIR Mental Health.

Michael L. Birnbaum, M.D., from the Zucker Hillside Hospital in Glen Oaks, New York, and colleagues examined whether reliable inferences (psychiatric signs, symptoms, and diagnoses) can be extracted from audiovisual patterns, using data from 89 participants: 41 individuals with schizophrenia spectrum disorders, 21 individuals with bipolar disorder, and 27 healthy volunteers.

Machine learning models were developed based on acoustic and facial movement features extracted from participant interviews. Model performance was assessed using the area under the receiver operating characteristic curve (AUROC) in fivefold cross-validation.
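For readers unfamiliar with this evaluation setup, the following is a minimal sketch, assuming a generic scikit-learn classifier and placeholder feature and label arrays standing in for the study's acoustic and facial movement features; it is not the authors' actual pipeline, only an illustration of scoring a model with AUROC under fivefold cross-validation.

```python
# Minimal illustrative sketch (not the study's code): AUROC under fivefold cross-validation.
# X stands in for acoustic/facial-movement features; y for binary diagnosis labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(62, 20))       # placeholder feature matrix (62 participants, 20 features)
y = rng.integers(0, 2, size=62)     # placeholder labels (e.g., schizophrenia spectrum vs. bipolar)

model = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

# Mean AUROC across the five held-out folds
auroc = cross_val_score(model, X, y, cv=cv, scoring="roc_auc").mean()
print(f"Fivefold cross-validated AUROC: {auroc:.2f}")
```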

The researchers found that when aggregating face and voice features, the model successfully differentiated between schizophrenia spectrum disorders and bipolar disorder (AUROC, 0.73). For men, the strongest signal came from facial action units, including the cheek-raising and chin-raising muscles (AUROCs, 0.64 and 0.74, respectively). For women, the strongest signal came from vocal features, including energy in the 1 to 4 kHz band and spectral harmonicity (AUROCs, 0.80 and 0.78, respectively).

For both men and women, the lip corner-pulling muscle signal discriminated between diagnoses (AUROCs, 0.61 and 0.62, respectively). Certain psychiatric signs and symptoms were also successfully inferred, including blunted affect, avolition, lack of vocal inflection, asociality, and worthlessness (AUROCs, 0.81, 0.72, 0.71, 0.63, and 0.61, respectively).

"Integrating audiovisual data could change the way mental health clinicians diagnose and monitor patients, enabling faster, more accurate identification of illness and enhancing a personalized approach to medicine," the authors write.

More information: Michael L Birnbaum et al, Acoustic and Facial Features From Clinical Interviews for Machine Learning–Based Psychiatric Diagnosis: Algorithm Development, JMIR Mental Health (2022). DOI: 10.2196/24699
