May 1, 2024

Cardiologists train large AI model to assess heart structure, function

EchoCLIP workflow. Credit: Nature Medicine (2024). DOI: 10.1038/s41591-024-02959-y

Artificial intelligence experts at Cedars-Sinai and the Smidt Heart Institute created a dataset with more than 1 million echocardiograms, or cardiac ultrasound videos, and their corresponding clinical interpretations. Using this database, they created EchoCLIP, a powerful machine learning algorithm that can "interpret" echocardiogram images and assess key findings.
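As its name suggests, EchoCLIP builds on the CLIP family of vision-language models, which are trained so that the embedding of an image scores higher against its own text description than against any other description in the batch. The sketch below is a rough illustration of that symmetric contrastive objective, not the authors' code; the batch size, embedding dimension, and `temperature` value are illustrative assumptions.

```python
import numpy as np

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric CLIP-style contrastive loss over a batch of embeddings.
    Row i of image_emb and row i of text_emb are assumed to be a
    matched echocardiogram/report pair."""
    # L2-normalize so dot products are cosine similarities
    image_emb = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    text_emb = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    logits = image_emb @ text_emb.T / temperature  # (batch, batch) similarity matrix
    labels = np.arange(len(logits))                # matched pairs lie on the diagonal

    def cross_entropy(logits, labels):
        # numerically stable softmax cross-entropy
        shifted = logits - logits.max(axis=1, keepdims=True)
        log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(labels)), labels].mean()

    # average the image-to-text and text-to-image directions
    return (cross_entropy(logits, labels) + cross_entropy(logits.T, labels)) / 2

rng = np.random.default_rng(0)
img = rng.normal(size=(8, 512))  # 8 video embeddings, 512-dim (illustrative)
txt = rng.normal(size=(8, 512))  # 8 report embeddings
loss = clip_contrastive_loss(img, txt)
```

Minimizing this loss pulls each echocardiogram's embedding toward its own expert interpretation and away from the other interpretations in the batch, which is what lets the trained model score how well a candidate description fits a new study.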

The design and evaluation of EchoCLIP, described in a manuscript published in Nature Medicine, suggest that an EchoCLIP interpretation of a patient's echocardiogram provides clinician-level evaluations of heart function and assessments of past surgeries and devices, and may assist clinicians in identifying patients in need of treatment.

The EchoCLIP foundation model can also identify the same patient across multiple videos, studies and timepoints, as well as recognize clinically important changes in a patient's heart.
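One way an embedding model can match the same patient across studies is by nearest-neighbor search in embedding space: a repeat study of a patient embeds much closer to that patient's earlier studies than to anyone else's. The sketch below illustrates the idea with synthetic embeddings; the helper name and dimensions are hypothetical, not from the paper.

```python
import numpy as np

def most_similar_study(query_emb, library_embs):
    """Return (index, cosine similarity) of the library embedding
    closest to the query embedding."""
    q = query_emb / np.linalg.norm(query_emb)
    lib = library_embs / np.linalg.norm(library_embs, axis=1, keepdims=True)
    sims = lib @ q
    best = int(np.argmax(sims))
    return best, float(sims[best])

rng = np.random.default_rng(42)
library = rng.normal(size=(5, 256))               # embeddings of 5 prior studies
query = library[3] + 0.05 * rng.normal(size=256)  # repeat study of patient 3, slightly changed
match, sim = most_similar_study(query, library)   # match == 3, sim near 1.0
```

Because unrelated high-dimensional embeddings are nearly orthogonal, the repeat study stands out sharply, while the size of the residual difference can hint at how much the heart has changed between studies.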

"To our knowledge, this is the largest model trained on echocardiography images," said corresponding author David Ouyang, MD, a faculty member in the Department of Cardiology in the Smidt Heart Institute and in the Division of Artificial Intelligence in Medicine.

"Many previous AI models for echocardiograms are only trained on tens of thousands of examples. In contrast, EchoCLIP's uniquely strong performance in image interpretation is a result of its training on almost tenfold more data than existing models."

"Our results suggest that large datasets of medical imaging and expert-adjudicated interpretations can serve as the basis for training medical foundation models, which are a form of generative artificial intelligence," Ouyang said.

He said this advanced foundation model could soon help cardiologists assess echocardiograms by generating preliminary assessments of cardiac measurements, identifying changes that happen over time, and flagging common disease states.

The team of investigators built a dataset of 1,032,975 cardiac ultrasound videos and corresponding expert interpretations to develop EchoCLIP.

"Foundation models are one of the newest areas within generative AI, but most models do not have enough medical data to be useful in the health care arena," said Christine M. Albert, MD, MPH, chair of the Department of Cardiology in the Smidt Heart Institute and the Lee and Harold Kapelovitz Distinguished Chair in Cardiology.

Albert, who was not involved in the study, said, "This novel foundation model integrates computer vision interpretation of images with natural language processing to augment cardiologists' interpretation of echocardiograms."

More information: Matthew Christensen et al, Vision–language foundation model for echocardiogram interpretation, Nature Medicine (2024). DOI: 10.1038/s41591-024-02959-y

Journal information: Nature Medicine
