
Using deep learning to process raw photoacoustic channel data and guide cardiac interventions

A photoacoustic approach to guiding cardiac catheters involves short laser pulses delivered using an optical fiber attached to a catheter, while a special signal transducer picks up the ensuing ultrasound waves generated within the heart. Experiments with live swine demonstrate the potential of combining deep neural networks with photoacoustic imaging. Such models can help robotic arms carefully control the position of cardiac catheters during cardiac interventions. Credit: Journal of Biomedical Optics (2023). DOI: 10.1117/1.JBO.29.S1.S11505

Cardiovascular diseases rank among the leading causes of death worldwide, and cardiac interventions are correspondingly common. For example, cardiac catheter ablation procedures, which are used to treat arrhythmias, number in the tens of thousands per year in the US alone. In these procedures, surgeons insert a thin, flexible tube called a catheter into the femoral vein in the leg and navigate it up to the heart, where the problematic tissue is destroyed using extreme cold or focused radiation.

Even though cardiac catheter-based procedures are considered minimally invasive, the position of the catheter tip must be carefully monitored and controlled to prevent damage to the heart. In most cases, surgeons rely on fluoroscopy to localize and guide the catheter tip. However, this approach exposes both the patient and the surgical team to ionizing radiation, which can lead to problems such as an increased risk of cancer or birth defects.

An alternative method for guiding cardiac catheters involves photoacoustic imaging. In this approach, short laser pulses are delivered using an optical fiber attached to the catheter, while a special signal transducer picks up the ensuing ultrasound waves generated within the heart.

Photoacoustic images generated this way can be used to guide robotic arms that manipulate the cardiac catheter, enhancing precision and minimizing risk. However, the algorithms used to automatically detect photoacoustic sources in these images, which are located close to the catheter tip, are susceptible to errors such as reflection artifacts.
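
For readers curious about what the conventional processing of raw photoacoustic channel data looks like, the sketch below implements a minimal delay-and-sum beamformer in Python/NumPy. This is an illustrative simplification rather than the authors' pipeline, and the array geometry and sampling parameters in the usage example are assumed values.

import numpy as np

def das_beamform(channel_data, element_x, fs, c, image_x, image_z):
    """Minimal delay-and-sum beamformer for photoacoustic channel data.

    channel_data: (n_elements, n_samples) raw received waveforms
    element_x: lateral element positions (m); fs: sampling rate (Hz)
    c: assumed speed of sound (m/s); image_x, image_z: pixel grids (m)
    """
    n_elem, n_samp = channel_data.shape
    image = np.zeros((len(image_z), len(image_x)))
    for iz, z in enumerate(image_z):
        for ix, x in enumerate(image_x):
            # One-way time of flight: the laser-induced source emits the
            # sound itself, so only the pixel-to-element path matters.
            dist = np.sqrt((element_x - x) ** 2 + z ** 2)
            idx = np.round(dist / c * fs).astype(int)
            valid = idx < n_samp
            image[iz, ix] = channel_data[np.flatnonzero(valid), idx[valid]].sum()
    return image

# Usage with assumed parameters: 128 elements, 0.3 mm pitch, 40 MHz sampling
element_x = (np.arange(128) - 63.5) * 0.3e-3
img = das_beamform(np.random.randn(128, 2048), element_x, fs=40e6, c=1540.0,
                   image_x=np.linspace(-0.02, 0.02, 100),
                   image_z=np.linspace(0.005, 0.05, 200))

Reflection artifacts arise when this kind of simple geometric model is violated, for instance when the emitted sound bounces off other structures before reaching the transducer, which makes a phantom source appear at an incorrect depth.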

A research team led by Muyinatu A. Lediju Bell, the John C. Malone Associate Professor in the Whiting School of Engineering at Johns Hopkins University, U.S., has been working toward a solution to this issue. As reported in a new study published in the Journal of Biomedical Optics, they have developed a new approach for cardiac catheter localization by leveraging machine learning.

The researchers proposed using a deep convolutional neural network (CNN) to pinpoint the position of cardiac catheter tips in photoacoustic images. However, CNNs need to be trained on very large datasets to perform reliably, which would require hours of manual image acquisition and annotation. To circumvent this problem, the team turned to simulated data.
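
As a rough illustration of this idea (and not a reproduction of the network described in the paper), the following PyTorch sketch maps a single channel-data frame to a two-dimensional tip coordinate; the architecture, layer sizes, and input dimensions are all assumptions.

import torch
import torch.nn as nn

class TipLocator(nn.Module):
    """Toy CNN that regresses a (lateral, axial) catheter-tip position
    from a single-channel photoacoustic channel-data frame. Layer sizes
    and input shape are illustrative assumptions, not the architecture
    described in the paper."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # predicted (x, z) position in mm

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TipLocator()
frame = torch.randn(1, 1, 256, 128)  # assumed (samples x elements) input frame
print(model(frame).shape)  # torch.Size([1, 2])

Note that the published system also reports detection metrics such as precision and recall, so a pure regression head like this one captures only part of the task.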

"We trained the network with simulated channel data frames which we formatted to accommodate the field of view of the photoacoustic transducer, including multiple , signal amplitudes, and sound speeds, to ensure robustness against channel noise, target amplitude, and sound speed differences," said Bell. To make the CNN more robust, the training dataset also included simulated images with artifacts.

The researchers introduced an additional processing step called "histogram matching" to further enhance the performance of the model. In this step, they automatically modified acquired images so that they resembled the simulated images used to train the CNN.
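
Histogram matching is a standard image-processing operation, available off the shelf in scikit-image, for example. The snippet below shows the general idea of remapping an acquired frame's intensity distribution onto a simulated reference; the arrays are placeholders, and this is not necessarily the exact implementation used in the study.

import numpy as np
from skimage.exposure import match_histograms

# Placeholder arrays: "acquired" stands in for a real channel-data frame,
# "reference" for a simulated frame from the training distribution.
acquired = np.random.rand(256, 128)
reference = np.random.rand(256, 128) ** 2  # deliberately different histogram

# Remap the acquired frame's intensities onto the simulated reference,
# shrinking the domain gap between real and simulated network inputs.
matched = match_histograms(acquired, reference)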

Through ex vivo and in vivo experiments on excised swine hearts and live swine, respectively, the team demonstrated the impressive performance of their deep learning-based approach. The positional errors for the catheter tip were remarkably small, with most falling below the resolution of the photoacoustic signal transducer. The network achieved Euclidean errors of 1.02 ± 0.84 mm for target depths of 20–100 mm, and its detection performance was equally strong, with precision, recall, and F1 scores as high as 100%.
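
For concreteness, the Euclidean error reported above is simply the straight-line distance between the predicted and ground-truth tip coordinates, summarized as mean ± standard deviation; the points in this minimal computation are made up.

import numpy as np

# Hypothetical predicted vs. ground-truth tip positions (mm)
pred = np.array([[41.2, -3.1], [77.8, 5.4]])
truth = np.array([[40.5, -2.6], [78.9, 6.0]])

# Per-frame Euclidean (straight-line) error, reported as mean +/- std
err = np.linalg.norm(pred - truth, axis=1)
print(f"{err.mean():.2f} +/- {err.std():.2f} mm")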

"Our results demonstrate the potential of the proposed method to identify sources in future interventional cardiology and cardiac electrophysiology applications, with the broader potential to replace fluoroscopy during these procedures," said Bell.

Overall, this study could pave the way for safer catheter-based cardiac interventions, helping doctors and patients alike in the fight against heart disease.

More information: Mardava R. Gubbi et al, Deep learning in vivo catheter tip locations for photoacoustic-guided cardiac interventions, Journal of Biomedical Optics (2023). DOI: 10.1117/1.JBO.29.S1.S11505

Journal information: Journal of Biomedical Optics
Provided by SPIE
