A single multitask AI system (PanEcho) accurately automates comprehensive TTE interpretation across settings.

Background

Echocardiography is central to cardiovascular diagnosis but depends on expert interpretation of multi-view videos, creating bottlenecks and variability. Prior AI efforts typically addressed single views or single tasks. This study developed and validated PanEcho, a unified, multiview, multitask deep learning system to automate complete transthoracic echocardiography (TTE) interpretation.

Patients

Patients undergoing clinically indicated TTE at Yale New Haven Health System (model development and internal temporal validation), plus external cohorts from RVENet+, the public EchoNet-Dynamic and EchoNet-LVH datasets, and an emergency department point-of-care ultrasound population.

Intervention

PanEcho: a multiview, multitask, video-based deep learning system that uses a 2D image encoder, a temporal transformer, and task-specific heads to produce automated study-level reports for 39 echocardiographic tasks (18 diagnostic classifications and 21 measurements) from B-mode and color Doppler videos.
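The three named components (2D image encoder, temporal transformer, task-specific heads) can be illustrated with a shape-level NumPy sketch. Everything here is a hypothetical stand-in, not the published architecture: random projections replace the trained encoder, mean-pooling replaces the temporal transformer, and study-level fusion across views is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative task counts from the study description; all layer
# choices below (FEAT, projections, pooling) are assumptions.
N_CLS, N_REG = 18, 21   # 18 diagnostic classifications, 21 measurements
FEAT = 64               # hypothetical per-frame embedding size

def encode_frames(video):
    """Stand-in for the 2D image encoder: one embedding per frame.
    video: (T, H, W) grayscale clip -> (T, FEAT) frame features."""
    T = video.shape[0]
    flat = video.reshape(T, -1)
    W = rng.standard_normal((flat.shape[1], FEAT)) * 0.01
    return np.tanh(flat @ W)

def aggregate(frame_feats):
    """Stand-in for the temporal transformer: mean-pool over frames."""
    return frame_feats.mean(axis=0)

def task_heads(study_feat):
    """Task-specific linear heads: sigmoid probabilities for the
    classification tasks, raw values for the measurement tasks."""
    Wc = rng.standard_normal((FEAT, N_CLS)) * 0.1
    Wr = rng.standard_normal((FEAT, N_REG)) * 0.1
    probs = 1.0 / (1.0 + np.exp(-(study_feat @ Wc)))
    values = study_feat @ Wr
    return probs, values

clip = rng.standard_normal((16, 32, 32))   # toy 16-frame "video"
probs, values = task_heads(aggregate(encode_frames(clip)))
print(probs.shape, values.shape)           # (18,) (21,)
```

The point of the sketch is the multitask layout: one shared video representation feeding many lightweight heads, so adding a 40th task would mean adding one more head rather than a new model.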

Control

Reference standard: certified echocardiographer’s final report/measurements extracted from clinical systems; task labels defined per guidelines and local practice.

Outcome

Per-task agreement between PanEcho's automated study-level outputs and the reference-standard clinical report, across the 18 diagnostic classification tasks and 21 echocardiographic measurements.

Study Design

Model development with retrospective, multisite validation of a multitask, view-agnostic AI system: internal temporal validation at Yale New Haven Health System (YNHHS), international external validation (RVENet+), public single-view datasets (EchoNet-Dynamic, EchoNet-LVH), and an emergency department point-of-care ultrasound (POCUS) cohort. Reporting aligned with TRIPOD+AI.

Level of Evidence

Diagnostic accuracy study with external validation; retrospective design. Oxford CEBM (Centre for Evidence-Based Medicine): approximately Level 3 (non-consecutive/retrospective diagnostic cohort with a consistently applied reference standard).

Follow up period

None (cross-sectional imaging assessments; no longitudinal clinical follow-up).

Results

Primary outcomes

Secondary outcomes

Limitations

Funding

Citation

Holste G, Oikonomou EK, Tokodi M, Kovács A, Wang Z, Khera R. Complete AI-Enabled Echocardiography Interpretation With Multitask Deep Learning. JAMA. 2025;334(4):306-318. doi:10.1001/jama.2025.8731. Published online June 23, 2025.