Open Collections
UBC Theses and Dissertations
Point tracking as a temporal cue for robust myocardium segmentation in echocardiography videos
Khodabakhshian, Bahar
Abstract
Echocardiography is one of the most widely used imaging modalities for the assessment of cardiac function due to its real-time capability, portability, and cost-effectiveness. Accurate delineation of the myocardium in echocardiographic videos is a critical prerequisite for quantitative analysis of cardiac performance, including estimation of ejection fraction, myocardial strain, and regional wall motion abnormalities. However, automated myocardium segmentation in ultrasound remains challenging due to speckle noise, acoustic artifacts, low contrast, non-rigid cardiac motion, and substantial variability in image quality, particularly in point-of-care ultrasound (POCUS) settings. While deep learning has significantly advanced cardiac image segmentation, most existing approaches process frames independently or rely on implicit temporal feature propagation, often leading to temporal inconsistency, segmentation drift, and “flickering” artifacts across video sequences.
This thesis introduces a motion-centric framework for robust myocardium segmentation in echocardiography videos. Instead of implicitly propagating segmentation features through time, we explicitly model myocardial motion using dense point tracking and leverage the resulting trajectories as a temporal cue for segmentation. We propose PointSeg, a transformer-based architecture that integrates an echo-tuned point tracking module, fine-tuned on a synthetic echocardiography dataset, with a tracking-conditioned segmentation decoder. The tracking module employs attention-based mechanisms to robustly follow anatomical landmarks across frames while handling occlusions and ultrasound-specific visibility constraints. The segmentation network fuses sparse, motion-aware point representations with dense image features, enforcing temporal coherence without memory-based accumulation.
Extensive experiments on the public CAMUS dataset and a large-scale private clinical dataset demonstrate that the proposed framework achieves competitive spatial accuracy on high-quality data while significantly improving robustness and temporal stability in low-quality echocardiography videos. Ablation studies further validate the importance of temporal attention mechanisms and motion-conditioned fusion in reducing segmentation drift. Beyond accurate delineation, the generated pixel-level motion trajectories provide physiologically meaningful information that can support downstream functional analyses such as automated strain imaging. By shifting the paradigm from implicit feature propagation to explicit anatomical motion modeling, this thesis establishes point tracking as an effective and interpretable temporal cue for medical video segmentation, advancing the development of motion-aware and clinically reliable AI systems for echocardiographic analysis.
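The abstract does not include implementation details, but the core idea of "fusing sparse, motion-aware point representations with dense image features" can be illustrated with a minimal sketch. Below is a rough NumPy illustration of one plausible form of such a fusion: per-point trajectory descriptors act as motion tokens, each pixel attends to them via single-head cross-attention, and the attended motion context is added back to the dense features. All shapes, names, and the specific attention form here are assumptions for illustration, not the thesis's actual PointSeg design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes (not from the thesis): T frames, N tracked points,
# an H x W feature map with C channels.
T, N, H, W, C = 8, 16, 32, 32, 64

# Sparse motion cue: each point's (x, y) trajectory over T frames,
# flattened and linearly projected into a C-dim motion token.
trajectories = rng.standard_normal((N, T, 2))
point_feats = trajectories.reshape(N, T * 2) @ rng.standard_normal((T * 2, C))

# Dense per-pixel image features for the current frame.
image_feats = rng.standard_normal((H * W, C))

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Cross-attention: every pixel queries the N motion tokens, then the
# attended motion context is fused back residually.
scale = 1.0 / np.sqrt(C)
attn = softmax(image_feats @ point_feats.T * scale)  # (H*W, N)
motion_context = attn @ point_feats                  # (H*W, C)
fused = image_feats + motion_context                 # motion-conditioned features

# A decoder head would map fused features to per-pixel myocardium logits;
# a single linear layer stands in for it here.
logits = fused @ rng.standard_normal((C, 1))
mask = logits.reshape(H, W) > 0
print(fused.shape, mask.shape)
```

Because the motion tokens summarize whole trajectories rather than a single frame, this style of conditioning supplies temporal context without the memory-based feature accumulation the abstract argues against.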
Item Metadata
| Title | Point tracking as a temporal cue for robust myocardium segmentation in echocardiography videos |
| Creator | |
| Supervisor | |
| Publisher | University of British Columbia |
| Date Issued | 2026 |
| Genre | |
| Type | |
| Language | eng |
| Date Available | 2026-04-14 |
| Provider | Vancouver : University of British Columbia Library |
| Rights | Attribution-NonCommercial-NoDerivatives 4.0 International |
| DOI | 10.14288/1.0451917 |
| URI | |
| Degree (Theses) | |
| Program (Theses) | |
| Affiliation | |
| Degree Grantor | University of British Columbia |
| Graduation Date | 2026-05 |
| Campus | |
| Scholarly Level | Graduate |
| Rights URI | |
| Aggregated Source Repository | DSpace |