Ovarian Ultrasound Image Segmentation with Limited Training Data
Abstract
Ultrasound imaging is pivotal for ovarian tumor diagnosis, yet it poses significant segmentation challenges due to severe speckle noise, low contrast, and high inter-patient morphological variability. These challenges are further exacerbated by the limited availability of annotated medical data, making few-shot segmentation an appealing solution. Existing few-shot segmentation models like UniverSeg offer a promising direction for such limited-data scenarios but suffer from performance instability caused by stochastic support set selection. To address this, we propose a novel CLIP-guided support selection strategy that leverages the semantic embedding space of the Contrastive Language–Image Pre-training (CLIP) model to retrieve morphologically consistent support samples for each query. By replacing random sampling with a similarity-based retrieval mechanism, our method ensures better structural alignment between support and query images. Extensive experiments on two ovarian ultrasound datasets, OvaTUS and OTU_2D, demonstrate that our approach consistently outperforms the baseline UniverSeg and other few-shot methods. Specifically, on the OvaTUS dataset, our method achieves a Dice Similarity Coefficient (DSC) of 75.8% and Intersection over Union (IoU) of 64.9%, surpassing the random selection baseline by 2.1% and 2.7%, respectively. Furthermore, our approach shows superior robustness in extreme few-shot settings (N = 1), improving the Dice score by over 8% compared to the baseline. Code will be publicly released upon acceptance.
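The core idea — replacing random support-set sampling with retrieval of the most semantically similar samples in a CLIP embedding space — can be sketched as a cosine-similarity nearest-neighbor lookup. The snippet below is an illustrative sketch, not the authors' released code: it assumes query and pool embeddings have already been produced by a CLIP image encoder, and the function name and interface are hypothetical.

```python
import numpy as np

def select_support(query_emb: np.ndarray, pool_embs: np.ndarray, n: int) -> np.ndarray:
    """Return indices of the n pool samples most similar to the query.

    query_emb: (d,) CLIP embedding of the query image (hypothetical input).
    pool_embs: (m, d) CLIP embeddings of the candidate support images.
    Similarity is cosine, so all embeddings are L2-normalized first.
    """
    q = query_emb / np.linalg.norm(query_emb)
    p = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    sims = p @ q                      # cosine similarity of each candidate to the query
    return np.argsort(-sims)[:n]      # indices of the top-n most similar candidates

# Toy usage with made-up 2-D "embeddings":
query = np.array([1.0, 0.0])
pool = np.array([[0.0, 1.0],   # orthogonal to query
                 [1.0, 1.0],   # partially aligned
                 [1.0, 0.0]])  # identical direction
idx = select_support(query, pool, n=2)
```

The selected indices would then index into the image/mask pool to build the support set passed to UniverSeg, in place of a random draw.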
Keywords
Few-shot segmentation, ovarian ultrasound, UniverSeg, CLIP, medical image analysis.
Article Details

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.