VessShape: Few-shot 2D blood vessel segmentation by leveraging shape priors from synthetic images

Computer Vision & MultiModal AI
Published on arXiv: 2510.27646v1
Authors

Cesar H. Comin, Wesley N. Galvão

Abstract

Semantic segmentation of blood vessels is an important task in medical image analysis, but its progress is often hindered by the scarcity of large annotated datasets and the poor generalization of models across different imaging modalities. A key aspect is the tendency of Convolutional Neural Networks (CNNs) to learn texture-based features, which limits their performance when applied to new domains with different visual characteristics. We hypothesize that leveraging geometric priors of vessel shapes, such as their tubular and branching nature, can lead to more robust and data-efficient models. To investigate this, we introduce VessShape, a methodology for generating large-scale 2D synthetic datasets designed to instill a shape bias in segmentation models. VessShape images contain procedurally generated tubular geometries combined with a wide variety of foreground and background textures, encouraging models to learn shape cues rather than textures. We demonstrate that a model pre-trained on VessShape images achieves strong few-shot segmentation performance on two real-world datasets from different domains, requiring only four to ten samples for fine-tuning. Furthermore, the model exhibits notable zero-shot capabilities, effectively segmenting vessels in unseen domains without any target-specific training. Our results indicate that pre-training with a strong shape bias can be an effective strategy to overcome data scarcity and improve model generalization in blood vessel segmentation.

Paper Summary

Problem
The paper addresses the difficulty of segmenting blood vessels in medical images with deep learning. Manual annotation of segmentation masks is labor-intensive and requires domain expertise, so large annotated datasets are scarce, which limits both the training of deep learning models and the development of new methods. In addition, models trained on one imaging modality tend to rely on texture cues and generalize poorly to domains with different visual characteristics.
Key Innovation
The researchers introduce VessShape, a methodology for generating large-scale 2D synthetic datasets designed to instill a shape bias in segmentation models. Each VessShape image pairs procedurally generated tubular geometries with a wide variety of foreground and background textures, so that shape, rather than texture, is the reliable cue for segmentation. This contrasts with the texture-based features that Convolutional Neural Networks (CNNs) typically learn.
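The core idea, tubular masks filled with textures unrelated to the mask, can be sketched in a few lines. This is a minimal illustrative toy, not the paper's actual generator: the helper names (`tubular_mask`, `random_texture`, `vessshape_sample`) are hypothetical, the "vessel" is a dilated random polyline rather than a realistic branching structure, and the textures are simple smoothed noise standing in for the varied procedural textures the paper describes.

```python
import numpy as np


def random_texture(shape, rng):
    """Toy texture: random noise smoothed with a box blur (assumption;
    the paper uses a much wider variety of procedural textures)."""
    noise = rng.random(shape)
    k = 5
    padded = np.pad(noise, k, mode="wrap")
    out = np.zeros(shape)
    for dy in range(-k, k + 1):
        for dx in range(-k, k + 1):
            out += padded[k + dy : k + dy + shape[0], k + dx : k + dx + shape[1]]
    return out / (2 * k + 1) ** 2


def tubular_mask(shape, rng, n_points=6, radius=4):
    """Rasterize a random polyline and dilate it into a tube-like mask."""
    h, w = shape
    mask = np.zeros(shape, dtype=bool)
    pts = rng.integers(0, [h, w], size=(n_points, 2))
    for (y0, x0), (y1, x1) in zip(pts[:-1], pts[1:]):
        t = np.linspace(0.0, 1.0, 200)
        ys = (y0 + t * (y1 - y0)).astype(int)
        xs = (x0 + t * (x1 - x0)).astype(int)
        mask[ys, xs] = True
    # Crude square dilation: expand every foreground pixel by `radius`
    tube = np.zeros(shape, dtype=bool)
    for y, x in zip(*np.nonzero(mask)):
        tube[max(0, y - radius) : y + radius + 1,
             max(0, x - radius) : x + radius + 1] = True
    return tube


def vessshape_sample(shape=(128, 128), seed=0):
    """One (image, mask) training pair: foreground and background get
    independent textures, so only shape predicts the mask."""
    rng = np.random.default_rng(seed)
    mask = tubular_mask(shape, rng)
    fg = random_texture(shape, rng)
    bg = random_texture(shape, rng)
    image = np.where(mask, fg, bg)
    return image, mask
```

Because the foreground and background textures are drawn independently for every sample, texture statistics carry no information about the label; a model pre-trained on such pairs is pushed toward shape-based features.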
Practical Impact
The VessShape methodology has the potential to improve the performance of segmentation models on new domains with different visual characteristics. By pre-training models on synthetic datasets with shape priors, researchers can develop more generalizable models that can adapt to various imaging modalities, such as retinal fundus photography and cerebral cortex microscopy. This can lead to more accurate and automated analysis of medical images, which is essential for precision medicine and disease diagnosis.
Analogy / Intuitive Explanation
Think of it like learning to solve jigsaw puzzles. A traditional approach memorizes the texture and patterns of each piece, an approach that generalizes poorly to new puzzles. VessShape instead teaches the model to recognize the underlying shapes and structures of the pieces, which transfer to puzzles with different textures and patterns. In the same way, a shape-biased model can segment blood vessels across medical images whose appearance it has never seen.
Paper Information
Categories: cs.CV, cs.AI
arXiv ID: 2510.27646v1