BMS^3: Bayesian Modeling Based SwinUNet Segmentation on Self-distillation Architecture


Not published yet…

Medical image segmentation is crucial for accurate disease detection and diagnosis, but current deep learning methods often struggle to generalize across diverse imaging systems. This paper introduces Bayesian Modeling Based SwinUNet Segmentation on Self-distillation Architecture (BMS^3), a novel approach that addresses the challenge of domain invariance in medical image segmentation. By integrating Bayesian modeling for feature extraction with the efficient Swin Transformer-based U-Net architecture, BMS^3 decomposes images into domain-invariant shape components and domain-specific appearance attributes. This decomposition enhances generalization to unseen data and improves performance on cross-domain tasks. Additionally, we incorporate an optional self-distillation mechanism to further boost performance on imbalanced datasets. Extensive experiments across multiple medical imaging datasets demonstrate that BMS^3 outperforms state-of-the-art methods, including ResNet, TransUNet, and BayeSeg, in both segmentation accuracy and computational efficiency. Our method shows particular promise in maintaining high performance across diverse medical imaging systems, addressing a critical need in clinical applications where data heterogeneity is common. BMS^3 represents a significant step toward more robust and adaptable medical image segmentation systems. Our code will be made publicly available upon publication of this work.
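Since the paper and code are not yet released, the sketch below is only a minimal, hypothetical PyTorch rendering of the pipeline the abstract describes: a variational ("Bayesian") encoder splits the input into a domain-invariant shape code and a domain-specific appearance code, and a SwinUNet-style backbone (stubbed here with plain convolutions so the example runs without external dependencies) segments from the shape code alone. All class names, layer sizes, and the KL term are assumptions, not the authors' implementation.

```python
# Hypothetical sketch of the BMS^3 forward pass described in the abstract.
# Names and shapes are illustrative; the official code is not yet available.
import torch
import torch.nn as nn


class VariationalEncoder(nn.Module):
    """Toy Bayesian-style encoder: predicts per-pixel mean and log-variance
    and samples features with the reparameterization trick."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.mu = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.logvar = nn.Conv2d(in_ch, out_ch, 3, padding=1)

    def forward(self, x):
        mu, logvar = self.mu(x), self.logvar(x)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar


class BMS3Sketch(nn.Module):
    """Decompose the input into a domain-invariant 'shape' code and a
    domain-specific 'appearance' code, then segment from the shape code."""
    def __init__(self, in_ch=1, num_classes=2, swin_unet=None):
        super().__init__()
        self.shape_enc = VariationalEncoder(in_ch, 16)       # domain-invariant
        self.appearance_enc = VariationalEncoder(in_ch, 16)  # domain-specific
        # Placeholder for a Swin Transformer U-Net backbone; a plain conv head
        # keeps this sketch self-contained and runnable.
        self.backbone = swin_unet or nn.Sequential(
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),
        )

    def forward(self, x):
        shape_z, shape_mu, shape_logvar = self.shape_enc(x)
        app_z, app_mu, app_logvar = self.appearance_enc(x)
        logits = self.backbone(shape_z)  # segment from the shape component only
        # KL regularizer on the shape posterior (standard VAE-style term).
        kl = -0.5 * torch.mean(1 + shape_logvar - shape_mu.pow(2) - shape_logvar.exp())
        return logits, kl, (app_z, app_mu, app_logvar)


if __name__ == "__main__":
    model = BMS3Sketch(in_ch=1, num_classes=2)
    image = torch.randn(2, 1, 224, 224)   # batch of grayscale slices
    logits, kl, _ = model(image)
    print(logits.shape)                   # torch.Size([2, 2, 224, 224])
```

The optional self-distillation mechanism mentioned in the abstract would sit on top of this: auxiliary heads at intermediate decoder stages trained to match the final prediction, which is a common formulation but, again, only an assumption here.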


