
Published in the proceedings of ICONIP 2025.
The six-volume set constitutes the refereed proceedings of the 32nd International Conference on Neural Information Processing, ICONIP 2025, held in Okinawa, Japan, in November 2025.
The 197 full papers presented in this book were carefully reviewed and selected from 1092 submissions.
The conference focuses on three main areas: Theory and Algorithms, Computational Neurosciences, and Applications and Frontiers.
Medical image segmentation is crucial for accurate disease detection and diagnosis, but current deep learning methods often struggle to generalize across diverse imaging systems. This paper introduces Bayesian Modeling Based SwinUNet Segmentation on Self-distillation Architecture (BMS^3), a novel approach that addresses the challenge of domain invariance in medical image segmentation. By integrating Bayesian modeling for feature extraction with the efficient Swin Transformer-based U-Net architecture, BMS^3 decomposes images into domain-invariant shape components and domain-specific appearance attributes. This decomposition enhances generalization to unseen data and improves performance on cross-domain tasks. Additionally, we incorporate an optional self-distillation mechanism to further boost performance on imbalanced datasets. Extensive experiments across multiple medical imaging datasets demonstrate that BMS^3 outperforms state-of-the-art methods, including ResNet, TransUNet, and BayeSeg, in both segmentation accuracy and computational efficiency. Our method shows particular promise in maintaining high performance across diverse medical imaging systems, addressing a critical need in clinical applications where data heterogeneity is common. BMS^3 represents a significant advancement toward more robust and adaptable medical image segmentation systems. Our code will be made publicly available upon publication of this work.
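The abstract gives no implementation details, but the core idea it describes (a domain-invariant shape branch that drives segmentation, a Bayesian appearance branch regularized by a KL prior, and a self-distillation term tying a shallow auxiliary head to the main head) can be sketched in PyTorch. The sketch below is purely illustrative: the module names (`ShapeAppearanceSegmenter`, `bms3_style_loss`), the plain convolutional encoders standing in for the Swin Transformer blocks, and all loss weights are assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvEncoder(nn.Module):
    """Small convolutional encoder used as a stand-in for the paper's
    Swin Transformer / Bayesian feature extractors (placeholder only)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, out_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class ShapeAppearanceSegmenter(nn.Module):
    """Hypothetical sketch of the BMS^3 idea: split the input into a
    domain-invariant 'shape' code used for segmentation and a
    domain-specific 'appearance' code modelled as a Gaussian posterior,
    so a KL term can regularize the appearance factor."""
    def __init__(self, in_ch=1, num_classes=2, z_ch=16):
        super().__init__()
        self.shape_enc = ConvEncoder(in_ch, z_ch)        # domain-invariant branch
        self.app_enc = ConvEncoder(in_ch, 2 * z_ch)      # predicts mean and log-variance
        self.seg_head = nn.Conv2d(z_ch, num_classes, 1)  # main segmentation head
        self.aux_head = nn.Conv2d(z_ch, num_classes, 1)  # shallow head for self-distillation

    def forward(self, x):
        shape = self.shape_enc(x)
        mu, logvar = self.app_enc(x).chunk(2, dim=1)
        appearance = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.seg_head(shape), self.aux_head(shape), mu, logvar, appearance

def bms3_style_loss(main_logits, aux_logits, target, mu, logvar,
                    kl_w=1e-3, distill_w=0.5):
    """Cross-entropy on both heads, a KL prior on the appearance code, and a
    self-distillation term pulling the shallow head toward the main head.
    The weights kl_w and distill_w are arbitrary illustrative values."""
    ce = F.cross_entropy(main_logits, target) + F.cross_entropy(aux_logits, target)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    distill = F.kl_div(F.log_softmax(aux_logits, dim=1),
                       F.softmax(main_logits.detach(), dim=1),
                       reduction="batchmean")
    return ce + kl_w * kl + distill_w * distill

if __name__ == "__main__":
    model = ShapeAppearanceSegmenter()
    x = torch.randn(2, 1, 64, 64)             # toy grayscale batch
    y = torch.randint(0, 2, (2, 64, 64))      # toy segmentation labels
    main_logits, aux_logits, mu, logvar, _ = model(x)
    loss = bms3_style_loss(main_logits, aux_logits, y, mu, logvar)
    loss.backward()
    print(float(loss))
```

In this reading, only the shape code feeds the segmentation heads, so appearance variation learned by the Bayesian branch cannot leak into the prediction path; the self-distillation term is optional and can be dropped by setting its weight to zero, matching the abstract's description of it as an optional mechanism.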

