{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T23:35:41Z","timestamp":1761176141069,"version":"build-2065373602"},"reference-count":0,"publisher":"IOS Press","isbn-type":[{"value":"9781643686318","type":"electronic"}],"license":[{"start":{"date-parts":[[2025,10,21]],"date-time":"2025-10-21T00:00:00Z","timestamp":1761004800000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,10,21]]},"abstract":"<jats:p>Vision Transformers (ViTs) have shown promise in computer vision thanks to their powerful self-attention mechanism. However, deploying ViTs, especially for medical image analysis, presents significant challenges. First, the quadratic computational complexity of self-attention hinders efficient model deployment. Second, medical images often have a lower signal-to-noise ratio (SNR) than natural images, making robust analysis difficult for standard ViTs. To address these limitations, we propose SparseMambaVision, a novel sparse-attention-based Mamba-Transformer framework for prostate segmentation. In addition, we introduce a Mamba Sparse Attention (MSA) module designed to enhance both computational efficiency and model effectiveness, improving performance on low-SNR images. Extensive experiments on three benchmark prostate datasets demonstrate that SparseMambaVision achieves state-of-the-art results across four evaluation metrics, surpassing existing methods. These results underscore the potential of SparseMambaVision for more accurate and efficient prostate segmentation in clinical settings. The code is available at https:\/\/github.com\/WangyuFeng-FJNU\/SparseMambaVision.<\/jats:p>","DOI":"10.3233\/faia250885","type":"book-chapter","created":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:45:00Z","timestamp":1761126300000},"source":"Crossref","is-referenced-by-count":0,"title":["SparseMambaVision: Sparse-Attention Based Mamba-Transformer Framework for Prostate Segmentation"],"prefix":"10.3233","author":[{"given":"Wangyu","family":"Feng","sequence":"first","affiliation":[{"name":"College of Computer Science and Cyber Security, Fujian Normal University, China"}]},{"given":"Likun","family":"Xia","sequence":"additional","affiliation":[{"name":"College of Information Engineering, Capital Normal University, China"}]},{"given":"Yading","family":"Yuan","sequence":"additional","affiliation":[{"name":"Department of Radiation Oncology, Columbia University Irving Medical Center, USA"}]},{"given":"Ming","family":"Ma","sequence":"additional","affiliation":[{"name":"Department of Graduate Computer Science and Engineering, Katz School of Science and Health, Yeshiva University, USA"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","ECAI 2025"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA250885","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,22]],"date-time":"2025-10-22T09:45:00Z","timestamp":1761126300000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA250885"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,21]]},"ISBN":["9781643686318"],"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia250885","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,21]]}}}