{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,24]],"date-time":"2025-09-24T00:14:58Z","timestamp":1758672898868,"version":"3.44.0"},"publisher-location":"California","reference-count":0,"publisher":"International Joint Conferences on Artificial Intelligence Organization","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:p>Dense object segmentation is essential for various applications, particularly in pathology image and remote sensing image analysis. However, distinguishing numerous similar and densely packed objects in this task presents significant challenges. Several methods, including CNN- and ViT-based approaches, have been proposed to tackle these issues. Yet, models trained on limited datasets exhibit limited generalization ability. The Segment Anything Model (SAM) has recently achieved significant progress in zero-shot segmentation but relies heavily on precise positional guidance. However, providing numerous accurate location prompts in dense scenarios is time-consuming. To overcome this limitation, we conducted an in-depth exploration of the SAM mechanism and found that its strong generalization ability stems from the encoder\u2019s edge detection capability, which is semantically independent, making location prompts essential for segmentation. This insight inspired the development of DenseSAM, which replaces location prompts with semantic guidance for automatic segmentation in dense scenarios. Specifically, it uses local details to weaken the edges of background objects, leverages global context to enhance intra-class feature similarity, while further increasing contrast with the background, and integrates a dual-head decoding process to enable lightweight automatic semantic segmentation. Extensive experiments on pathology images demonstrate that DenseSAM delivers remarkable performance with minimal training parameters, providing a cost-effective and efficient solution. Moreover, experiments on remote sensing images further validate its excellent scalability, making DenseSAM suitable for various dense object segmentation domains. The code is available at https:\/\/github.com\/imAzhou\/DenseSAM.<\/jats:p>","DOI":"10.24963\/ijcai.2025\/889","type":"proceedings-article","created":{"date-parts":[[2025,9,19]],"date-time":"2025-09-19T08:10:40Z","timestamp":1758269440000},"page":"7994-8002","source":"Crossref","is-referenced-by-count":0,"title":["DenseSAM: Semantic Enhance SAM for Efficient Dense Object Segmentation"],"prefix":"10.24963","author":[{"given":"Linyun","family":"Zhou","sequence":"first","affiliation":[{"name":"State Key Laboratory of Blockchain and Data Security, Zhejiang University"}]},{"given":"Jiacong","family":"Hu","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Blockchain and Data Security, Zhejiang University"}]},{"given":"Shengxuming","family":"Zhang","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Blockchain and Data Security, Zhejiang University"},{"name":"School of Software Technology, Zhejiang University"}]},{"given":"Xiangtong","family":"Du","sequence":"additional","affiliation":[{"name":"Xuzhou Medical University"}]},{"given":"Mingli","family":"Song","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Blockchain and Data Security, Zhejiang University"}]},{"given":"Xiuming","family":"Zhang","sequence":"additional","affiliation":[{"name":"The First Affiliated Hospital, College of Medicine, Zhejiang University"}]},{"given":"Zunlei","family":"Feng","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Blockchain and Data Security, Zhejiang University"},{"name":"School of Software Technology, Zhejiang University"},{"name":"Hangzhou High-Tech Zone (Binjiang) Institute of Blockchain and Data Security"}]}],"member":"10584","event":{"number":"34","sponsor":["International Joint Conferences on Artificial Intelligence Organization (IJCAI)"],"acronym":"IJCAI-2025","name":"Thirty-Fourth International Joint Conference on Artificial Intelligence {IJCAI-25}","start":{"date-parts":[[2025,8,16]]},"theme":"Artificial Intelligence","location":"Montreal, Canada","end":{"date-parts":[[2025,8,22]]}},"container-title":["Proceedings of the Thirty-Fourth International Joint Conference on Artificial Intelligence"],"original-title":[],"deposited":{"date-parts":[[2025,9,23]],"date-time":"2025-09-23T11:35:24Z","timestamp":1758627324000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.ijcai.org\/proceedings\/2025\/889"}},"subtitle":[],"proceedings-subject":"Artificial Intelligence Research Articles","short-title":[],"issued":{"date-parts":[[2025,9]]},"references-count":0,"URL":"https:\/\/doi.org\/10.24963\/ijcai.2025\/889","relation":{},"subject":[],"published":{"date-parts":[[2025,9]]}}}