{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T02:49:55Z","timestamp":1773802195304,"version":"3.50.1"},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"16","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>Foundation segmentation models, such as SAM and its video-oriented variant SAM2, have achieved remarkable success in natural image and video segmentation. However, their direct application to echocardiography video is challenged by structural uncertainty arising from severe speckle noise and blurry anatomical boundaries. To address this, we propose E\u00b3SAM2, a lightweight adaptation framework that introduces a novel entropy-based methodology to explicitly model and mitigate such uncertainty. Specifically, an entropy-guided attention mechanism is introduced to steer the model\u2019s focus toward structurally reliable features, particularly in speckle-dominated regions. Additionally, an entropy regularization loss is introduced to further enhance target-background discrimination. To better resolve indistinct anatomical contours, an edge-aware supervision module is incorporated to inject explicit boundary priors for sharper delineation. These components are efficiently integrated through a global-local feature adapter. Experiments on CAMUS and EchoNet-Dynamic datasets demonstrate that E\u00b3SAM2 achieves state-of-the-art segmentation and clinical estimation performance, while maintaining high computational efficiency.<\/jats:p>","DOI":"10.1609\/aaai.v40i16.38346","type":"journal-article","created":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T00:22:51Z","timestamp":1773793371000},"page":"13423-13431","source":"Crossref","is-referenced-by-count":0,"title":["E\u00b3SAM2: Entropy-Aware and Edge-Guided Adaptation of SAM2 for Echocardiography Video Segmentation"],"prefix":"10.1609","volume":"40","author":[{"given":"Long","family":"Zheng","sequence":"first","affiliation":[]},{"given":"Zhi","family":"Li","sequence":"additional","affiliation":[]},{"given":"Weidong","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Zhenyu","family":"Dai","sequence":"additional","affiliation":[]},{"given":"Shuyun","family":"Li","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2026,3,14]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/38346\/42308","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/38346\/42308","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T00:22:51Z","timestamp":1773793371000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/38346"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,14]]},"references-count":0,"journal-issue":{"issue":"16","published-online":{"date-parts":[[2026,3,17]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v40i16.38346","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2026,3,14]]}}}