{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,11]],"date-time":"2026-02-11T18:33:16Z","timestamp":1770834796277,"version":"3.50.1"},"reference-count":0,"publisher":"Association for the Advancement of Artificial Intelligence (AAAI)","issue":"6","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AAAI"],"abstract":"<jats:p>Given a text query, partially relevant video retrieval (PRVR) seeks to find untrimmed videos in a database that contain pertinent moments. For PRVR, clip modeling is essential to capture the partial relevance between texts and videos. Current PRVR methods adopt scanning-based clip construction to achieve explicit clip modeling, which is information-redundant and requires a large storage overhead. To solve this efficiency problem, this paper proposes GMMFormer, a Gaussian-Mixture-Model-based Transformer that models clip representations implicitly. During frame interactions, we incorporate Gaussian-Mixture-Model constraints to focus each frame on its adjacent frames instead of the whole video. The generated representations then contain multi-scale clip information, achieving implicit clip modeling. In addition, existing PRVR methods ignore the semantic differences between text queries relevant to the same video, leading to a sparse embedding space. We propose a query diverse loss to distinguish these text queries, making the embedding space denser and more semantically informative. Extensive experiments on three large-scale video datasets (i.e., TVR, ActivityNet Captions, and Charades-STA) demonstrate the superiority and efficiency of GMMFormer.<\/jats:p>","DOI":"10.1609\/aaai.v38i6.28389","type":"journal-article","created":{"date-parts":[[2024,3,25]],"date-time":"2024-03-25T09:55:41Z","timestamp":1711360541000},"page":"5767-5775","source":"Crossref","is-referenced-by-count":9,"title":["GMMFormer: Gaussian-Mixture-Model Based Transformer for Efficient Partially Relevant Video Retrieval"],"prefix":"10.1609","volume":"38","author":[{"given":"Yuting","family":"Wang","sequence":"first","affiliation":[]},{"given":"Jinpeng","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Bin","family":"Chen","sequence":"additional","affiliation":[]},{"given":"Ziyun","family":"Zeng","sequence":"additional","affiliation":[]},{"given":"Shu-Tao","family":"Xia","sequence":"additional","affiliation":[]}],"member":"9382","published-online":{"date-parts":[[2024,3,24]]},"container-title":["Proceedings of the AAAI Conference on Artificial Intelligence"],"original-title":[],"link":[{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/28389\/28760","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/28389\/28761","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/download\/28389\/28760","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,3,25]],"date-time":"2024-03-25T09:55:42Z","timestamp":1711360542000},"score":1,"resource":{"primary":{"URL":"https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/28389"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,24]]},"references-count":0,"journal-issue":{"issue":"6","published-online":{"date-parts":[[2024,3,25]]}},"URL":"https:\/\/doi.org\/10.1609\/aaai.v38i6.28389","relation":{},"ISSN":["2374-3468","2159-5399"],"issn-type":[{"value":"2374-3468","type":"electronic"},{"value":"2159-5399","type":"print"}],"subject":[],"published":{"date-parts":[[2024,3,24]]}}}