{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,15]],"date-time":"2026-03-15T04:48:19Z","timestamp":1773550099185,"version":"3.50.1"},"reference-count":37,"publisher":"Association for Computing Machinery (ACM)","issue":"3","funder":[{"name":"Beijing Natural Science Foundation","award":["JQ24019"],"award-info":[{"award-number":["JQ24019"]}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62576047, 52572349"],"award-info":[{"award-number":["62576047, 52572349"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Open Fund of the Key Laboratory for Civil Aviation Collaborative Air Traffic Management Technology and Applications","award":["2025-001"],"award-info":[{"award-number":["2025-001"]}]},{"name":"SMP-Z Large Model Fund","award":["CIPS-SMP20250313"],"award-info":[{"award-number":["CIPS-SMP20250313"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Multimedia Comput. Commun. Appl."],"published-print":{"date-parts":[[2026,3,31]]},"abstract":"<jats:p>\n                    Multimedia recommendation systems leverage user\u2013item interactions and multimodal information to capture user preferences, enabling more accurate and personalized recommendations. Despite notable advancements, existing approaches still face two critical limitations: first, shallow modality fusion often relies on simple concatenation, failing to exploit rich synergistic intra- and inter-modal relationships; second, asymmetric feature treatment\u2014where users are characterized only by interaction IDs while items benefit from rich multimodal content\u2014hinders the learning of a shared semantic space. 
To address these issues, we propose a\n                    <jats:bold>\n                      <jats:italic toggle=\"yes\">C<\/jats:italic>\n                      ross-Modal\n                      <jats:italic toggle=\"yes\">R<\/jats:italic>\n                      ecursive\n                      <jats:italic toggle=\"yes\">A<\/jats:italic>\n                      ttention\n                      <jats:italic toggle=\"yes\">N<\/jats:italic>\n                      etwork with Dual Graph\n                      <jats:italic toggle=\"yes\">E<\/jats:italic>\n                      mbedding (CRANE)\n                    <\/jats:bold>\n                    . To tackle shallow fusion, we design a core\n                    <jats:bold>Recursive Cross-Modal Attention (RCA)<\/jats:bold>\n                    mechanism that iteratively refines modality features based on cross-correlations in a joint latent space, effectively capturing high-order intra- and inter-modal dependencies. For symmetric multimodal learning, we explicitly construct users\u2019 multimodal profiles by aggregating the features of their interacted items. Furthermore, CRANE integrates a symmetric dual graph framework\u2014comprising a heterogeneous user\u2013item interaction graph and a homogeneous item\u2013item semantic graph\u2014unified by a self-supervised contrastive learning objective that fuses behavioral and semantic signals. Despite these rich modeling capabilities, CRANE remains computationally efficient: theoretical and empirical analyses confirm its scalability, with faster convergence on small datasets and superior performance ceilings on large-scale ones. Comprehensive experiments on four public real-world datasets demonstrate an average 5% improvement in key metrics over state-of-the-art baselines. 
Our code is publicly available at\n                    <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/MKC-Lab\/CRANE\">https:\/\/github.com\/MKC-Lab\/CRANE<\/jats:ext-link>\n                    .\n                  <\/jats:p>","DOI":"10.1145\/3788289","type":"journal-article","created":{"date-parts":[[2026,1,19]],"date-time":"2026-01-19T14:07:04Z","timestamp":1768831624000},"page":"1-23","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Cross-Modal Attention Network with Dual Graph Learning in Multimodal Recommendation"],"prefix":"10.1145","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0009-0006-6849-2138","authenticated-orcid":false,"given":"Ji","family":"Dai","sequence":"first","affiliation":[{"name":"Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4190-1529","authenticated-orcid":false,"given":"Quan","family":"Fang","sequence":"additional","affiliation":[{"name":"Beijing University of Posts and Telecommunications, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1277-6802","authenticated-orcid":false,"given":"Jun","family":"Hu","sequence":"additional","affiliation":[{"name":"National University of Singapore, Singapore, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7464-9115","authenticated-orcid":false,"given":"Desheng","family":"Cai","sequence":"additional","affiliation":[{"name":"Tianjin University of Technology, Tianjin, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9172-1695","authenticated-orcid":false,"given":"Yang","family":"Yang","sequence":"additional","affiliation":[{"name":"Beihang University, Beijing, China and State Key Laboratory of CNS\/ATM, Beijing, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-6830-6282","authenticated-orcid":false,"given":"Can","family":"Zhao","sequence":"additional","affiliation":[{"name":"Aviation Data Communication Corporation, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2026,2,27]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2018.2798607"},{"key":"e_1_3_2_3_2","doi-asserted-by":"publisher","DOI":"10.1145\/3564284"},{"key":"e_1_3_2_4_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i01.5330"},{"key":"e_1_3_2_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/3331184.3331254"},{"key":"e_1_3_2_6_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v37i4.25551"},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1145\/2872427.2883037"},{"key":"e_1_3_2_8_2","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v30i1.9973"},{"key":"e_1_3_2_9_2","doi-asserted-by":"publisher","DOI":"10.1145\/3397271.3401063"},{"key":"e_1_3_2_10_2","unstructured":"Jun Hu Yufei He Yuan Li Bryan Hooi and Bingsheng He. 2025. NTSFormer: A self-teaching graph transformer for multimodal cold-start node classification. arXiv:2507.04870. 
Retrieved from https:\/\/arxiv.org\/abs\/2507.04870"},{"key":"e_1_3_2_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.2019.8803025"},{"key":"e_1_3_2_12_2","doi-asserted-by":"publisher","DOI":"10.1145\/3626772.3657703"},{"key":"e_1_3_2_13_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10115-023-02022-1"},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1145\/3343031.3350953"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3638763"},{"key":"e_1_3_2_16_2","doi-asserted-by":"publisher","DOI":"10.1145\/3637528.3671473"},{"key":"e_1_3_2_17_2","doi-asserted-by":"publisher","DOI":"10.1145\/3735561"},{"key":"e_1_3_2_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2024.3490801"},{"key":"e_1_3_2_19_2","unstructured":"Steffen Rendle Christoph Freudenthaler Zeno Gantner and Lars Schmidt-Thieme. 2012. BPR: Bayesian personalized ranking from implicit feedback. arXiv:1205.2618. Retrieved from https:\/\/arxiv.org\/abs\/1205.2618"},{"key":"e_1_3_2_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2022.3187572"},{"key":"e_1_3_2_21_2","doi-asserted-by":"publisher","DOI":"10.2478\/jaiscr-2024-0012"},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-025-94256-y"},{"key":"e_1_3_2_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2025.3551402"},{"key":"e_1_3_2_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2021.3138464"},{"key":"e_1_3_2_25_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10489-024-06061-1"},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1145\/3331184.3331267"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICME51207.2021.9428201"},{"key":"e_1_3_2_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2019.2923608"},{"key":"e_1_3_2_29_2","doi-asserted-by":"publisher","DOI":"10.1145\/3394171.3413556"},{"key":"e_1_3_2_30_2","doi-asserted-by":"publisher","DOI":"10.1145\/3343031.3351034"},{"key":"e_1_3_2_31_2","doi-asserted-by":"publisher
","DOI":"10.1145\/3539618.3591716"},{"key":"e_1_3_2_32_2","unstructured":"Jinfeng Xu Zheyu Chen Wei Wang Xiping Hu Sang-Wook Kim and Edith C. H. Ngai. 2025. COHESION: Composite graph convolutional network with dual-stage fusion for multimodal recommendation. arXiv:2504.04452. Retrieved from https:\/\/arxiv.org\/abs\/2504.04452"},{"key":"e_1_3_2_33_2","doi-asserted-by":"publisher","DOI":"10.1145\/3539618.3591932"},{"key":"e_1_3_2_34_2","doi-asserted-by":"publisher","DOI":"10.1145\/3474085.3475259"},{"key":"e_1_3_2_35_2","unstructured":"Hongyu Zhou Xin Zhou Zhiwei Zeng Lingzi Zhang and Zhiqi Shen. 2023. A comprehensive survey on multimodal recommender systems: taxonomy evaluation and future directions. arXiv:2302.04473. Retrieved from https:\/\/arxiv.org\/abs\/2302.04473"},{"key":"e_1_3_2_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2024.3369875"},{"key":"e_1_3_2_37_2","doi-asserted-by":"publisher","DOI":"10.1145\/3581783.3611943"},{"key":"e_1_3_2_38_2","doi-asserted-by":"publisher","DOI":"10.1145\/3543507.3583251"}],"container-title":["ACM Transactions on Multimedia Computing, Communications, and 
Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3788289","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,15]],"date-time":"2026-03-15T03:49:17Z","timestamp":1773546557000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3788289"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,2,27]]},"references-count":37,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2026,3,31]]}},"alternative-id":["10.1145\/3788289"],"URL":"https:\/\/doi.org\/10.1145\/3788289","relation":{},"ISSN":["1551-6857","1551-6865"],"issn-type":[{"value":"1551-6857","type":"print"},{"value":"1551-6865","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,2,27]]},"assertion":[{"value":"2025-07-24","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2026-01-03","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2026-02-27","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}