{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:11:06Z","timestamp":1750219866572,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":44,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,10,10]],"date-time":"2022-10-10T00:00:00Z","timestamp":1665360000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"the National Major Scientific Instruments and Equipments Development Project of National Natural Science Foundation of China","award":["No. 62027813"],"award-info":[{"award-number":["No. 62027813"]}]},{"name":"the National Natural Science Foundation of China","award":["No. 62106235"],"award-info":[{"award-number":["No. 62106235"]}]},{"name":"the Key Program of the National Natural Science Foundation of China","award":["No. 62036005"],"award-info":[{"award-number":["No. 
62036005"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,10,10]]},"DOI":"10.1145\/3552458.3556447","type":"proceedings-article","created":{"date-parts":[[2022,10,4]],"date-time":"2022-10-04T22:08:06Z","timestamp":1664921286000},"page":"15-23","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Dual Domain-Adversarial Learning for Audio-Visual Saliency Prediction"],"prefix":"10.1145","author":[{"given":"Yingzi","family":"Fan","sequence":"first","affiliation":[{"name":"Xidian University, Xi'an, China"}]},{"given":"Longfei","family":"Han","sequence":"additional","affiliation":[{"name":"Beijing Technology and Business University, Beijing, China"}]},{"given":"Yue","family":"Zhang","sequence":"additional","affiliation":[{"name":"Xi'an Jiaotong University, Xi'an, China"}]},{"given":"Lechao","family":"Cheng","sequence":"additional","affiliation":[{"name":"Zhejiang Lab, Hangzhou, China"}]},{"given":"Chen","family":"Xia","sequence":"additional","affiliation":[{"name":"Northwestern Polytechnical University, Xi'an, China"}]},{"given":"Di","family":"Hu","sequence":"additional","affiliation":[{"name":"Renmin University of China, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2022,10,10]]},"reference":[{"key":"e_1_3_2_1_1_1","first-page":"1784","volume-title":"2007 15th European Signal Processing Conference","author":"Marat Sophie","year":"2007","unstructured":"Sophie Marat, Mick\u00e4el Guironnet, and Denis Pellerin. Video summarization using a visual attention model. In 2007 15th European Signal Processing Conference, pages 1784--1788. 
IEEE, 2007."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_2_1","DOI":"10.1109\/TIP.2013.2282897"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_3_1","DOI":"10.1109\/DCABES.2010.160"},{"key":"e_1_3_2_1_4_1","volume-title":"Dave: A deep audiovisual embedding for dynamic saliency prediction. arXiv preprint arXiv:1905.10693","author":"Tavakoli Hamed R","year":"2019","unstructured":"Hamed R Tavakoli, Ali Borji, Esa Rahtu, and Juho Kannala. Dave: A deep audiovisual embedding for dynamic saliency prediction. arXiv preprint arXiv:1905.10693, 2019."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_5_1","DOI":"10.1109\/CVPR42600.2020.00482"},{"key":"e_1_3_2_1_6_1","volume-title":"Bio-inspired audio-visual cues integration for visual attention prediction. arXiv preprint arXiv:2109.08371","author":"Yuan Yuan","year":"2021","unstructured":"Yuan Yuan, Hailong Ning, and Bin Zhao. Bio-inspired audio-visual cues integration for visual attention prediction. arXiv preprint arXiv:2109.08371, 2021."},{"key":"e_1_3_2_1_7_1","volume-title":"Avinet: Diving deep into audio-visual saliency prediction. arXiv e-prints","author":"Jain Samyak","year":"2012","unstructured":"Samyak Jain, Pradeep Yarlagadda, Ramanathan Subramanian, and Vineet Gandhi. Avinet: Diving deep into audio-visual saliency prediction. arXiv e-prints, pages arXiv--2012, 2020."},{"key":"e_1_3_2_1_8_1","volume-title":"Soundnet: Learning sound representations from unlabeled video. 
Advances in neural information processing systems, 29","author":"Aytar Yusuf","year":"2016","unstructured":"Yusuf Aytar, Carl Vondrick, and Antonio Torralba. Soundnet: Learning sound representations from unlabeled video. Advances in neural information processing systems, 29, 2016."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_9_1","DOI":"10.1109\/ICCV.2015.38"},{"key":"e_1_3_2_1_10_1","volume-title":"Exploiting surroundedness for saliency detection: a boolean map approach","author":"Zhang Jianming","year":"2015","unstructured":"Jianming Zhang and Stan Sclaroff. Exploiting surroundedness for saliency detection: a boolean map approach. IEEE transactions on pattern analysis and machine intelligence, 38(5):889--902, 2015."},{"key":"e_1_3_2_1_11_1","volume-title":"Kevin McGuinness, Noel E O'Connor, Jordi Torres, Elisa Sayrol, and Xavier Giro-i Nieto. Salgan: Visual saliency prediction with generative adversarial networks. arXiv preprint arXiv:1701.01081","author":"Pan Junting","year":"2017","unstructured":"Junting Pan, Cristian Canton Ferrer, Kevin McGuinness, Noel E O'Connor, Jordi Torres, Elisa Sayrol, and Xavier Giro-i Nieto. Salgan: Visual saliency prediction with generative adversarial networks. 
arXiv preprint arXiv:1701.01081, 2017."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_12_1","DOI":"10.1109\/TIP.2017.2787612"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_13_1","DOI":"10.1109\/TMM.2017.2777665"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_14_1","DOI":"10.1007\/978-3-030-01264-9_37"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_15_1","DOI":"10.1109\/CVPR.2018.00514"},{"key":"e_1_3_2_1_16_1","first-page":"2394","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision","author":"Min Kyle","year":"2019","unstructured":"Kyle Min and Jason J Corso. Tased-net: Temporally-aggregating spatial encoder-decoder network for video saliency detection. In Proceedings of the IEEE\/CVF International Conference on Computer Vision, pages 2394--2403, 2019."},{"key":"e_1_3_2_1_17_1","volume-title":"Noel E O'Connor, Xavier Giro-i Nieto, and Kevin McGuinness. Simple vs complex temporal recurrences for video saliency prediction. arXiv preprint arXiv:1907.01869","author":"Linardos Panagiotis","year":"2019","unstructured":"Panagiotis Linardos, Eva Mohedano, Juan Jose Nieto, Noel E O'Connor, Xavier Giro-i Nieto, and Kevin McGuinness. Simple vs complex temporal recurrences for video saliency prediction. 
arXiv preprint arXiv:1907.01869, 2019."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_18_1","DOI":"10.1007\/s11432-021-3384-y"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_19_1","DOI":"10.1109\/TPAMI.2022.3179526"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_20_1","DOI":"10.1109\/TGRS.2021.3123984"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_21_1","DOI":"10.1109\/TCSVT.2018.2870832"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_22_1","DOI":"10.1109\/CVPR42600.2020.01377"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_23_1","DOI":"10.1109\/CVPR42600.2020.00861"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_24_1","DOI":"10.1007\/s41095-020-0199-z"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_25_1","DOI":"10.1109\/ROBOT.2008.4543329"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_26_1","DOI":"10.1109\/IROS.2011.6095124"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_27_1","DOI":"10.1109\/TCSVT.2014.2329380"},{"key":"e_1_3_2_1_28_1","volume-title":"Audeosynth: music-driven video montage. ACM Transactions on Graphics (TOG), 34(4):1--10","author":"Liao Zicheng","year":"2015","unstructured":"Zicheng Liao, Yizhou Yu, Bingchen Gong, and Lechao Cheng. Audeosynth: music-driven video montage. ACM Transactions on Graphics (TOG), 34(4):1--10, 2015."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_29_1","DOI":"10.1109\/ICIP.2014.7025219"},{"key":"e_1_3_2_1_30_1","volume-title":"Fixation prediction through multimodal analysis. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 13(1):1--23","author":"Min Xiongkuo","year":"2016","unstructured":"
Xiongkuo Min, Guangtao Zhai, Ke Gu, and Xiaokang Yang. Fixation prediction through multimodal analysis. ACM Transactions on Multimedia Computing, Communications, and Applications (TOMM), 13(1):1--23, 2016."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_31_1","DOI":"10.1109\/ICME51207.2021.9428415"},{"key":"e_1_3_2_1_32_1","first-page":"1989","volume-title":"International conference on machine learning","author":"Hoffman Judy","year":"2018","unstructured":"Judy Hoffman, Eric Tzeng, Taesung Park, Jun-Yan Zhu, Phillip Isola, Kate Saenko, Alexei Efros, and Trevor Darrell. Cycada: Cycle-consistent adversarial domain adaptation. In International conference on machine learning, pages 1989--1998. PMLR, 2018."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_33_1","DOI":"10.1109\/CVPR.2018.00473"},{"key":"e_1_3_2_1_34_1","volume-title":"Rethink maximum mean discrepancy for domain adaptation. arXiv preprint arXiv:2007.00689","author":"Wang Wei","year":"2020","unstructured":"Wei Wang, Haojie Li, Zhengming Ding, and Zhihui Wang. Rethink maximum mean discrepancy for domain adaptation. arXiv preprint arXiv:2007.00689, 2020."},{"key":"e_1_3_2_1_35_1","first-page":"1180","volume-title":"International conference on machine learning","author":"Ganin Yaroslav","year":"2015","unstructured":"Yaroslav Ganin and Victor Lempitsky. Unsupervised domain adaptation by backpropagation. 
In International conference on machine learning, pages 1180--1189. PMLR, 2015."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_36_1","DOI":"10.1109\/ICCV48922.2021.01020"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_37_1","DOI":"10.1007\/978-3-030-01267-0_19"},{"key":"e_1_3_2_1_38_1","first-page":"6558","volume-title":"Proceedings of the conference. Association for Computational Linguistics. Meeting","volume":"2019","author":"Hubert Tsai Yao-Hung","unstructured":"Yao-Hung Hubert Tsai, Shaojie Bai, Paul Pu Liang, J Zico Kolter, Louis-Philippe Morency, and Ruslan Salakhutdinov. Multimodal transformer for unaligned multimodal language sequences. In Proceedings of the conference. Association for Computational Linguistics. Meeting, volume 2019, page 6558. NIH Public Access, 2019."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_39_1","DOI":"10.1007\/s12559-010-9074-z"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_40_1","DOI":"10.1167\/14.8.5"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_41_1","DOI":"10.1007\/978-1-4939-3435-5_16"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_42_1","DOI":"10.1007\/978-3-319-10584-0_33"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_43_1","DOI":"10.1016\/j.image.2015.08.004"},{"key":"e_1_3_2_1_44_1","volume-title":"What do different evaluation metrics tell us about saliency models? IEEE transactions on pattern analysis and machine intelligence, 41(3):740--757","author":"Bylinskii Zoya","year":"2018","unstructured":"Zoya Bylinskii, Tilke Judd, Aude Oliva, Antonio Torralba, and Fr\u00e9do Durand. What do different evaluation metrics tell us about saliency models? IEEE transactions on pattern analysis and machine intelligence, 41(3):740--757, 2018."}],"event":{"sponsor":["SIGMM ACM Special Interest Group on Multimedia"],"acronym":"MM '22","name":"MM '22: The 30th ACM International Conference on Multimedia","location":"Lisboa Portugal"},"container-title":["Proceedings of the 3rd International Workshop on Human-Centric Multimedia Analysis"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3552458.3556447","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3552458.3556447","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:47:42Z","timestamp":1750178862000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3552458.3556447"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,10]]},"references-count":44,"alternative-id":["10.1145\/3552458.3556447","10.1145\/3552458"],"URL":"https:\/\/doi.org\/10.1145\/3552458.3556447","relation":{},"subject":[],"published":{"date-parts":[[2022,10,10]]},"assertion":[{"value":"2022-10-10","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}