{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,29]],"date-time":"2025-12-29T18:54:21Z","timestamp":1767034461981,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":46,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,10,12]],"date-time":"2020-10-12T00:00:00Z","timestamp":1602460800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"National Key R&D Program of China under Grant","award":["2018AAA0102000"],"award-info":[{"award-number":["2018AAA0102000"]}]},{"name":"National Natural Science Foundation of China Major Project","award":["U1611461"],"award-info":[{"award-number":["U1611461"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,10,12]]},"DOI":"10.1145\/3394171.3413908","type":"proceedings-article","created":{"date-parts":[[2020,10,12]],"date-time":"2020-10-12T12:26:25Z","timestamp":1602505585000},"page":"1085-1093","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":15,"title":["Controllable Video Captioning with an Exemplar Sentence"],"prefix":"10.1145","author":[{"given":"Yitian","family":"Yuan","sequence":"first","affiliation":[{"name":"Tsinghua University, Shenzhen, China"}]},{"given":"Lin","family":"Ma","sequence":"additional","affiliation":[{"name":"Meituan-Dianping Group, Beijing, China"}]},{"given":"Jingwen","family":"Wang","sequence":"additional","affiliation":[{"name":"Tencent AI Lab, Shenzhen, China"}]},{"given":"Wenwu","family":"Zhu","sequence":"additional","affiliation":[{"name":"Tsinghua University, Beijing, China"}]}],"member":"320","published-online":{"date-parts":[[2020,10,12]]},"reference":[
{"key":"e_1_3_2_2_1_1","doi-asserted-by":"crossref","unstructured":"Lorenzo Baraldi, Costantino Grana, and Rita Cucchiara. 2017. Hierarchical boundary-aware neural encoder for video captioning. In CVPR.","DOI":"10.1109\/CVPR.2017.339"},
{"key":"e_1_3_2_2_2_1","doi-asserted-by":"crossref","unstructured":"Yangyu Chen, Shuhui Wang, Weigang Zhang, and Qingming Huang. 2018. Less is more: Picking informative frames for video captioning. In ECCV.","DOI":"10.1007\/978-3-030-01261-8_22"},
{"key":"e_1_3_2_2_3_1","volume-title":"\u00c7a\u011flar G\u00fcl\u00e7ehre, and Aaron Courville","author":"Cooijmans Tim","year":"2016","unstructured":"Tim Cooijmans, Nicolas Ballas, C\u00e9sar Laurent, \u00c7a\u011flar G\u00fcl\u00e7ehre, and Aaron Courville. 2016. Recurrent batch normalization. arXiv preprint arXiv:1603.09025 (2016)."},
{"key":"e_1_3_2_2_4_1","volume-title":"Stefan Lee, and Dhruv Batra.","author":"Das Abhishek","year":"2017","unstructured":"Abhishek Das, Satwik Kottur, Jos\u00e9 MF Moura, Stefan Lee, and Dhruv Batra. 2017. Learning cooperative visual dialog agents with deep reinforcement learning. In ICCV."},
{"key":"e_1_3_2_2_5_1","unstructured":"Harm De Vries, Florian Strub, J\u00e9r\u00e9mie Mary, Hugo Larochelle, Olivier Pietquin, and Aaron C Courville. 2017. Modulating early visual processing by language. In NeurIPS."},
{"key":"e_1_3_2_2_6_1","doi-asserted-by":"crossref","unstructured":"Aditya Deshpande, Jyoti Aneja, Liwei Wang, Alexander G Schwing, and David Forsyth. 2019. Fast, Diverse and Accurate Image Captioning Guided By Part-of-Speech. In CVPR.","DOI":"10.1109\/CVPR.2019.01095"},
{"key":"e_1_3_2_2_7_1","volume-title":"A learned representation for artistic style. arXiv preprint arXiv:1610.07629","author":"Dumoulin Vincent","year":"2016","unstructured":"Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. 2016. A learned representation for artistic style. arXiv preprint arXiv:1610.07629 (2016)."},
{"key":"e_1_3_2_2_8_1","doi-asserted-by":"crossref","unstructured":"Yang Feng, Lin Ma, Wei Liu, and Jiebo Luo. 2019. Unsupervised Image Captioning. In CVPR.","DOI":"10.1109\/CVPR.2019.00425"},
{"key":"e_1_3_2_2_9_1","volume-title":"Stylenet: Generating attractive visual captions with styles. In CVPR.","author":"Gan Chuang","year":"2017","unstructured":"Chuang Gan, Zhe Gan, Xiaodong He, Jianfeng Gao, and Li Deng. 2017. Stylenet: Generating attractive visual captions with styles. In CVPR."},
{"key":"e_1_3_2_2_10_1","doi-asserted-by":"crossref","unstructured":"Sergio Guadarrama, Niveda Krishnamoorthy, Girish Malkarnenkar, Subhashini Venugopalan, Raymond Mooney, Trevor Darrell, and Kate Saenko. 2013. Youtube2text: Recognizing and describing arbitrary activities using semantic hierarchies and zero-shot recognition. In ICCV.","DOI":"10.1109\/ICCV.2013.337"},
{"key":"e_1_3_2_2_11_1","volume-title":"Long short-term memory. Neural computation","author":"Hochreiter Sepp","year":"1997","unstructured":"Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation, Vol. 9, 8 (1997), 1735--1780."},
{"key":"e_1_3_2_2_12_1","volume-title":"Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980","author":"Kingma Diederik P","year":"2014","unstructured":"Diederik P Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980 (2014)."},
{"key":"e_1_3_2_2_13_1","doi-asserted-by":"crossref","unstructured":"Nikita Kitaev and Dan Klein. 2018. Constituency parsing with a self-attentive encoder. arXiv preprint arXiv:1805.01052.","DOI":"10.18653\/v1\/P18-1249"},
{"key":"e_1_3_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1023\/A:1020346032608"},
{"key":"e_1_3_2_2_15_1","volume-title":"Devi Parikh, Dhruv Batra, and Marcus Rohrbach.","author":"Kottur Satwik","year":"2018","unstructured":"Satwik Kottur, Jos\u00e9 MF Moura, Devi Parikh, Dhruv Batra, and Marcus Rohrbach. 2018. Visual coreference resolution in visual dialog using neural module networks. In ECCV."},
{"key":"e_1_3_2_2_16_1","doi-asserted-by":"crossref","unstructured":"Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-captioning events in videos. In ICCV.","DOI":"10.1109\/ICCV.2017.83"},
{"key":"e_1_3_2_2_17_1","unstructured":"Yehao Li, Ting Yao, Rui Hu, Tao Mei, and Yong Rui. 2016. Video ChatBot: Triggering Live Social Interactions by Automatic Video Commenting. In ACM MM."},
{"key":"e_1_3_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/P14-5010"},
{"key":"e_1_3_2_2_19_1","volume-title":"Semstyle: Learning to generate stylised image captions using unaligned text. In CVPR.","author":"Mathews Alexander","year":"2018","unstructured":"Alexander Mathews, Lexing Xie, and Xuming He. 2018. Semstyle: Learning to generate stylised image captions using unaligned text. In CVPR."},
{"key":"e_1_3_2_2_20_1","unstructured":"Yingwei Pan, Tao Mei, Ting Yao, Houqiang Li, and Yong Rui. 2016. Jointly modeling embedding and translation to bridge video and language. In CVPR."},
{"key":"e_1_3_2_2_21_1","unstructured":"Yingwei Pan, Ting Yao, Houqiang Li, and Tao Mei. 2017. Video captioning with transferred semantic attributes. In CVPR."},
{"key":"e_1_3_2_2_22_1","volume-title":"Multi-task video captioning with video and entailment generation. arXiv preprint arXiv:1704.07489","author":"Pasunuru Ramakanth","year":"2017","unstructured":"Ramakanth Pasunuru and Mohit Bansal. 2017a. Multi-task video captioning with video and entailment generation. arXiv preprint arXiv:1704.07489 (2017)."},
{"key":"e_1_3_2_2_23_1","volume-title":"Reinforced video captioning with entailment rewards. arXiv preprint arXiv:1708.02300","author":"Pasunuru Ramakanth","year":"2017","unstructured":"Ramakanth Pasunuru and Mohit Bansal. 2017b. Reinforced video captioning with entailment rewards. arXiv preprint arXiv:1708.02300 (2017)."},
{"key":"e_1_3_2_2_24_1","volume-title":"Glove: Global vectors for word representation. In EMNLP.","author":"Pennington Jeffrey","year":"2014","unstructured":"Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In EMNLP."},
{"key":"e_1_3_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-11752-2_15"},
{"key":"e_1_3_2_2_26_1","doi-asserted-by":"crossref","unstructured":"Marcus Rohrbach, Wei Qiu, Ivan Titov, Stefan Thater, Manfred Pinkal, and Bernt Schiele. 2013. Translating video content to natural language descriptions. In ICCV.","DOI":"10.1109\/ICCV.2013.61"},
{"key":"e_1_3_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263-015-0816-y"},
{"key":"e_1_3_2_2_28_1","volume-title":"Alessandro Sordoni, Aaron Courville, and Yoshua Bengio.","author":"Shen Yikang","year":"2018","unstructured":"Yikang Shen, Zhouhan Lin, Athul Paul Jacob, Alessandro Sordoni, Aaron Courville, and Yoshua Bengio. 2018. Straight to the tree: Constituency parsing with neural syntactic distance. arXiv preprint arXiv:1806.04168 (2018)."},
{"key":"e_1_3_2_2_29_1","doi-asserted-by":"crossref","unstructured":"Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, and Alexander A Alemi. 2017. Inception-v4, inception-resnet and the impact of residual connections on learning. In AAAI.","DOI":"10.1609\/aaai.v31i1.11231"},
{"key":"e_1_3_2_2_30_1","doi-asserted-by":"crossref","unstructured":"Subhashini Venugopalan, Marcus Rohrbach, Jeffrey Donahue, Raymond Mooney, Trevor Darrell, and Kate Saenko. 2015. Sequence to sequence-video to text. In ICCV.","DOI":"10.1109\/ICCV.2015.515"},
{"key":"e_1_3_2_2_31_1","volume-title":"Translating videos to natural language using deep recurrent neural networks. arXiv preprint arXiv:1412.4729","author":"Venugopalan Subhashini","year":"2014","unstructured":"Subhashini Venugopalan, Huijuan Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, and Kate Saenko. 2014. Translating videos to natural language using deep recurrent neural networks. arXiv preprint arXiv:1412.4729 (2014)."},
{"key":"e_1_3_2_2_32_1","doi-asserted-by":"crossref","unstructured":"Bairui Wang, Lin Ma, Wei Zhang, Wenhao Jiang, Jingwen Wang, and Wei Liu. 2019. Controllable Video Captioning with POS Sequence Guidance Based on Gated Fusion Network. In ICCV.","DOI":"10.1109\/ICCV.2019.00273"},
{"key":"e_1_3_2_2_33_1","doi-asserted-by":"crossref","unstructured":"Bairui Wang, Lin Ma, Wei Zhang, and Wei Liu. 2018c. Reconstruction network for video captioning. In CVPR.","DOI":"10.1109\/CVPR.2018.00795"},
{"key":"e_1_3_2_2_34_1","doi-asserted-by":"crossref","unstructured":"Jingwen Wang, Wenhao Jiang, Lin Ma, Wei Liu, and Yong Xu. 2018b. Bidirectional Attentive Fusion with Context Gating for Dense Video Captioning. In CVPR.","DOI":"10.1109\/CVPR.2018.00751"},
{"key":"e_1_3_2_2_35_1","unstructured":"Qingzhong Wang and Antoni B Chan. 2019. Describing like humans: on diversity in image captioning. In CVPR."},
{"key":"e_1_3_2_2_36_1","doi-asserted-by":"crossref","unstructured":"Xin Wang, Wenhu Chen, Jiawei Wu, Yuan-Fang Wang, and William Yang Wang. 2018a. Video captioning via hierarchical reinforcement learning. In CVPR.","DOI":"10.1109\/CVPR.2018.00443"},
{"key":"e_1_3_2_2_37_1","volume-title":"Msr-vtt: A large video description dataset for bridging video and language. In CVPR.","author":"Xu Jun","year":"2016","unstructured":"Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. Msr-vtt: A large video description dataset for bridging video and language. In CVPR."},
{"key":"e_1_3_2_2_38_1","doi-asserted-by":"crossref","unstructured":"Jun Xu, Ting Yao, Yongdong Zhang, and Tao Mei. 2017. Learning multimodal attention LSTM networks for video captioning. In ACM MM.","DOI":"10.1145\/3123266.3123448"},
{"key":"e_1_3_2_2_39_1","doi-asserted-by":"crossref","unstructured":"Ran Xu, Caiming Xiong, Wei Chen, and Jason J Corso. 2015. Jointly modeling deep video and compositional text to bridge vision and language in a unified framework. In AAAI.","DOI":"10.1609\/aaai.v29i1.9512"},
{"key":"e_1_3_2_2_40_1","doi-asserted-by":"crossref","unstructured":"Li Yao, Atousa Torabi, Kyunghyun Cho, Nicolas Ballas, Christopher Pal, Hugo Larochelle, and Aaron Courville. 2015. Describing videos by exploiting temporal structure. In ICCV.","DOI":"10.1109\/ICCV.2015.512"},
{"key":"e_1_3_2_2_41_1","volume-title":"Image captioning at will: A versatile scheme for effectively injecting sentiments into image descriptions. arXiv preprint arXiv:1801.10121","author":"You Quanzeng","year":"2018","unstructured":"Quanzeng You, Hailin Jin, and Jiebo Luo. 2018. Image captioning at will: A versatile scheme for effectively injecting sentiments into image descriptions. arXiv preprint arXiv:1801.10121 (2018)."},
{"key":"e_1_3_2_2_42_1","unstructured":"Haonan Yu, Jiang Wang, Zhiheng Huang, Yi Yang, and Wei Xu. 2016. Video paragraph captioning using hierarchical recurrent neural networks. In CVPR."},
{"key":"e_1_3_2_2_43_1","doi-asserted-by":"crossref","unstructured":"Yitian Yuan, Lin Ma, Jingwen Wang, Wei Liu, and Wenwu Zhu. 2019. Semantic Conditioned Dynamic Modulation for Temporal Sentence Grounding in Videos. In NeurIPS.","DOI":"10.1109\/TPAMI.2020.3038993"},
{"key":"e_1_3_2_2_44_1","series-title":"SIAM journal on computing","volume-title":"Simple fast algorithms for the editing distance between trees and related problems","author":"Zhang Kaizhong","year":"1989","unstructured":"Kaizhong Zhang and Dennis Shasha. 1989. Simple fast algorithms for the editing distance between trees and related problems. SIAM journal on computing, Vol. 18, 6 (1989), 1245--1262."},
{"key":"#cr-split#-e_1_3_2_2_45_1.1","doi-asserted-by":"crossref","unstructured":"Wei Zhang, Bairui Wang, Lin Ma, and Wei Liu. 2019. Reconstruct and Represent Video Contents for Captioning via Reinforcement Learning. In TPAMI. https:\/\/doi.org\/10.1109\/TPAMI.2019.2920899","DOI":"10.1109\/TPAMI.2019.2920899"},
{"key":"#cr-split#-e_1_3_2_2_45_1.2","doi-asserted-by":"crossref","unstructured":"Wei Zhang, Bairui Wang, Lin Ma, and Wei Liu. 2019. Reconstruct and Represent Video Contents for Captioning via Reinforcement Learning. In TPAMI. https:\/\/doi.org\/10.1109\/TPAMI.2019.2920899","DOI":"10.1109\/TPAMI.2019.2920899"}
],"event":{"name":"MM '20: The 28th ACM International Conference on Multimedia","sponsor":["SIGMM ACM Special Interest Group on Multimedia"],"location":"Seattle WA USA","acronym":"MM '20"},"container-title":["Proceedings of the 28th ACM International Conference on Multimedia"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3394171.3413908","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3394171.3413908","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T21:32:06Z","timestamp":1750195926000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3394171.3413908"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,10,12]]},"references-count":46,"alternative-id":["10.1145\/3394171.3413908","10.1145\/3394171"],"URL":"https:\/\/doi.org\/10.1145\/3394171.3413908","relation":{},"subject":[],"published":{"date-parts":[[2020,10,12]]},"assertion":[{"value":"2020-10-12","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}