{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,11]],"date-time":"2026-04-11T20:37:15Z","timestamp":1775939835266,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":65,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,9,14]],"date-time":"2022-09-14T00:00:00Z","timestamp":1663113600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Fondazione di Modena"},{"name":"Italian Ministry of University and Research","award":["B87G22000460001"],"award-info":[{"award-number":["B87G22000460001"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2022,9,14]]},"DOI":"10.1145\/3549555.3549585","type":"proceedings-article","created":{"date-parts":[[2022,10,7]],"date-time":"2022-10-07T16:14:01Z","timestamp":1665159241000},"page":"1-7","source":"Crossref","is-referenced-by-count":44,"title":["Retrieval-Augmented Transformer for Image Captioning"],"prefix":"10.1145","author":[{"given":"Sara","family":"Sarto","sequence":"first","affiliation":[{"name":"Department of Engineering, University of Modena and Reggio Emilia, Italy"}]},{"given":"Marcella","family":"Cornia","sequence":"additional","affiliation":[{"name":"Department of Education and Humanities, University of Modena and Reggio Emilia, Italy"}]},{"given":"Lorenzo","family":"Baraldi","sequence":"additional","affiliation":[{"name":"Department of Engineering, University of Modena and Reggio Emilia, Italy"}]},{"given":"Rita","family":"Cucchiara","sequence":"additional","affiliation":[{"name":"Department of Engineering, University of Modena and Reggio Emilia, Italy"}]}],"member":"320","published-online":{"date-parts":[[2022,10,7]]},"reference":[
{"key":"e_1_3_2_1_1_1","unstructured":"Jean-Baptiste Alayrac, Jeff Donahue, Pauline Luc, Antoine Miech, Iain Barr, Yana Hasson, Karel Lenc, Arthur Mensch, Katie Millican, Malcolm Reynolds, 2022. Flamingo: a Visual Language Model for Few-Shot Learning. arXiv preprint arXiv:2204.14198(2022)."},
{"key":"e_1_3_2_1_2_1","volume-title":"SPICE: Semantic Propositional Image Caption Evaluation. In ECCV.","author":"Anderson Peter","year":"2016","unstructured":"Peter Anderson, Basura Fernando, Mark Johnson, and Stephen Gould. 2016. SPICE: Semantic Propositional Image Caption Evaluation. In ECCV."},
{"key":"e_1_3_2_1_3_1","doi-asserted-by":"crossref","unstructured":"Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR.","DOI":"10.1109\/CVPR.2018.00636"},
{"key":"e_1_3_2_1_4_1","doi-asserted-by":"crossref","unstructured":"Jyoti Aneja, Aditya Deshpande, and Alexander\u00a0G Schwing. 2018. Convolutional image captioning. In CVPR.","DOI":"10.1109\/CVPR.2018.00583"},
{"key":"e_1_3_2_1_5_1","volume-title":"ACL Workshops.","author":"Banerjee Satanjeev","year":"2005","unstructured":"Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An automatic metric for MT evaluation with improved correlation with human judgments. In ACL Workshops."},
{"key":"e_1_3_2_1_6_1","volume-title":"The Unreasonable Effectiveness of CLIP Features for Image Captioning: An Experimental Analysis. In CVPR Workshops.","author":"Barraco Manuele","year":"2022","unstructured":"Manuele Barraco, Marcella Cornia, Silvia Cascianelli, Lorenzo Baraldi, and Rita Cucchiara. 2022. The Unreasonable Effectiveness of CLIP Features for Image Captioning: An Experimental Analysis. In CVPR Workshops."},
{"key":"e_1_3_2_1_7_1","doi-asserted-by":"crossref","unstructured":"Manuele Barraco, Matteo Stefanini, Marcella Cornia, Silvia Cascianelli, Lorenzo Baraldi, and Rita Cucchiara. 2022. CaMEL: Mean Teacher Learning for Image Captioning. In ICPR.","DOI":"10.1109\/ICPR56361.2022.9955644"},
{"key":"e_1_3_2_1_8_1","doi-asserted-by":"crossref","unstructured":"Roberto Bigazzi, Federico Landi, Marcella Cornia, Silvia Cascianelli, Lorenzo Baraldi, and Rita Cucchiara. 2020. Explore and Explain: Self-supervised Navigation and Recounting. In ICPR.","DOI":"10.1109\/ICPR48806.2021.9412628"},
{"key":"e_1_3_2_1_9_1","volume-title":"Jean-Baptiste Lespiau","author":"Borgeaud Sebastian","year":"2021","unstructured":"Sebastian Borgeaud, Arthur Mensch, Jordan Hoffmann, Trevor Cai, Eliza Rutherford, Katie Millican, George van\u00a0den Driessche, Jean-Baptiste Lespiau, Bogdan Damoc, Aidan Clark, 2021. Improving language models by retrieving from trillions of tokens. arXiv preprint arXiv:2112.04426(2021)."},
{"key":"e_1_3_2_1_10_1","unstructured":"Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared\u00a0D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, 2020. Language models are few-shot learners. In NeurIPS."},
{"key":"e_1_3_2_1_11_1","doi-asserted-by":"crossref","unstructured":"Marco Cagrandi, Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. 2021. Learning to Select: A Fully Attentive Approach for Novel Object Captioning. In ICMR.","DOI":"10.1145\/3460426.3463587"},
{"key":"e_1_3_2_1_12_1","volume-title":"USENIX Security Symposium.","author":"Carlini Nicholas","year":"2021","unstructured":"Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, 2021. Extracting training data from large language models. In USENIX Security Symposium."},
{"key":"e_1_3_2_1_13_1","doi-asserted-by":"crossref","unstructured":"Marcella Cornia, Lorenzo Baraldi, and Rita Cucchiara. 2020. SMArT: Training Shallow Memory-aware Transformers for Robotic Explainability. In ICRA.","DOI":"10.1109\/ICRA40945.2020.9196653"},
{"key":"e_1_3_2_1_14_1","volume-title":"Explaining transformer-based image captioning models: An empirical analysis. AI Communications","author":"Cornia Marcella","year":"2021","unstructured":"Marcella Cornia, Lorenzo Baraldi, and Rita Cucchiara. 2021. Explaining transformer-based image captioning models: An empirical analysis. AI Communications (2021), 1\u201319."},
{"key":"e_1_3_2_1_15_1","volume-title":"Universal Captioner: Inducing Content-Style Separation in Vision-and-Language Model Training. arXiv preprint arXiv:2111.12727(2022).","author":"Cornia Marcella","year":"2022","unstructured":"Marcella Cornia, Lorenzo Baraldi, Giuseppe Fiameni, and Rita Cucchiara. 2022. Universal Captioner: Inducing Content-Style Separation in Vision-and-Language Model Training. arXiv preprint arXiv:2111.12727(2022)."},
{"key":"e_1_3_2_1_16_1","doi-asserted-by":"crossref","unstructured":"Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. 2020. Meshed-Memory Transformer for Image Captioning. In CVPR.","DOI":"10.1109\/CVPR42600.2020.01059"},
{"key":"e_1_3_2_1_17_1","volume-title":"BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL.","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL."},
{"key":"e_1_3_2_1_18_1","doi-asserted-by":"crossref","unstructured":"Jeffrey Donahue, Lisa Anne\u00a0Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Long-term recurrent convolutional networks for visual recognition and description. In CVPR.","DOI":"10.1109\/CVPR.2015.7298878"},
{"key":"e_1_3_2_1_19_1","unstructured":"Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. 2021. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. In ICLR."},
{"key":"e_1_3_2_1_20_1","doi-asserted-by":"crossref","unstructured":"Shiv\u00a0Ram Dubey, Satish\u00a0Kumar Singh, and Wei-Ta Chu. 2021. Vision Transformer Hashing for Image Retrieval. In ICME.","DOI":"10.1109\/ICME52920.2022.9859900"},
{"key":"e_1_3_2_1_21_1","unstructured":"Alaaeldin El-Nouby, Natalia Neverova, Ivan Laptev, and Herv\u00e9 J\u00e9gou. 2021. Training Vision Transformers for Image Retrieval. arXiv preprint arXiv:2102.05644(2021)."},
{"key":"e_1_3_2_1_22_1","unstructured":"Longteng Guo, Jing Liu, Xinxin Zhu, Peng Yao, Shichen Lu, and Hanqing Lu. 2020. Normalized and Geometry-Aware Self-Attention Network for Image Captioning. In CVPR."},
{"key":"e_1_3_2_1_23_1","volume-title":"REALM: Retrieval-Augmented Language Model Pre-Training. In ICML.","author":"Guu Kelvin","year":"2020","unstructured":"Kelvin Guu, Kenton Lee, Zora Tung, Panupong Pasupat, and Ming-Wei Chang. 2020. REALM: Retrieval-Augmented Language Model Pre-Training. In ICML."},
{"key":"e_1_3_2_1_24_1","unstructured":"Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In CVPR."},
{"key":"e_1_3_2_1_25_1","volume-title":"Image Captioning: Transforming Objects into Words. In NeurIPS.","author":"Herdade Simao","year":"2019","unstructured":"Simao Herdade, Armin Kappeler, Kofi Boakye, and Joao Soares. 2019. Image Captioning: Transforming Objects into Words. In NeurIPS."},
{"key":"e_1_3_2_1_26_1","unstructured":"Xiaowei Hu, Zhe Gan, Jianfeng Wang, Zhengyuan Yang, Zicheng Liu, Yumao Lu, and Lijuan Wang. 2022. Scaling Up Vision-Language Pre-Training for Image Captioning. In CVPR."},
{"key":"e_1_3_2_1_27_1","doi-asserted-by":"crossref","unstructured":"Lun Huang, Wenmin Wang, Jie Chen, and Xiao-Yong Wei. 2019. Attention on Attention for Image Captioning. In ICCV.","DOI":"10.1109\/ICCV.2019.00473"},
{"key":"e_1_3_2_1_28_1","doi-asserted-by":"crossref","unstructured":"Gautier Izacard and Edouard Grave. 2021. Leveraging Passage Retrieval with Generative Models for Open Domain Question Answering. In EACL.","DOI":"10.18653\/v1\/2021.eacl-main.74"},
{"key":"e_1_3_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/TBDATA.2019.2921572"},
{"key":"e_1_3_2_1_30_1","unstructured":"Jared Kaplan, Sam McCandlish, Tom Henighan, Tom\u00a0B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361(2020)."},
{"key":"e_1_3_2_1_31_1","doi-asserted-by":"crossref","unstructured":"Andrej Karpathy and Li Fei-Fei. 2015. Deep visual-semantic alignments for generating image descriptions. In CVPR.","DOI":"10.1109\/CVPR.2015.7298932"},
{"key":"e_1_3_2_1_32_1","unstructured":"Urvashi Khandelwal, Omer Levy, Dan Jurafsky, Luke Zettlemoyer, and Mike Lewis. 2020. Generalization through Memorization: Nearest Neighbor Language Models. In ICLR."},
{"key":"e_1_3_2_1_33_1","volume-title":"Adam: A Method for Stochastic Optimization. In ICLR.","author":"Kingma P","year":"2015","unstructured":"Diederik\u00a0P Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In ICLR."},
{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2021.08.030"},
{"key":"e_1_3_2_1_35_1","unstructured":"Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rockt\u00e4schel, Sebastian Riedel, and Douwe Kiela. 2020. Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks. In NeurIPS."},
{"key":"e_1_3_2_1_36_1","volume-title":"Oscar: Object-semantics aligned pre-training for vision-language tasks. In ECCV.","author":"Li Xiujun","year":"2020","unstructured":"Xiujun Li, Xi Yin, Chunyuan Li, Pengchuan Zhang, Xiaowei Hu, Lei Zhang, Lijuan Wang, Houdong Hu, Li Dong, Furu Wei, 2020. Oscar: Object-semantics aligned pre-training for vision-language tasks. In ECCV."},
{"key":"e_1_3_2_1_37_1","volume-title":"ACL Workshops.","author":"Lin Chin-Yew","year":"2004","unstructured":"Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In ACL Workshops."},
{"key":"e_1_3_2_1_38_1","unstructured":"Tsung-Yi Lin, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Doll\u00e1r, and C\u00a0Lawrence Zitnick. 2014. Microsoft COCO: Common Objects in Context. In ECCV."},
{"key":"e_1_3_2_1_39_1","volume-title":"Prophet Attention: Predicting Attention with Future Attention. In NeurIPS.","author":"Liu Fenglin","year":"2020","unstructured":"Fenglin Liu, Xuancheng Ren, Xian Wu, Shen Ge, Wei Fan, Yuexian Zou, and Xu Sun. 2020. Prophet Attention: Predicting Attention with Future Attention. In NeurIPS."},
{"key":"e_1_3_2_1_40_1","volume-title":"CPTR: Full Transformer Network for Image Captioning. arXiv preprint arXiv:2101.10804(2021).","author":"Liu Wei","year":"2021","unstructured":"Wei Liu, Sihan Chen, Longteng Guo, Xinxin Zhu, and Jing Liu. 2021. CPTR: Full Transformer Network for Image Captioning. arXiv preprint arXiv:2101.10804(2021)."},
{"key":"e_1_3_2_1_41_1","unstructured":"Yunpeng Luo, Jiayi Ji, Xiaoshuai Sun, Liujuan Cao, Yongjian Wu, Feiyue Huang, Chia-Wen Lin, and Rongrong Ji. 2021. Dual-Level Collaborative Transformer for Image Captioning. In AAAI."},
{"key":"e_1_3_2_1_42_1","unstructured":"Paulius Micikevicius, Sharan Narang, Jonah Alben, Gregory Diamos, Erich Elsen, David Garcia, Boris Ginsburg, Michael Houston, Oleksii Kuchaiev, Ganesh Venkatesh, and Hao Wu. 2018. Mixed Precision Training. In ICLR."},
{"key":"e_1_3_2_1_43_1","unstructured":"Yingwei Pan, Ting Yao, Yehao Li, and Tao Mei. 2020. X-Linear Attention Networks for Image Captioning. In CVPR."},
{"key":"e_1_3_2_1_44_1","doi-asserted-by":"crossref","unstructured":"Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL.","DOI":"10.3115\/1073083.1073135"},
{"key":"e_1_3_2_1_45_1","unstructured":"Alec Radford, Jong\u00a0Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning Transferable Visual Models From Natural Language Supervision. In ICML."},
{"key":"e_1_3_2_1_46_1","first-page":"9","article-title":"Language Models are Unsupervised Multitask Learners","volume":"1","author":"Radford Alec","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language Models are Unsupervised Multitask Learners. OpenAI Blog 1, 8 (2019), 9.","journal-title":"OpenAI Blog"},
{"key":"e_1_3_2_1_47_1","doi-asserted-by":"crossref","unstructured":"Samyam Rajbhandari, Jeff Rasley, Olatunji Ruwase, and Yuxiong He. 2020. ZeRO: Memory optimizations Toward Training Trillion Parameter Models. In SC.","DOI":"10.1109\/SC41405.2020.00024"},
{"key":"e_1_3_2_1_48_1","doi-asserted-by":"crossref","unstructured":"Steven\u00a0J Rennie, Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In CVPR.","DOI":"10.1109\/CVPR.2017.131"},
{"key":"e_1_3_2_1_49_1","doi-asserted-by":"crossref","unstructured":"Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In ACL.","DOI":"10.18653\/v1\/P16-1162"},
{"key":"e_1_3_2_1_50_1","unstructured":"Sheng Shen, Liunian\u00a0Harold Li, Hao Tan, Mohit Bansal, Anna Rohrbach, Kai-Wei Chang, Zhewei Yao, and Kurt Keutzer. 2021. How Much Can CLIP Benefit Vision-and-Language Tasks? arXiv preprint arXiv:2107.06383(2021)."},
{"key":"e_1_3_2_1_51_1","doi-asserted-by":"crossref","unstructured":"Zhan Shi, Xu Zhou, Xipeng Qiu, and Xiaodan Zhu. 2020. Improving Image Captioning with Better Use of Captions. In ACL.","DOI":"10.18653\/v1\/2020.acl-main.664"},
{"key":"e_1_3_2_1_52_1","unstructured":"Sainbayar Sukhbaatar, Edouard Grave, Guillaume Lample, Herve Jegou, and Armand Joulin. 2019. Augmenting Self-Attention with Persistent Memory. arXiv preprint arXiv:1907.01470(2019)."},
{"key":"e_1_3_2_1_53_1","unstructured":"Giorgos Tolias, Ronan Sicre, and Herv\u00e9 J\u00e9gou. 2016. Particular object retrieval with integral max-pooling of CNN activations. In ICLR."},
{"key":"e_1_3_2_1_54_1","unstructured":"Hugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, and Herv\u00e9 J\u00e9gou. 2021. Training data-efficient image transformers & distillation through attention. In ICML."},
{"key":"e_1_3_2_1_55_1","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan\u00a0N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS."},
{"key":"e_1_3_2_1_56_1","doi-asserted-by":"crossref","unstructured":"Ramakrishna Vedantam, C Lawrence\u00a0Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based Image Description Evaluation. In CVPR.","DOI":"10.1109\/CVPR.2015.7299087"},
{"key":"e_1_3_2_1_57_1","doi-asserted-by":"crossref","unstructured":"Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In CVPR.","DOI":"10.1109\/CVPR.2015.7298935"},
{"key":"e_1_3_2_1_58_1","unstructured":"Yuhuai Wu, Markus\u00a0N Rabe, DeLesley Hutchins, and Christian Szegedy. 2022. Memorizing Transformers. In ICLR."},
{"key":"e_1_3_2_1_59_1","doi-asserted-by":"crossref","unstructured":"Xu Yang, Kaihua Tang, Hanwang Zhang, and Jianfei Cai. 2019. Auto-Encoding Scene Graphs for Image Captioning. In CVPR.","DOI":"10.1109\/CVPR.2019.01094"},
{"key":"e_1_3_2_1_60_1","doi-asserted-by":"crossref","unstructured":"Xu Yang, Hanwang Zhang, and Jianfei Cai. 2019. Learning to Collocate Neural Modules for Image Captioning. In ICCV.","DOI":"10.1109\/ICCV.2019.00435"},
{"key":"e_1_3_2_1_61_1","doi-asserted-by":"crossref","unstructured":"Ting Yao, Yingwei Pan, Yehao Li, and Tao Mei. 2018. Exploring Visual Relationship for Image Captioning. In ECCV.","DOI":"10.1007\/978-3-030-01264-9_42"},
{"key":"e_1_3_2_1_62_1","doi-asserted-by":"crossref","first-page":"362","DOI":"10.1162\/tacl_a_00371","article-title":"Adaptive Semiparametric Language Models","volume":"9","author":"Yogatama Dani","year":"2021","unstructured":"Dani Yogatama, Cyprien de Masson\u00a0d\u2019Autume, and Lingpeng Kong. 2021. Adaptive Semiparametric Language Models. TACL 9 (2021), 362\u2013373.","journal-title":"TACL"},
{"key":"e_1_3_2_1_63_1","unstructured":"Yang You, Jing Li, Sashank Reddi, Jonathan Hseu, Sanjiv Kumar, Srinadh Bhojanapalli, Xiaodan Song, James Demmel, Kurt Keutzer, and Cho-Jui Hsieh. 2020. Large Batch Optimization for Deep Learning: Training BERT in 76 minutes. In ICLR."},
{"key":"e_1_3_2_1_64_1","doi-asserted-by":"crossref","unstructured":"Xuying Zhang, Xiaoshuai Sun, Yunpeng Luo, Jiayi Ji, Yiyi Zhou, Yongjian Wu, Feiyue Huang, and Rongrong Ji. 2021. RSTNet: Captioning with Adaptive Attention on Visual and Non-Visual Words. In CVPR.","DOI":"10.1109\/CVPR46437.2021.01521"},
{"key":"e_1_3_2_1_65_1","doi-asserted-by":"crossref","unstructured":"Luowei Zhou, Hamid Palangi, Lei Zhang, Houdong Hu, Jason\u00a0J Corso, and Jianfeng Gao. 2020. Unified Vision-Language Pre-Training for Image Captioning and VQA. In AAAI.","DOI":"10.1609\/aaai.v34i07.7005"}
],"event":{"name":"CBMI 2022: International Conference on Content-based Multimedia Indexing","location":"Graz Austria","acronym":"CBMI 2022"},"container-title":["International Conference on Content-based Multimedia Indexing"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3549555.3549585","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3549555.3549585","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:00:12Z","timestamp":1750186812000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3549555.3549585"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,14]]},"references-count":65,"alternative-id":["10.1145\/3549555.3549585","10.1145\/3549555"],"URL":"https:\/\/doi.org\/10.1145\/3549555.3549585","relation":{},"subject":[],"published":{"date-parts":[[2022,9,14]]}}}