{"status":"ok","message-type":"work","message-version":"1.0.0","message":{
"indexed":{"date-parts":[[2026,3,7]],"date-time":"2026-03-07T18:37:29Z","timestamp":1772908649727,"version":"3.50.1"},
"reference-count":53,"publisher":"MIT Press - Journals",
"license":[{"start":{"date-parts":[[2021,9,13]],"date-time":"2021-09-13T00:00:00Z","timestamp":1631491200000},"content-version":"vor","delay-in-days":255,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],
"content-domain":{"domain":["direct.mit.edu"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,9,8]]},
"abstract":"<jats:title>Abstract<\/jats:title>\n               <jats:p>Large-scale pretraining and task-specific fine-tuning is now the standard methodology for many tasks in computer vision and natural language processing. Recently, a multitude of methods have been proposed for pretraining vision and language BERTs to tackle challenges at the intersection of these two key areas of AI. These models can be categorized into either single-stream or dual-stream encoders. We study the differences between these two categories, and show how they can be unified under a single theoretical framework. We then conduct controlled experiments to discern the empirical differences between five vision and language BERTs. Our experiments show that training data and hyperparameters are responsible for most of the differences between the reported results, but they also reveal that the embedding layer plays a crucial role in these massive models.<\/jats:p>",
"DOI":"10.1162\/tacl_a_00408","type":"journal-article","created":{"date-parts":[[2021,9,13]],"date-time":"2021-09-13T13:28:10Z","timestamp":1631539690000},"page":"978-994","update-policy":"https:\/\/doi.org\/10.1162\/mitpressjournals.corrections.policy","source":"Crossref","is-referenced-by-count":59,
"title":["Multimodal Pretraining Unmasked: A Meta-Analysis and a Unified Framework of Vision-and-Language BERTs"],"prefix":"10.1162","volume":"9",
"author":[{"given":"Emanuele","family":"Bugliarello","sequence":"first","affiliation":[{"name":"University of Copenhagen. emanuele@di.ku.dk"}]},{"given":"Ryan","family":"Cotterell","sequence":"additional","affiliation":[{"name":"University of Cambridge"},{"name":"ETH Z\u00fcrich. rcotterell@inf.ethz.ch"}]},{"given":"Naoaki","family":"Okazaki","sequence":"additional","affiliation":[{"name":"Tokyo Institute of Technology. okazaki@c.titech.ac.jp"}]},{"given":"Desmond","family":"Elliott","sequence":"additional","affiliation":[{"name":"University of Copenhagen. de@di.ku.dk"}]}],
"member":"281","published-online":{"date-parts":[[2021,9,8]]},
"reference":[
{"key":"2021092116472570100_bib1","doi-asserted-by":"publisher","first-page":"6077","DOI":"10.1109\/CVPR.2018.00636","article-title":"Bottom-up and top-down attention for image captioning and visual question answering","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Anderson","year":"2018"},
{"key":"2021092116472570100_bib2","doi-asserted-by":"publisher","first-page":"2425","DOI":"10.1109\/ICCV.2015.279","article-title":"VQA: Visual Question Answering","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV)","author":"Antol","year":"2015"},
{"key":"2021092116472570100_bib3","article-title":"Layer normalization","author":"Ba","year":"2016","journal-title":"arXiv preprint arXiv:1607.06450"},
{"issue":"1","key":"2021092116472570100_bib4","doi-asserted-by":"publisher","first-page":"6","DOI":"10.1038\/s41562-017-0189-z","article-title":"Redefine statistical significance","volume":"2","author":"Benjamin","year":"2018","journal-title":"Nature Human Behaviour"},
{"key":"2021092116472570100_bib5","doi-asserted-by":"publisher","first-page":"104","DOI":"10.1007\/978-3-030-58577-8_7","article-title":"UNITER: Universal image-text representation learning","volume-title":"European Conference on Computer Vision","author":"Chen","year":"2020"},
{"key":"2021092116472570100_bib6","doi-asserted-by":"publisher","first-page":"8785","DOI":"10.18653\/v1\/2020.emnlp-main.707","article-title":"X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers","volume-title":"Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)","author":"Cho","year":"2020"},
{"key":"2021092116472570100_bib7","first-page":"4466","article-title":"GuessWhat?! Visual object discovery through multi-modal dialogue","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)","author":"de Vries","year":"2017"},
{"key":"2021092116472570100_bib8","first-page":"4171","article-title":"BERT: Pre-training of deep bidirectional transformers for language understanding","volume-title":"Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)","author":"Devlin","year":"2019"},
{"key":"2021092116472570100_bib9","article-title":"Fine-tuning pretrained language models: Weight initializations, data orders, and early stopping","author":"Dodge","year":"2020","journal-title":"arXiv preprint arXiv:2002.06305"},
{"key":"2021092116472570100_bib10","doi-asserted-by":"publisher","first-page":"1307","DOI":"10.18653\/v1\/2020.findings-emnlp.117","article-title":"Evaluating models\u2019 local decision boundaries via contrast sets","volume-title":"Findings of the Association for Computational Linguistics: EMNLP 2020","author":"Gardner","year":"2020"},
{"key":"2021092116472570100_bib11","doi-asserted-by":"publisher","first-page":"1161","DOI":"10.18653\/v1\/D19-1107","article-title":"Are we modeling the task or the annotator? An investigation of annotator bias in natural language understanding datasets","volume-title":"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)","author":"Geva","year":"2019"},
{"key":"2021092116472570100_bib12","doi-asserted-by":"publisher","first-page":"6325","DOI":"10.1109\/CVPR.2017.670","article-title":"Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Goyal","year":"2017"},
{"issue":"10","key":"2021092116472570100_bib13","doi-asserted-by":"publisher","first-page":"2222","DOI":"10.1109\/TNNLS.2016.2582924","article-title":"LSTM: A search space odyssey","volume":"28","author":"Greff","year":"2017","journal-title":"IEEE Transactions on Neural Networks and Learning Systems"},
{"key":"2021092116472570100_bib14","doi-asserted-by":"publisher","first-page":"107","DOI":"10.18653\/v1\/N18-2017","article-title":"Annotation artifacts in natural language inference data","volume-title":"Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers)","author":"Gururangan","year":"2018"},
{"key":"2021092116472570100_bib15","first-page":"770","article-title":"Deep residual learning for image recognition","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"He","year":"2016"},
{"key":"2021092116472570100_bib16","article-title":"Grounded language learning fast and slow","volume-title":"International Conference on Learning Representations","author":"Hill","year":"2021"},
{"key":"2021092116472570100_bib17","article-title":"Pixel-BERT: Aligning image pixels with text by deep multi-modal transformers","author":"Huang","year":"2020","journal-title":"arXiv preprint arXiv:2004.00849"},
{"key":"2021092116472570100_bib18","doi-asserted-by":"publisher","first-page":"6700","DOI":"10.1109\/CVPR.2019.00686","article-title":"GQA: A new dataset for real-world visual reasoning and compositional question answering","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Hudson","year":"2019"},
{"key":"2021092116472570100_bib19","doi-asserted-by":"publisher","first-page":"787","DOI":"10.3115\/v1\/D14-1086","article-title":"ReferItGame: Referring to objects in photographs of natural scenes","volume-title":"Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)","author":"Kazemzadeh","year":"2014"},
{"key":"2021092116472570100_bib20","article-title":"ViLT: Vision-and-language transformer without convolution or region supervision","author":"Kim","year":"2021","journal-title":"arXiv preprint arXiv:2102.03334"},
{"issue":"1","key":"2021092116472570100_bib21","doi-asserted-by":"publisher","first-page":"32","DOI":"10.1007\/s11263-016-0981-7","article-title":"Visual genome: Connecting language and vision using crowdsourced dense image annotations","volume":"123","author":"Krishna","year":"2017","journal-title":"International Journal of Computer Vision"},
{"issue":"07","key":"2021092116472570100_bib22","doi-asserted-by":"publisher","first-page":"11336","DOI":"10.1609\/aaai.v34i07.6795","article-title":"Unicoder-VL: A universal encoder for vision and language by cross-modal pre-training","volume":"34","author":"Li","year":"2020","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},
{"key":"2021092116472570100_bib23","article-title":"VisualBERT: A simple and performant baseline for vision and language","author":"Li","year":"2019","journal-title":"arXiv preprint arXiv:1908.03557"},
{"key":"2021092116472570100_bib24","doi-asserted-by":"publisher","first-page":"121","DOI":"10.1007\/978-3-030-58577-8_8","article-title":"Oscar: Object-semantics aligned pre-training for vision-language tasks","volume-title":"European Conference on Computer Vision","author":"Li","year":"2020"},
{"key":"2021092116472570100_bib25","article-title":"InterBERT: Vision-and-language interaction for multi-modal pretraining","author":"Lin","year":"2020","journal-title":"arXiv preprint arXiv:2003.13198"},
{"key":"2021092116472570100_bib26","doi-asserted-by":"publisher","first-page":"740","DOI":"10.1007\/978-3-319-10602-1_48","article-title":"Microsoft COCO: Common objects in context","volume-title":"European Conference on Computer Vision","author":"Lin","year":"2014"},
{"key":"2021092116472570100_bib27","first-page":"13","article-title":"ViLBERT: Pretraining task-agnostic visiolinguistic representations for vision-and-language tasks","volume-title":"Advances in Neural Information Processing Systems","author":"Lu","year":"2019"},
{"key":"2021092116472570100_bib28","first-page":"10434","article-title":"12-in-1: Multi-task vision and language representation learning","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Lu","year":"2020"},
{"key":"2021092116472570100_bib29","doi-asserted-by":"publisher","first-page":"11","DOI":"10.1109\/CVPR.2016.9","article-title":"Generation and comprehension of unambiguous object descriptions","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Mao","year":"2016"},
{"key":"2021092116472570100_bib30","article-title":"Do transformer modifications transfer across implementations and applications?","author":"Narang","year":"2021","journal-title":"arXiv preprint arXiv:2102.11972"},
{"key":"2021092116472570100_bib31","first-page":"8024","article-title":"PyTorch: An imperative style, high-performance deep learning library","volume-title":"Advances in Neural Information Processing Systems","author":"Paszke","year":"2019"},
{"key":"2021092116472570100_bib32","doi-asserted-by":"publisher","first-page":"2641","DOI":"10.1109\/ICCV.2015.303","article-title":"Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV)","author":"Plummer","year":"2015"},
{"key":"2021092116472570100_bib33","article-title":"ImageBERT: Cross-modal pre-training with large-scale weak-supervised image-text data","author":"Qi","year":"2020","journal-title":"arXiv preprint arXiv:2001.07966"},
{"key":"2021092116472570100_bib34","article-title":"Learning transferable visual models from natural language supervision","author":"Radford","year":"2021","journal-title":"arXiv preprint arXiv:2103.00020"},
{"key":"2021092116472570100_bib35","first-page":"91","article-title":"Faster R-CNN: Towards real-time object detection with region proposal networks","volume-title":"Advances in Neural Information Processing Systems","author":"Ren","year":"2015"},
{"key":"2021092116472570100_bib36","doi-asserted-by":"publisher","first-page":"6174","DOI":"10.18653\/v1\/P19-1621","article-title":"Are red roses red? Evaluating consistency of question-answering models","volume-title":"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics","author":"Ribeiro","year":"2019"},
{"key":"2021092116472570100_bib37","doi-asserted-by":"publisher","first-page":"1256","DOI":"10.18653\/v1\/2020.findings-emnlp.112","article-title":"What can we do to improve peer review in NLP?","volume-title":"Findings of the Association for Computational Linguistics: EMNLP 2020","author":"Rogers","year":"2020"},
{"key":"2021092116472570100_bib38","doi-asserted-by":"publisher","first-page":"1715","DOI":"10.18653\/v1\/P16-1162","article-title":"Neural machine translation of rare words with subword units","volume-title":"Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)","author":"Sennrich","year":"2016"},
{"key":"2021092116472570100_bib39","doi-asserted-by":"publisher","first-page":"512","DOI":"10.1109\/CVPRW.2014.131","article-title":"CNN features off-the-shelf: An astounding baseline for recognition","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops","author":"Razavian","year":"2014"},
{"key":"2021092116472570100_bib40","doi-asserted-by":"crossref","first-page":"2556","DOI":"10.18653\/v1\/P18-1238","article-title":"Conceptual captions: A cleaned, hypernymed, image alt-text dataset for automatic image captioning","volume-title":"Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)","author":"Sharma","year":"2018"},
{"key":"2021092116472570100_bib41","doi-asserted-by":"publisher","first-page":"3645","DOI":"10.18653\/v1\/P19-1355","article-title":"Energy and policy considerations for deep learning in NLP","volume-title":"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics","author":"Strubell","year":"2019"},
{"key":"2021092116472570100_bib42","article-title":"VL-BERT: Pre-training of generic visual-linguistic representations","volume-title":"International Conference on Learning Representations","author":"Su","year":"2020"},
{"key":"2021092116472570100_bib43","doi-asserted-by":"crossref","first-page":"6418","DOI":"10.18653\/v1\/P19-1644","article-title":"A corpus for reasoning about natural language grounded in photographs","volume-title":"Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics","author":"Suhr","year":"2019"},
{"key":"2021092116472570100_bib44","doi-asserted-by":"publisher","first-page":"5100","DOI":"10.18653\/v1\/D19-1514","article-title":"LXMERT: Learning cross-modality encoder representations from transformers","volume-title":"Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)","author":"Tan","year":"2019"},
{"key":"2021092116472570100_bib45","first-page":"5998","article-title":"Attention is all you need","volume-title":"Advances in Neural Information Processing Systems","author":"Vaswani","year":"2017"},
{"key":"2021092116472570100_bib46","article-title":"Google\u2019s neural machine translation system: Bridging the gap between human and machine translation","author":"Wu","year":"2016","journal-title":"arXiv preprint arXiv:1609.08144"},
{"key":"2021092116472570100_bib47","article-title":"Visual entailment: A novel task for fine-grained image understanding","author":"Xie","year":"2019","journal-title":"arXiv preprint arXiv:1901.06706"},
{"key":"2021092116472570100_bib48","first-page":"5987","article-title":"Aggregated residual transformations for deep neural networks","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Xie","year":"2017"},
{"key":"2021092116472570100_bib49","article-title":"ERNIE-ViL: Knowledge enhanced vision-language representations through scene graph","author":"Yu","year":"2021","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},
{"key":"2021092116472570100_bib50","first-page":"1307","article-title":"MAttNet: Modular attention network for referring expression comprehension","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Yu","year":"2018"},
{"key":"2021092116472570100_bib51","doi-asserted-by":"publisher","first-page":"6713","DOI":"10.1109\/CVPR.2019.00688","article-title":"From recognition to cognition: Visual commonsense reasoning","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Zellers","year":"2019"},
{"issue":"07","key":"2021092116472570100_bib52","doi-asserted-by":"publisher","first-page":"13041","DOI":"10.1609\/aaai.v34i07.7005","article-title":"Unified vision-language pre-training for image captioning and VQA","volume":"34","author":"Zhou","year":"2020","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},
{"key":"2021092116472570100_bib53","doi-asserted-by":"publisher","first-page":"4995","DOI":"10.1109\/CVPR.2016.540","article-title":"Visual7W: Grounded question answering in images","volume-title":"2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Zhu","year":"2016"}],
"container-title":["Transactions of the Association for Computational Linguistics"],"original-title":[],"language":"en",
"link":[{"URL":"http:\/\/direct.mit.edu\/tacl\/article-pdf\/doi\/10.1162\/tacl_a_00408\/1963734\/tacl_a_00408.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"http:\/\/direct.mit.edu\/tacl\/article-pdf\/doi\/10.1162\/tacl_a_00408\/1963734\/tacl_a_00408.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],
"deposited":{"date-parts":[[2021,9,21]],"date-time":"2021-09-21T16:52:44Z","timestamp":1632243164000},"score":1,
"resource":{"primary":{"URL":"https:\/\/direct.mit.edu\/tacl\/article\/doi\/10.1162\/tacl_a_00408\/107279\/Multimodal-Pretraining-Unmasked-A-Meta-Analysis"}},
"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021]]},"references-count":53,"URL":"https:\/\/doi.org\/10.1162\/tacl_a_00408","relation":{},"ISSN":["2307-387X"],"issn-type":[{"value":"2307-387X","type":"electronic"}],"subject":[],"published-other":{"date-parts":[[2021]]},"published":{"date-parts":[[2021]]}}}