{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:23:16Z","timestamp":1750220596162,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":11,"publisher":"ACM","license":[{"start":{"date-parts":[[2020,9,26]],"date-time":"2020-09-26T00:00:00Z","timestamp":1601078400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2020,9,26]]},"DOI":"10.1145\/3415959.3416000","type":"proceedings-article","created":{"date-parts":[[2020,9,25]],"date-time":"2020-09-25T15:00:13Z","timestamp":1601046013000},"page":"38-43","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Predicting Twitter Engagement With Deep Language Models"],"prefix":"10.1145","author":[{"given":"Maksims","family":"Volkovs","sequence":"first","affiliation":[{"name":"Layer 6 AI, Canada"}]},{"given":"Zhaoyue","family":"Cheng","sequence":"additional","affiliation":[{"name":"Layer 6 AI, Canada"}]},{"given":"Mathieu","family":"Ravaut","sequence":"additional","affiliation":[{"name":"Layer 6 AI, Canada"}]},{"given":"Hojin","family":"Yang","sequence":"additional","affiliation":[{"name":"University of Toronto, Canada"}]},{"given":"Kevin","family":"Shen","sequence":"additional","affiliation":[{"name":"Layer 6 AI, Canada"}]},{"given":"Jin Peng","family":"Zhou","sequence":"additional","affiliation":[{"name":"University of Toronto, Canada"}]},{"given":"Anson","family":"Wong","sequence":"additional","affiliation":[{"name":"Layer 6 AI, Canada"}]},{"given":"Saba","family":"Zuberi","sequence":"additional","affiliation":[{"name":"Layer 6 AI, 
Canada"}]},{"given":"Ivan","family":"Zhang","sequence":"additional","affiliation":[{"name":"Cohere"}]},{"given":"Nick","family":"Frosst","sequence":"additional","affiliation":[{"name":"Cohere"}]},{"given":"Helen","family":"Ngo","sequence":"additional","affiliation":[{"name":"Cohere"}]},{"given":"Carol","family":"Chen","sequence":"additional","affiliation":[{"name":"Cohere"}]},{"given":"Bharat","family":"Venkitesh","sequence":"additional","affiliation":[{"name":"Cohere"}]},{"given":"Stephen","family":"Gou","sequence":"additional","affiliation":[{"name":"Cohere"}]},{"given":"Aidan N.","family":"Gomez","sequence":"additional","affiliation":[{"name":"Cohere AI, Canada"}]}],"member":"320","published-online":{"date-parts":[[2020,9,26]]},"reference":[{"key":"e_1_3_2_1_1_1","unstructured":"Luca Belli Sofia\u00a0Ira Ktena Alykhan Tejani Alexandre Lung-Yut-Fon Frank Portman Xiao Zhu Yuanpu Xie Akshay Gupta Michael Bronstein Amra Deli\u0107 2020. Privacy-Preserving Recommender Systems Challenge on Twitter\u2019s Home Timeline. arXiv preprint arXiv:2004.13715(2020)."},{"key":"e_1_3_2_1_2_1","volume-title":"International Conference on Learning Representations.","author":"Clark Kevin","year":"2020","unstructured":"Kevin Clark, Minh-Thang Luong, Quoc\u00a0V Le, and Christopher\u00a0D Manning. 2020. Electra: Pre-training text encoders as discriminators rather than generators. In International Conference on Learning Representations."},
{"volume-title":"Unsupervised cross-lingual representation learning at scale","author":"Conneau Alexis","key":"e_1_3_2_1_3_1","unstructured":"Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Unsupervised cross-lingual representation learning at scale. In Association for Computational Linguistics."},{"key":"e_1_3_2_1_4_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. In Association for Computational Linguistics."},{"key":"e_1_3_2_1_5_1","volume-title":"Ninth International AAAI Conference on Web and Social Media.","author":"Hu Yuheng","year":"2015","unstructured":"Yuheng Hu, Shelly Farnham, and Kartik Talamadupula. 2015. Predicting user engagement on twitter with real-world events. In Ninth International AAAI Conference on Web and Social Media."},
{"key":"e_1_3_2_1_6_1","volume-title":"International Conference on Learning Representations.","author":"Lan Zhenzhong","year":"2020","unstructured":"Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. Albert: A lite Bert for self-supervised learning of language representations. In International Conference on Learning Representations."},{"key":"e_1_3_2_1_7_1","volume-title":"Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692(2019).","author":"Liu Yinhan","year":"2019","unstructured":"Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692(2019)."},{"key":"e_1_3_2_1_8_1","unstructured":"Ilya Loshchilov and Frank Hutter. 2017. Decoupled weight decay regularization. arXiv preprint arXiv:1711.05101(2017)."},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/371920.372071"},{"key":"e_1_3_2_1_10_1","unstructured":"Ashish Vaswani Noam Shazeer Niki Parmar Jakob Uszkoreit Llion Jones Aidan\u00a0N Gomez \u0141ukasz Kaiser and Illia Polosukhin. 2017. Attention is all you need. In Neural Information Processing Systems."},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1093\/biomet\/87.4.954"}],
"event":{"name":"RecSys Challenge '20: Proceedings of the Recommender Systems Challenge 2020","acronym":"RecSys Challenge '20","location":"Virtual Event Brazil"},"container-title":["Proceedings of the Recommender Systems Challenge 2020"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3415959.3416000","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3415959.3416000","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T21:31:53Z","timestamp":1750195913000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3415959.3416000"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,9,26]]},"references-count":11,"alternative-id":["10.1145\/3415959.3416000","10.1145\/3415959"],"URL":"https:\/\/doi.org\/10.1145\/3415959.3416000","relation":{},"subject":[],"published":{"date-parts":[[2020,9,26]]},"assertion":[{"value":"2020-09-26","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}