{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T04:13:10Z","timestamp":1750219990220,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":21,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,9,20]],"date-time":"2022-09-20T00:00:00Z","timestamp":1663632000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,9,20]]},"DOI":"10.1145\/3558100.3563843","type":"proceedings-article","created":{"date-parts":[[2022,11,18]],"date-time":"2022-11-18T18:03:30Z","timestamp":1668794610000},"page":"1-4","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":3,"title":["Triplet transformer network for multi-label document classification"],"prefix":"10.1145","author":[{"given":"Johannes","family":"Melsbach","sequence":"first","affiliation":[{"name":"University of Cologne, Cologne, Germany"}]},{"given":"Sven","family":"Stahlmann","sequence":"additional","affiliation":[{"name":"University of Cologne, Cologne, Germany"}]},{"given":"Stefan","family":"Hirschmeier","sequence":"additional","affiliation":[{"name":"University of Cologne, Cologne, Germany"}]},{"given":"Detlef","family":"Schoder","sequence":"additional","affiliation":[{"name":"University of Cologne, Cologne, Germany"}]}],"member":"320","published-online":{"date-parts":[[2022,11,18]]},"reference":[{"key":"e_1_3_2_1_1_1","first-page":"08398","article-title":"DocBERT: BERT for Document","volume":"1904","author":"Adhikari Ashutosh","year":"2019","unstructured":"Ashutosh Adhikari , Achyudh Ram , Raphael Tang , and Jimmy Lin . 2019 . DocBERT: BERT for Document Classification. Tech. rep. arXiv : 1904 . 08398 . arXiv, (Aug. 2019). http:\/\/arxiv.org\/abs\/1904.08398. 
Ashutosh Adhikari, Achyudh Ram, Raphael Tang, and Jimmy Lin. 2019. DocBERT: BERT for Document Classification. Tech. rep. arXiv:1904.08398. arXiv, (Aug. 2019). http:\/\/arxiv.org\/abs\/1904.08398.","journal-title":"Classification. Tech. rep. arXiv"},{"key":"e_1_3_2_1_2_1","volume-title":"Kyung Hyun Cho, and Yoshua Bengio","author":"Bahdanau Dzmitry","year":"2015","unstructured":"Dzmitry Bahdanau, Kyung Hyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. English (US). In (Jan. 2015)."},{"key":"e_1_3_2_1_3_1","volume-title":"Longformer: The Long-Document Transformer. CoRR, abs\/2004.05150. arXiv","author":"Beltagy Iz","year":"2020","unstructured":"Iz Beltagy, Matthew E. Peters, and Arman Cohan. 2020. Longformer: The Long-Document Transformer. CoRR, abs\/2004.05150. arXiv: 2004.05150. https:\/\/arxiv.org\/abs\/2004.05150."},{"key":"e_1_3_2_1_4_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_5_1","DOI":"10.18653\/v1\/P18-2009"},{"key":"e_1_3_2_1_6_1","volume-title":"Bag of Tricks for Efficient Text Classification. en, (Aug","author":"Joulin Armand","year":"2016","unstructured":"Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2016. Bag of Tricks for Efficient Text Classification. en, (Aug. 2016). arXiv:1607.01759 [cs]. http:\/\/arxiv.org\/abs\/1607.01759."},{"key":"e_1_3_2_1_7_1","first-page":"6980","article-title":"Adam: A Method for Stochastic","volume":"1412","author":"Kingma Diederik P.","year":"2017","unstructured":"Diederik P. Kingma and Jimmy Ba. 2017. Adam: A Method for Stochastic Optimization. Tech. rep. arXiv:1412.6980. arXiv:1412.6980 [cs] type: article. arXiv, (Jan. 2017). Retrieved June 6, 2022 from http:\/\/arxiv.org\/abs\/1412.6980.","journal-title":"Optimization. Tech. rep. arXiv"},{"key":"e_1_3_2_1_8_1","volume-title":"ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In 8th International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=H1eA7AEtvS.","author":"Lan Zhenzhong","year":"2020","unstructured":"Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2020. ALBERT: A Lite BERT for Self-supervised Learning of Language Representations. In 8th International Conference on Learning Representations. https:\/\/openreview.net\/forum?id=H1eA7AEtvS."},{"key":"e_1_3_2_1_9_1","volume-title":"International conference on machine learning. PMLR, 1188--1196","author":"Le Quoc","year":"2014","unstructured":"Quoc Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In International conference on machine learning. PMLR, 1188--1196."},{"key":"e_1_3_2_1_10_1","volume-title":"Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692.","author":"Yinhan Liu","year":"2019","unstructured":"Yinhan Liu et al. 2019. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692."},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_11_1","DOI":"10.3115\/v1\/D14-1162"},{"key":"e_1_3_2_1_12_1","first-page":"8","article-title":"Language models are unsupervised multitask learners","volume":"1","author":"Radford Alec","year":"2019","unstructured":"Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1, 8, 9. Number: 8.","journal-title":"OpenAI Blog"},{"key":"e_1_3_2_1_13_1","article-title":"Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer","volume":"21","author":"Raffel Colin","year":"2020","unstructured":"Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research, 21, 140, 1--67. http:\/\/jmlr.org\/papers\/v21\/20-074.html.","journal-title":"Journal of Machine Learning Research"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_14_1","DOI":"10.18653\/v1\/D19-1410"},{"unstructured":"Victor Sanh Lysandre Debut Julien Chaumond and Thomas Wolf. 2019. DistilBERT a distilled version of BERT: smaller faster cheaper and lighter. arXiv preprint arXiv:1910.01108.","key":"e_1_3_2_1_15_1"},{"doi-asserted-by":"publisher","key":"e_1_3_2_1_16_1","DOI":"10.1109\/CVPR.2015.7298682"},{"key":"e_1_3_2_1_17_1","first-page":"07120","article-title":"Super-Convergence: Very Fast Training of Neural Networks Using Large Learning","volume":"1708","author":"Smith Leslie N.","year":"2018","unstructured":"Leslie N. Smith and Nicholay Topin. 2018. Super-Convergence: Very Fast Training of Neural Networks Using Large Learning Rates. Tech. rep. arXiv:1708.07120. arXiv:1708.07120 [cs, stat] type: article. arXiv, (May 2018). Retrieved June 6, 2022 from http:\/\/arxiv.org\/abs\/1708.07120.","journal-title":"Rates. Tech. rep. arXiv"},{"key":"e_1_3_2_1_18_1","volume-title":"Advances in Neural Information Processing Systems","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. I. Guyon, U. Von Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Garnett, (Eds.) Vol. 30. Curran Associates, Inc. https:\/\/proceedings.neurips.cc\/paper\/2017\/file\/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf."},{"key":"e_1_3_2_1_19_1","volume-title":"Retrieved","author":"Wang Alex","year":"2022","unstructured":"Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. en. In (Sept. 2018). Retrieved June 14, 2022 from https:\/\/openreview.net\/forum?id=rJ4km2R5t7."},{"key":"e_1_3_2_1_20_1","volume-title":"Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32.","author":"Yang Zhilin","year":"2019","unstructured":"Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. Xlnet: Generalized autoregressive pretraining for language understanding. Advances in neural information processing systems, 32."},{"unstructured":"Hsiang-Fu Yu Kai Zhong Inderjit S. Dhillon Wei-Cheng Wang and Yiming Yang. 2019. X-bert: extreme multi-label text classification using bidirectional encoder representations from transformers. In NeurIPS 2019 Workshop on Science Meets Engineering of Deep Learning. https:\/\/www.amazon.science\/publications\/x-bert-extreme-multi-label-text-classification-using-bidirectional-encoder-representations-from-transformers.","key":"e_1_3_2_1_21_1"}],"event":{"sponsor":["SIGWEB ACM Special Interest Group on Hypertext, Hypermedia, and Web","SIGDOC ACM Special Interest Group on Systems Documentation"],"acronym":"DocEng '22","name":"DocEng '22: ACM Symposium on Document Engineering 2022","location":"San Jose California"},"container-title":["Proceedings of the 22nd ACM Symposium on Document Engineering"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3558100.3563843","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3558100.3563843","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:49:32Z","timestamp":1750182572000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3558100.3563843"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,20]]},"references-count":21,"alternative-id":["10.1145\/3558100.3563843","10.1145\/3558100"],"URL":"https:\/\/doi.org\/10.1145\/3558100.3563843","relation":{},"subject":[],"published":{"date-parts":[[2022,9,20]]},"assertion":[{"value":"2022-11-18","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}