{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,10]],"date-time":"2026-03-10T09:50:42Z","timestamp":1773136242086,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":49,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,4,8]],"date-time":"2021-04-08T00:00:00Z","timestamp":1617840000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"NIH","award":["NIH\/NIBIB R01-EB025020"],"award-info":[{"award-number":["NIH\/NIBIB R01-EB025020"]}]},{"DOI":"10.13039\/100000893","name":"Simons Foundation","doi-asserted-by":"publisher","award":["400837"],"award-info":[{"award-number":["400837"]}],"id":[{"id":"10.13039\/100000893","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,4,8]]},"DOI":"10.1145\/3450439.3451856","type":"proceedings-article","created":{"date-parts":[[2021,3,23]],"date-time":"2021-03-23T22:25:27Z","timestamp":1616538327000},"page":"14-24","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Affinitention nets"],"prefix":"10.1145",
"author":[{"given":"David","family":"Dov","sequence":"first","affiliation":[{"name":"Duke University"}]},{"given":"Serge","family":"Assaad","sequence":"additional","affiliation":[{"name":"Duke University"}]},{"given":"Shijing","family":"Si","sequence":"additional","affiliation":[{"name":"Duke University"}]},{"given":"Rui","family":"Wang","sequence":"additional","affiliation":[{"name":"Duke University"}]},{"given":"Hongteng","family":"Xu","sequence":"additional","affiliation":[{"name":"Renmin University of China, Beijing, China"}]},{"given":"Shahar Ziv","family":"Kovalsky","sequence":"additional","affiliation":[{"name":"University of North Carolina"}]},{"given":"Jonathan","family":"Bell","sequence":"additional","affiliation":[{"name":"Duke University Medical Center"}]},{"given":"Danielle Elliott","family":"Range","sequence":"additional","affiliation":[{"name":"Duke University Medical Center"}]},{"given":"Jonathan","family":"Cohen","sequence":"additional","affiliation":[{"name":"Kaplan Medical Center, Rehovot, Israel"}]},{"given":"Ricardo","family":"Henao","sequence":"additional","affiliation":[{"name":"Duke University"}]},{"given":"Lawrence","family":"Carin","sequence":"additional","affiliation":[{"name":"Duke University"}]}],"member":"320","published-online":{"date-parts":[[2021,4,8]]},
"reference":[
{"key":"e_1_3_2_1_1_1","volume-title":"N-gcn: Multi-scale graph convolution for semi-supervised node classification. In Uncertainty in Artificial Intelligence. PMLR, 841--851.","author":"Abu-El-Haija Sami","year":"2020","unstructured":"Sami Abu-El-Haija, Amol Kapoor, Bryan Perozzi, and Joonseok Lee. 2020. N-gcn: Multi-scale graph convolution for semi-supervised node classification. In Uncertainty in Artificial Intelligence. PMLR, 841--851."},
{"key":"e_1_3_2_1_2_1","doi-asserted-by":"crossref","unstructured":"Nicolas Carion, Francisco Massa, Gabriel Synnaeve, Nicolas Usunier, Alexander Kirillov, and Sergey Zagoruyko. 2020. End-to-End Object Detection with Transformers. arXiv:2005.12872 [cs.CV]","DOI":"10.1007\/978-3-030-58452-8_13"},
{"key":"e_1_3_2_1_3_1","volume-title":"Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509","author":"Child Rewon","year":"2019","unstructured":"Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. 2019. Generating long sequences with sparse transformers. arXiv preprint arXiv:1904.10509 (2019)."},
{"key":"e_1_3_2_1_4_1","unstructured":"Krzysztof Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamas Sarlos, Peter Hawkins, Jared Davis, Afroz Mohiuddin, Lukasz Kaiser, et al. 2020. Rethinking attention with performers. arXiv preprint arXiv:2009.14794 (2020)."},
{"key":"e_1_3_2_1_5_1","doi-asserted-by":"crossref","unstructured":"R. R. Coifman and S. Lafon. 2006. Diffusion maps. Applied and Computational Harmonic Analysis 21, 1 (2006), 5--30.","DOI":"10.1016\/j.acha.2006.04.006"},
{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-009-5157-z"},
{"key":"e_1_3_2_1_7_1","volume-title":"Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805","author":"Devlin Jacob","year":"2018","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)."},
{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-28931-6_31"},
{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.media.2020.101814"},
{"key":"e_1_3_2_1_10_1","volume-title":"Machine Learning for Healthcare Conference. PMLR, 553--570","author":"Dov David","year":"2019","unstructured":"David Dov, Shahar Z Kovalsky, Jonathan Cohen, Danielle Elliott Range, Ricardo Henao, and Lawrence Carin. 2019. Thyroid cancer malignancy prediction from whole slide cytopathology images. In Machine Learning for Healthcare Conference. PMLR, 553--570."},
{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1109\/TSP.2016.2605068"},
{"key":"e_1_3_2_1_12_1","volume-title":"Application of a machine learning algorithm to predict malignancy in thyroid cytopathology. Cancer Cytopathology 128, 4","author":"Range Danielle Elliott","year":"2020","unstructured":"Danielle Elliott Range, David Dov, Shahar Z Kovalsky, Ricardo Henao, Lawrence Carin, and Jonathan Cohen. 2020. Application of a machine learning algorithm to predict malignancy in thyroid cytopathology. Cancer Cytopathology 128, 4 (2020), 287--295."},
{"key":"e_1_3_2_1_13_1","volume-title":"Regularization networks and support vector machines. Advances in Computational Mathematics 13, 1","author":"Evgeniou Theodoros","year":"2000","unstructured":"Theodoros Evgeniou, Massimiliano Pontil, and Tomaso Poggio. 2000. Regularization networks and support vector machines. Advances in Computational Mathematics 13, 1 (2000), 1."},
{"key":"e_1_3_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.5555\/3295222.3295399"},
{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00033"},
{"key":"e_1_3_2_1_16_1","volume-title":"Proceedings of the IEEE conference on computer vision and pattern recognition. 770--778","author":"He K.","unstructured":"K. He, X. Zhang, S. Ren, and J. Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770--778."},
{"key":"e_1_3_2_1_17_1","volume-title":"Attention-based deep multiple instance learning. arXiv preprint arXiv:1802.04712 (ICML18)","author":"Ilse Maximilian","year":"2018","unstructured":"Maximilian Ilse, Jakub M Tomczak, and Max Welling. 2018. Attention-based deep multiple instance learning. arXiv preprint arXiv:1802.04712 (ICML18) (2018)."},
{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.3115\/v1\/D14-1181"},
{"key":"e_1_3_2_1_19_1","unstructured":"T. N. Kipf and M. Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016)."},
{"key":"e_1_3_2_1_20_1","volume-title":"Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451","author":"Kitaev Nikita","year":"2020","unstructured":"Nikita Kitaev, \u0141ukasz Kaiser, and Anselm Levskaya. 2020. Reformer: The efficient transformer. arXiv preprint arXiv:2001.04451 (2020)."},
{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1093\/bioinformatics\/btw252"},
{"key":"e_1_3_2_1_22_1","unstructured":"A. Krizhevsky, G. Hinton, et al. 2009. Learning multiple layers of features from tiny images. Technical Report. Citeseer."},
{"key":"e_1_3_2_1_23_1","volume-title":"Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942","author":"Lan Zhenzhong","year":"2019","unstructured":"Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942 (2019)."},
{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.acha.2015.09.002"},
{"key":"e_1_3_2_1_25_1","volume-title":"Set transformer: A framework for attention-based permutation-invariant neural networks. arXiv preprint arXiv:1810.00825","author":"Lee Juho","year":"2018","unstructured":"Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R Kosiorek, Seungjin Choi, and Yee Whye Teh. 2018. Set transformer: A framework for attention-based permutation-invariant neural networks. arXiv preprint arXiv:1810.00825 (2018)."},
{"key":"e_1_3_2_1_26_1","doi-asserted-by":"crossref","first-page":"1234","DOI":"10.1093\/bioinformatics\/btz682","article-title":"BioBERT: a pre-trained biomedical language representation model for biomedical text mining","volume":"36","author":"Lee Jinhyuk","year":"2020","unstructured":"Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36, 4 (2020), 1234--1240.","journal-title":"Bioinformatics"},
{"key":"e_1_3_2_1_27_1","volume-title":"Lanczosnet: Multi-scale deep graph convolutional networks. arXiv preprint arXiv:1901.01484","author":"Liao Renjie","year":"2019","unstructured":"Renjie Liao, Zhizhen Zhao, Raquel Urtasun, and Richard S Zemel. 2019. Lanczosnet: Multi-scale deep graph convolutional networks. arXiv preprint arXiv:1901.01484 (2019)."},
{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2019.08.005"},
{"key":"e_1_3_2_1_29_1","volume-title":"Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen Awm Van Der Laak, Bram Van Ginneken, and Clara I S\u00e1nchez.","author":"Litjens Geert","year":"2017","unstructured":"Geert Litjens, Thijs Kooi, Babak Ehteshami Bejnordi, Arnaud Arindra Adiyoso Setio, Francesco Ciompi, Mohsen Ghafoorian, Jeroen Awm Van Der Laak, Bram Van Ginneken, and Clara I S\u00e1nchez. 2017. A survey on deep learning in medical image analysis. Medical Image Analysis 42 (2017), 60--88."},
{"key":"e_1_3_2_1_30_1","volume-title":"Object-centric learning with slot attention. arXiv preprint arXiv:2006.15055","author":"Locatello Francesco","year":"2020","unstructured":"Francesco Locatello, Dirk Weissenborn, Thomas Unterthiner, Aravindh Mahendran, Georg Heigold, Jakob Uszkoreit, Alexey Dosovitskiy, and Thomas Kipf. 2020. Object-centric learning with slot attention. arXiv preprint arXiv:2006.15055 (2020)."},
{"key":"e_1_3_2_1_31_1","series-title":"Series B (Methodological)","volume-title":"Regression models for ordinal data. Journal of the royal statistical society","author":"McCullagh P.","year":"1980","unstructured":"P. McCullagh. 1980. Regression models for ordinal data. Journal of the Royal Statistical Society. Series B (Methodological) (1980), 109--142."},
{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.5555\/3045390.3045598"},
{"key":"e_1_3_2_1_33_1","doi-asserted-by":"crossref","unstructured":"G. Quellec, G. Cazuguel, B. Cochener, and M. Lamard. 2017. Multiple-instance learning for medical image and video analysis. IEEE Reviews in Biomedical Engineering 10 (2017), 213--234.","DOI":"10.1109\/RBME.2017.2651164"},
{"key":"e_1_3_2_1_34_1","unstructured":"Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training."},
{"key":"e_1_3_2_1_35_1","volume-title":"Sketchformer: Transformer-Based Representation for Sketched Structure. In IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR).","author":"Ferraz Ribeiro Leo Sampaio","year":"2020","unstructured":"Leo Sampaio Ferraz Ribeiro, Tu Bui, John Collomosse, and Moacir Ponti. 2020. Sketchformer: Transformer-Based Representation for Sketched Structure. In IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)."},
{"key":"e_1_3_2_1_36_1","unstructured":"Y. Shi, J. Oliva, and M. Niethammer. 2019. Deep Message Passing on Sets. arXiv preprint arXiv:1909.09877 (2019)."},
{"key":"e_1_3_2_1_37_1","volume-title":"Students Need More Attention: BERT-based Attention Model for Small Data with Application to Automatic Patient Message Triage (Proceedings of Machine Learning Research","author":"Si Shijing","unstructured":"Shijing Si, Rui Wang, Jedrek Wosik, Hao Zhang, David Dov, Guoyin Wang, and Lawrence Carin. 2020. Students Need More Attention: BERT-based Attention Model for Small Data with Application to Automatic Patient Message Triage (Proceedings of Machine Learning Research, Vol. 126), Finale Doshi-Velez, Jim Fackler, Ken Jung, David Kale, Rajesh Ranganath, Byron Wallace, and Jenna Wiens (Eds.). PMLR, Virtual, 436--456. http:\/\/proceedings.mlr.press\/v126\/si20a.html"},
{"key":"e_1_3_2_1_38_1","unstructured":"Y. Tian, L. Zhao, X. Peng, and D. Metaxas. 2019. Rethinking kernel methods for node representation learning on graphs. In Advances in Neural Information Processing Systems. 11681--11692."},
{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1162\/153244302760185243"},
{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.5555\/3295222.3295349"},
{"key":"e_1_3_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.5555\/2354409.2354754"},
{"key":"e_1_3_2_1_42_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00813"},
{"key":"e_1_3_2_1_43_1","unstructured":"Z. Wu, Shirui Pan, Fengwen Chen, G. Long, C. Zhang, and P. S. Yu. 2019. A comprehensive survey on graph neural networks. arXiv preprint arXiv:1901.00596 (2019)."},
{"key":"e_1_3_2_1_44_1","volume-title":"Garnett (Eds.)","volume":"32","author":"Yang Zhilin","year":"2019","unstructured":"Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Russ R Salakhutdinov, and Quoc V Le. 2019. XLNet: Generalized Autoregressive Pretraining for Language Understanding. In Advances in Neural Information Processing Systems, H. Wallach, H. Larochelle, A. Beygelzimer, F. d'Alch\u00e9-Buc, E. Fox, and R. Garnett (Eds.), Vol. 32. Curran Associates, Inc., 5753--5763. https:\/\/proceedings.neurips.cc\/paper\/2019\/file\/dc6a7e655d7e5840e66733e9ee67cc69-Paper.pdf"},
{"key":"e_1_3_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/N16-1174"},
{"key":"e_1_3_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.01151"},
{"key":"e_1_3_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.5555\/3294996.3295098"},
{"key":"e_1_3_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.5555\/2976248.2976426"},
{"key":"e_1_3_2_1_49_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D16-1024"}
],
"event":{"name":"ACM CHIL '21: ACM Conference on Health, Inference, and Learning","location":"Virtual Event USA","acronym":"ACM CHIL '21","sponsor":["ACM Association for Computing Machinery"]},"container-title":["Proceedings of the Conference on Health, Inference, and Learning"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3450439.3451856","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3450439.3451856","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:18:44Z","timestamp":1750191524000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3450439.3451856"}},"subtitle":["kernel perspective on attention architectures for set classification with applications to medical text and images"],"short-title":[],"issued":{"date-parts":[[2021,4,8]]},"references-count":49,"alternative-id":["10.1145\/3450439.3451856","10.1145\/3450439"],"URL":"https:\/\/doi.org\/10.1145\/3450439.3451856","relation":{},"subject":[],"published":{"date-parts":[[2021,4,8]]},"assertion":[{"value":"2021-04-08","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}