{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,9]],"date-time":"2026-01-09T13:26:51Z","timestamp":1767965211263,"version":"3.49.0"},"reference-count":72,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2023,6,16]],"date-time":"2023-06-16T00:00:00Z","timestamp":1686873600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001809","name":"NSFC","doi-asserted-by":"crossref","award":["12271215 and 11871248"],"award-info":[{"award-number":["12271215 and 11871248"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100021171","name":"Guangdong Basic and Applied Basic Research Foundation","doi-asserted-by":"crossref","award":["2021A1515010857, 2022A1515010029"],"award-info":[{"award-number":["2021A1515010857, 2022A1515010029"]}],"id":[{"id":"10.13039\/501100021171","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100004543","name":"China Scholarship Council","doi-asserted-by":"crossref","award":["202206780011"],"award-info":[{"award-number":["202206780011"]}],"id":[{"id":"10.13039\/501100004543","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Outstanding Innovative Talents Cultivation Funded Programs for Doctoral Students of Jinan University","award":["2022CXB013"],"award-info":[{"award-number":["2022CXB013"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Asian Low-Resour. Lang. Inf. Process."],"published-print":{"date-parts":[[2023,6,30]]},"abstract":"<jats:p>The Chinese radiology report summarization is a crucial component in smart healthcare that employs language models to summarize key findings in radiology reports and communicate these findings to physicians. 
However, most language models for radiology report summarization utilize a softmax transformation in their output layer, leading to dense alignments and strictly positive output probabilities. This density is inefficient, reducing model interpretability and giving probability mass to many unrealistic outputs. To tackle this issue, we propose a novel approach named nucleusmax, which mitigates dense outputs and improves model interpretability by truncating the unreliable tail of the probability distribution. In addition, we combine nucleusmax with a copy mechanism, a useful technique for avoiding professional errors in the generated diagnostic opinions. To further promote research on radiology report summarization, we have also created a freely available Chinese radiology report summarization dataset. Experimental results from both automatic and human evaluation show that the proposed approach substantially improves the sparsity and overall quality of outputs over competitive softmax models, producing radiology summaries that approach the quality of those authored by physicians. 
In general, our work demonstrates the feasibility and prospect of the language model to the domain of radiology and smart healthcare.<\/jats:p>","DOI":"10.1145\/3596219","type":"journal-article","created":{"date-parts":[[2023,5,13]],"date-time":"2023-05-13T11:32:11Z","timestamp":1683977531000},"page":"1-21","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":23,"title":["From Softmax to Nucleusmax: A Novel Sparse Language Model for Chinese Radiology Report Summarization"],"prefix":"10.1145","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5174-5182","authenticated-orcid":false,"given":"Shuai","family":"Zhao","sequence":"first","affiliation":[{"name":"Jinan University, China and Nanyang Technological University, Singapore"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1705-1045","authenticated-orcid":false,"given":"Qing","family":"Li","sequence":"additional","affiliation":[{"name":"Jinan University, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6489-286X","authenticated-orcid":false,"given":"Yuer","family":"Yang","sequence":"additional","affiliation":[{"name":"Jinan University, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9277-6038","authenticated-orcid":false,"given":"Jinming","family":"Wen","sequence":"additional","affiliation":[{"name":"Jinan University, China and Pazhou Lab, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5605-7397","authenticated-orcid":false,"given":"Weiqi","family":"Luo","sequence":"additional","affiliation":[{"name":"Jinan University, China"}]}],"member":"320","published-online":{"date-parts":[[2023,6,16]]},"reference":[{"key":"e_1_3_2_2_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.bionlp-1.8"},{"key":"e_1_3_2_3_2","volume-title":"Proc. of NAACL","author":"Adams Griffin","year":"2021","unstructured":"Griffin Adams, Emily Alsentzer, Mert Ketenci, Jason Zucker, and No\u00e9mie Elhadad. 2021. What\u2019s in a summary? 
Laying the groundwork for advances in hospital-course summarization. In Proc. of NAACL."},{"key":"e_1_3_2_4_2","article-title":"Latent Dirichlet allocation","author":"Blei David M.","year":"2003","unstructured":"David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. The Journal of Machine Learning Research (2003).","journal-title":"The Journal of Machine Learning Research"},{"key":"e_1_3_2_5_2","article-title":"Learning with Fenchel-Young losses.","author":"Blondel Mathieu","year":"2020","unstructured":"Mathieu Blondel, Andr\u00e9 F. T. Martins, and Vlad Niculae. 2020. Learning with Fenchel-Young losses. J. Mach. Learn. Res. (2020).","journal-title":"J. Mach. Learn. Res."},{"key":"e_1_3_2_6_2","volume-title":"Neurocomputing","author":"Bridle John S.","year":"1990","unstructured":"John S. Bridle. 1990. Probabilistic interpretation of feedforward classification network outputs, with relationships to statistical pattern recognition. In Neurocomputing."},{"key":"e_1_3_2_7_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.media.2020.101797"},{"key":"e_1_3_2_8_2","article-title":"ChestXRayBERT: A pretrained language model for chest radiology report summarization","author":"Cai Xiaoyan","year":"2021","unstructured":"Xiaoyan Cai, Sen Liu, Junwei Han, Libin Yang, Zhenguo Liu, and Tianming Liu. 2021. ChestXRayBERT: A pretrained language model for chest radiology report summarization. IEEE Transactions on Multimedia (2021).","journal-title":"IEEE Transactions on Multimedia"},{"key":"e_1_3_2_9_2","article-title":"Microsoft COCO captions: Data collection and evaluation server","author":"Chen Xinlei","year":"2015","unstructured":"Xinlei Chen, Hao Fang, Tsung-Yi Lin, Ramakrishna Vedantam, Piotr Gupta, and C. Lawrence Zitnick. 2015. Microsoft COCO captions: Data collection and evaluation server. 
IEEE Conference on Computer Vision and Pattern Recognition (2015).","journal-title":"IEEE Conference on Computer Vision and Pattern Recognition"},{"key":"e_1_3_2_10_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/P18-1063"},{"key":"e_1_3_2_11_2","volume-title":"Proc. of EMNLP","author":"Chen Zhihong","year":"2020","unstructured":"Zhihong Chen, Yan Song, Tsung-Hui Chang, and Xiang Wan. 2020. Generating radiology reports via memory-driven transformer. In Proc. of EMNLP."},{"key":"e_1_3_2_12_2","volume-title":"Proc. of CVPR","author":"Cornia Marcella","year":"2020","unstructured":"Marcella Cornia, Matteo Stefanini, Lorenzo Baraldi, and Rita Cucchiara. 2020. Meshed-memory transformer for image captioning. In Proc. of CVPR."},{"key":"e_1_3_2_13_2","volume-title":"Proc. of EMNLP","author":"Cui Yiming","year":"2020","unstructured":"Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Shijin Wang, and Guoping Hu. 2020. Revisiting pre-trained models for Chinese natural language processing. In Proc. of EMNLP."},{"key":"e_1_3_2_14_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.dib.2020.106056"},{"key":"e_1_3_2_15_2","doi-asserted-by":"publisher","DOI":"10.1093\/jamia\/ocv080"},{"key":"e_1_3_2_16_2","volume-title":"Proc. of CVPR","author":"Deng Jia","year":"2009","unstructured":"Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In Proc. of CVPR."},{"key":"e_1_3_2_17_2","volume-title":"Proc. of CVPR","author":"Deng Jiankang","year":"2019","unstructured":"Jiankang Deng, Jia Guo, Niannan Xue, and Stefanos Zafeiriou. 2019. Arcface: Additive angular margin loss for deep face recognition. In Proc. of CVPR."},{"key":"e_1_3_2_18_2","volume-title":"Proc. of NAACL","author":"Devlin Jacob","year":"2019","unstructured":"Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proc. 
of NAACL."},{"key":"e_1_3_2_19_2","volume-title":"Proc. of ICONIP","author":"Dong Li","year":"2019","unstructured":"Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. In Proc. of ICONIP."},{"key":"e_1_3_2_20_2","volume-title":"Seminars in Ultrasound, CT and MRI","author":"Donnelly Lane F.","year":"2022","unstructured":"Lane F. Donnelly, Robert Grzeszczuk, and Carolina V. Guimaraes. 2022. Use of natural language processing (NLP) in evaluation of radiology reports: An update on applications and technology advances. In Seminars in Ultrasound, CT and MRI."},{"key":"e_1_3_2_21_2","unstructured":"Yongping Du Yiliang Zhao Jingya Yan and Qingxiao Li. 2022. UGDAS: Unsupervised graph-network based denoiser for abstractive summarization in biomedical domain. (2022)."},{"key":"e_1_3_2_22_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/D19-5409"},{"key":"e_1_3_2_23_2","volume-title":"Proc. of ICLR","author":"Holtzman Ari","year":"2019","unstructured":"Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. In Proc. of ICLR."},{"key":"e_1_3_2_24_2","volume-title":"Proc. of EMNLP","author":"Hu Baotian","year":"2015","unstructured":"Baotian Hu, Qingcai Chen, and Fangze Zhu. 2015. LCSTS: A large scale Chinese short text summarization dataset. In Proc. of EMNLP."},{"key":"e_1_3_2_25_2","volume-title":"Proc. of ACL","author":"Hu Jinpeng","year":"2022","unstructured":"Jinpeng Hu, Zhuo Li, Zhihong Chen, Zhen Li, Xiang Wan, and Tsung-Hui Chang. 2022. Graph enhanced contrastive learning for radiology findings summarization. In Proc. of ACL."},{"key":"e_1_3_2_26_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2020.03.080"},{"key":"e_1_3_2_27_2","doi-asserted-by":"publisher","DOI":"10.1038\/s41597-019-0322-0"},{"key":"e_1_3_2_28_2","volume-title":"Proc. 
of ACL","author":"Karn Sanjeev Kumar","year":"2022","unstructured":"Sanjeev Kumar Karn, Ning Liu, Hinrich Sch\u00fctze, and Oladimeji Farri. 2022. Differentiable multi-agent actor-critic for multi-step radiology report summarization. In Proc. of ACL."},{"key":"e_1_3_2_29_2","unstructured":"Navdeep Kaur Ajay Mittal and Gurprem Singh. 2022. Methods for automatic generation of radiological reports of chest radiographs: A comprehensive survey. (2022)."},{"key":"e_1_3_2_30_2","volume-title":"ICLR (Poster)","author":"Kingma Diederik P.","year":"2015","unstructured":"Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR (Poster)."},{"key":"e_1_3_2_31_2","volume-title":"Proc. of NAACL","author":"Lin Chin-Yew","year":"2003","unstructured":"Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proc. of NAACL."},{"key":"e_1_3_2_32_2","article-title":"Auto-encoding knowledge graph for unsupervised medical report generation","author":"Liu Fenglin","year":"2021","unstructured":"Fenglin Liu, Chenyu You, Xian Wu, Shen Ge, Xu Sun, et\u00a0al. 2021. Auto-encoding knowledge graph for unsupervised medical report generation. Proc. of NeurIPS (2021).","journal-title":"Proc. of NeurIPS"},{"key":"e_1_3_2_33_2","volume-title":"Proc. of CVPR","author":"Liu Weiyang","year":"2017","unstructured":"Weiyang Liu, Yandong Wen, Zhiding Yu, Ming Li, Bhiksha Raj, and Le Song. 2017. Sphereface: Deep hypersphere embedding for face recognition. In Proc. of CVPR."},{"key":"e_1_3_2_34_2","volume-title":"Proc. of ICML","author":"Liu Weiyang","year":"2016","unstructured":"Weiyang Liu, Yandong Wen, Zhiding Yu, and Meng Yang. 2016. Large-margin softmax loss for convolutional neural networks. In Proc. of ICML."},{"key":"e_1_3_2_35_2","volume-title":"Proc. of ACL Findings","author":"Liu Xuebo","year":"2021","unstructured":"Xuebo Liu, Longyue Wang, Derek F. Wong, Liang Ding, Lidia S. Chao, Shuming Shi, and Zhaopeng Tu. 
2021. On the copying behaviors of pre-training for neural machine translation. In Proc. of ACL Findings."},{"key":"e_1_3_2_36_2","article-title":"Fine-tune BERT for extractive summarization","author":"Liu Yang","year":"2019","unstructured":"Yang Liu. 2019. Fine-tune BERT for extractive summarization. arXiv preprint arXiv:1903.10318 (2019).","journal-title":"arXiv preprint arXiv:1903.10318"},{"key":"e_1_3_2_37_2","volume-title":"Proc. of CVPR","author":"Lu Jiasen","year":"2017","unstructured":"Jiasen Lu, Caiming Xiong, Devi Parikh, and Richard Socher. 2017. Knowing when to look: Adaptive attention via a visual sentinel for image captioning. In Proc. of CVPR."},{"key":"e_1_3_2_38_2","volume-title":"Proc. of EMNLP","author":"Lu Yao","year":"2020","unstructured":"Yao Lu, Yue Dong, and Laurent Charlin. 2020. Multi-XScience: A large-scale dataset for extreme multi-document summarization of scientific articles. In Proc. of EMNLP."},{"key":"e_1_3_2_39_2","volume-title":"Proc. of SIGIR","author":"MacAvaney Sean","year":"2019","unstructured":"Sean MacAvaney, Sajad Sotudeh, Arman Cohan, Nazli Goharian, Ish Talati, and Ross W. Filice. 2019. Ontology-aware clinical abstractive summarization. In Proc. of SIGIR."},{"key":"e_1_3_2_40_2","volume-title":"Proc. of ICML","author":"Martins Andre","year":"2016","unstructured":"Andre Martins and Ramon Astudillo. 2016. From softmax to sparsemax: A sparse model of attention and multi-label classification. In Proc. of ICML."},{"key":"e_1_3_2_41_2","volume-title":"Proc. of EMNLP","author":"Martins Pedro Henrique","year":"2020","unstructured":"Pedro Henrique Martins, Zita Marinho, and Andr\u00e9 F. T. Martins. 2020. Sparse text generation. In Proc. of EMNLP."},{"key":"e_1_3_2_42_2","volume-title":"Proc. of EMNLP","author":"Mihalcea Rada","year":"2004","unstructured":"Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proc. of EMNLP."},{"key":"e_1_3_2_43_2","volume-title":"Proc. 
of CoNLL","author":"Nallapati Ramesh","year":"2016","unstructured":"Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, \u00c7a\u011flar Gu\u0307l\u00e7ehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proc. of CoNLL."},{"key":"e_1_3_2_44_2","volume-title":"Proc. of ICONIP","author":"Niculae Vlad","year":"2017","unstructured":"Vlad Niculae and Mathieu Blondel. 2017. A regularized framework for sparse and structured neural attention. In Proc. of ICONIP."},{"key":"e_1_3_2_45_2","volume-title":"Proc. of ICML","author":"Niculae Vlad","year":"2018","unstructured":"Vlad Niculae, Andre Martins, Mathieu Blondel, and Claire Cardie. 2018. SparseMAP: Differentiable sparse structured inference. In Proc. of ICML."},{"key":"e_1_3_2_46_2","doi-asserted-by":"publisher","DOI":"10.2967\/jnumed.112.112177"},{"key":"e_1_3_2_47_2","volume-title":"Proc. of ACL","author":"Papineni Kishore","year":"2002","unstructured":"Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proc. of ACL."},{"key":"e_1_3_2_48_2","volume-title":"Proc. of ACL","author":"Peters Ben","year":"2019","unstructured":"Ben Peters, Vlad Niculae, and Andr\u00e9 F. T. Martins. 2019. Sparse sequence-to-sequence models. In Proc. of ACL."},{"key":"e_1_3_2_49_2","doi-asserted-by":"crossref","unstructured":"Steven T. Piantadosi. 2014. Zipf\u2019s word frequency law in natural language: A critical review and future directions. (2014).","DOI":"10.3758\/s13423-014-0585-6"},{"key":"e_1_3_2_50_2","volume-title":"Proc. of EMNLP","author":"Pilault Jonathan","year":"2020","unstructured":"Jonathan Pilault, Raymond Li, Sandeep Subramanian, and Christopher Pal. 2020. On extractive and abstractive neural document summarization with transformer language models. In Proc. of EMNLP."},{"key":"e_1_3_2_51_2","volume-title":"Proc. of CVPR","author":"Rennie Steven J.","year":"2017","unstructured":"Steven J. 
Rennie, Etienne Marcheret, Youssef Mroueh, Jerret Ross, and Vaibhava Goel. 2017. Self-critical sequence training for image captioning. In Proc. of CVPR."},{"key":"e_1_3_2_52_2","volume-title":"Proc. of ACL","author":"See Abigail","year":"2017","unstructured":"Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer-generator networks. In Proc. of ACL."},{"key":"e_1_3_2_53_2","volume-title":"SPACES: \u201cExtractive-abstractive\u201d Long Text Summaries","author":"Su Jianlin","year":"2020","unstructured":"Jianlin Su. 2020. SPACES: \u201cExtractive-abstractive\u201d Long Text Summaries. Technical Report."},{"key":"e_1_3_2_54_2","unstructured":"Shaoshi Sun Zhenyuan Zhang BoCheng Huang Pengbin Lei Jianlin Su Shengfeng Pan and Jiarun Cao. 2021. Sparse-softmax: A simpler and faster alternative softmax transformation. (2021)."},{"key":"e_1_3_2_55_2","volume-title":"Proc. of NeurIPS","author":"Vaswani Ashish","year":"2017","unstructured":"Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, and Kaiser. 2017. Attention is all you need. In Proc. of NeurIPS."},{"key":"e_1_3_2_56_2","volume-title":"Proc. of ICONIP","author":"Vinyals Oriol","year":"2015","unstructured":"Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. 2015. Pointer networks. In Proc. of ICONIP."},{"key":"e_1_3_2_57_2","volume-title":"Proc. of CVPR","author":"Vinyals Oriol","year":"2015","unstructured":"Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In Proc. of CVPR."},{"key":"e_1_3_2_58_2","volume-title":"Proc. of CVPR","author":"Wang Hao","year":"2018","unstructured":"Hao Wang, Yitong Wang, Zheng Zhou, Xing Ji, Dihong Gong, Jingchao Zhou, Zhifeng Li, and Wei Liu. 2018. CosFace: Large margin cosine loss for deep face recognition. In Proc. of CVPR."},{"key":"e_1_3_2_59_2","volume-title":"Proc. 
of CVPR","author":"Wang Xiaosong","year":"2018","unstructured":"Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, and Ronald M. Summers. 2018. TieNet: Text-image embedding network for common thorax disease classification and reporting in chest x-rays. In Proc. of CVPR."},{"key":"e_1_3_2_60_2","article-title":"NEZHA: Neural contextualized representation for Chinese language understanding","author":"Wei Junqiu","year":"2019","unstructured":"Junqiu Wei, Xiaozhe Ren, Xiaoguang Li, Wenyong Huang, Yi Liao, Yasheng Wang, Jiashu Lin, Xin Jiang, Xiao Chen, and Qun Liu. 2019. NEZHA: Neural contextualized representation for Chinese language understanding. arXiv preprint arXiv:1909.00204 (2019).","journal-title":"arXiv preprint arXiv:1909.00204"},{"key":"e_1_3_2_61_2","volume-title":"Proc. of ICLR","author":"Wu Felix","year":"2018","unstructured":"Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 2018. Pay less attention with lightweight and dynamic convolutions. In Proc. of ICLR."},{"key":"e_1_3_2_62_2","volume-title":"Proc. of AAAI","author":"Xiao Liqiang","year":"2020","unstructured":"Liqiang Xiao, Lu Wang, Hao He, and Yaohui Jin. 2020. Copy or rewrite: Hybrid summarization with hierarchical reinforcement learning. In Proc. of AAAI."},{"key":"e_1_3_2_63_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.bionlp-1.29"},{"key":"e_1_3_2_64_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jbi.2022.104040"},{"key":"e_1_3_2_65_2","volume-title":"Proc. of ICLR","author":"Yu Adams Wei","year":"2018","unstructured":"Adams Wei Yu, David Dohan, Minh-Thang Luong, Rui Zhao, Kai Chen, Mohammad Norouzi, and Quoc V. Le. 2018. QANet: Combining local convolution with global self-attention for reading comprehension. In Proc. of ICLR."},{"key":"e_1_3_2_66_2","doi-asserted-by":"crossref","unstructured":"Biao Zhang Ivan Titov and Rico Sennrich. 2021. Sparse attention with linear units. 
(2021).","DOI":"10.18653\/v1\/2021.emnlp-main.523"},{"key":"e_1_3_2_67_2","unstructured":"Ningyu Zhang Qianghuai Jia Kangping Yin Liang Dong Feng Gao and Nengwei Hua. 2020. Conceptualized representation learning for Chinese biomedical text mining. (2020)."},{"key":"e_1_3_2_68_2","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/W18-5623"},{"key":"e_1_3_2_69_2","volume-title":"Proc. of ACL","author":"Zhang Yuhao","year":"2020","unstructured":"Yuhao Zhang, Derek Merck, Emily Tsai, Christopher D. Manning, and Curtis Langlotz. 2020. Optimizing the factual correctness of a summary: A study of summarizing radiology reports. In Proc. of ACL."},{"key":"e_1_3_2_70_2","volume-title":"Proc. of ACL","author":"Zhao Chao","year":"2020","unstructured":"Chao Zhao, Marilyn Walker, and Snigdha Chaturvedi. 2020. Bridging the structural gap between encoding and decoding for data-to-text generation. In Proc. of ACL."},{"key":"e_1_3_2_71_2","article-title":"Sparsing and smoothing for the seq2seq models","author":"Zhao Shuai","year":"2022","unstructured":"Shuai Zhao, Zhuoqian Liang, Jinming Wen, and Jie Chen. 2022. Sparsing and smoothing for the seq2seq models. IEEE Transactions on Artificial Intelligence (2022).","journal-title":"IEEE Transactions on Artificial Intelligence"},{"key":"e_1_3_2_72_2","article-title":"AP-BERT: Enhanced pre-trained model through average pooling","author":"Zhao Shuai","year":"2022","unstructured":"Shuai Zhao, Tianyu Zhang, Man Hu, Wen Chang, and Fucheng You. 2022. AP-BERT: Enhanced pre-trained model through average pooling. Applied Intelligence (2022).","journal-title":"Applied Intelligence"},{"key":"e_1_3_2_73_2","article-title":"The standards for PET\/CT diagnostic reports: Setting and exploring","author":"Shen Xubai Xuan Zhihui","year":"2019","unstructured":"Xubai Xuan Zhihui Shen, and Wang Ruimin. 2019. The standards for PET\/CT diagnostic reports: Setting and exploring. 
Labeled Immunoassays and Clinical Medicine (2019).","journal-title":"Labeled Immunoassays and Clinical Medicine"}],"container-title":["ACM Transactions on Asian and Low-Resource Language Information Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3596219","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3596219","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:35:57Z","timestamp":1750178157000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3596219"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,16]]},"references-count":72,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2023,6,30]]}},"alternative-id":["10.1145\/3596219"],"URL":"https:\/\/doi.org\/10.1145\/3596219","relation":{},"ISSN":["2375-4699","2375-4702"],"issn-type":[{"value":"2375-4699","type":"print"},{"value":"2375-4702","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,6,16]]},"assertion":[{"value":"2022-09-19","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-04-29","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-06-16","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}