{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,8,2]],"date-time":"2025-08-02T17:47:10Z","timestamp":1754156830120,"version":"3.41.2"},"reference-count":50,"publisher":"Emerald","issue":"3","license":[{"start":{"date-parts":[[2023,11,29]],"date-time":"2023-11-29T00:00:00Z","timestamp":1701216000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.emerald.com\/insight\/site-policies"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["DTA"],"published-print":{"date-parts":[[2024,7,19]]},"abstract":"<jats:sec><jats:title content-type=\"abstract-subheading\">Purpose<\/jats:title><jats:p>The purpose of this study is to investigate and demonstrate the advancements achieved in the field of chest X-ray image captioning through the utilization of dynamic convolutional encoder\u2013decoder networks (DyCNN). Typical convolutional neural networks (CNNs) are unable to capture both local and global contextual information effectively and apply a uniform operation to all pixels in an image. To address this, we propose an innovative approach that integrates a dynamic convolution operation at the encoder stage, improving image encoding quality and disease detection. In addition, a decoder based on the gated recurrent unit (GRU) is used for language modeling, and an attention network is incorporated to enhance consistency. 
This novel combination allows for improved feature extraction, mimicking the expertise of radiologists by selectively focusing on important areas and producing coherent captions with valuable clinical information.<\/jats:p><\/jats:sec><jats:sec><jats:title content-type=\"abstract-subheading\">Design\/methodology\/approach<\/jats:title><jats:p>In this study, we have presented a new report generation approach that utilizes dynamic convolution applied to ResNet-101 (DyCNN) as an encoder (Verelst and Tuytelaars, 2019) and a GRU as a decoder (Dey and Salem, 2017; Pan et al., 2020), along with an attention network (see Figure 1). This integration innovatively extends the capabilities of image encoding and sequential caption generation, representing a shift from conventional CNN architectures. With its ability to dynamically adapt receptive fields, the DyCNN excels at capturing features of varying scales within CXR images. This dynamic adaptability significantly enhances the granularity of feature extraction, enabling precise representation of localized abnormalities and structural intricacies. By incorporating this flexibility into the encoding process, our model can distil meaningful and contextually rich features from the radiographic data. Meanwhile, the attention mechanism enables the model to selectively focus on different regions of the image during caption generation, assigning different importance weights to each region and thereby mimicking human perception. 
In parallel, the GRU-based decoder adds a critical dimension to the process by ensuring a smooth, sequential generation of captions.<\/jats:p><\/jats:sec><jats:sec><jats:title content-type=\"abstract-subheading\">Findings<\/jats:title><jats:p>The findings of this study highlight the significant advancements achieved in chest X-ray image captioning through the utilization of dynamic convolutional encoder\u2013decoder networks (DyCNN). Experiments conducted using the IU Chest X-ray dataset showed that the proposed model outperformed other state-of-the-art approaches. The model achieved notable scores, including a BLEU_1 score of 0.591, a BLEU_2 score of 0.347, a BLEU_3 score of 0.277 and a BLEU_4 score of 0.155. These results highlight the efficiency and efficacy of the model in producing precise radiology reports, enhancing image interpretation and clinical decision-making.<\/jats:p><\/jats:sec><jats:sec><jats:title content-type=\"abstract-subheading\">Originality\/value<\/jats:title><jats:p>This work is the first of its kind to employ DyCNN as an encoder to extract features from CXR images. 
In addition, a GRU was utilized as the decoder for language modeling, and attention mechanisms were incorporated into the model architecture.<\/jats:p><\/jats:sec>","DOI":"10.1108\/dta-07-2023-0307","type":"journal-article","created":{"date-parts":[[2023,11,29]],"date-time":"2023-11-29T05:19:09Z","timestamp":1701235149000},"page":"427-446","source":"Crossref","is-referenced-by-count":1,"title":["Deep understanding of radiology reports: leveraging dynamic convolution in chest X-ray images"],"prefix":"10.1108","volume":"58","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3963-4548","authenticated-orcid":false,"given":"Tarun","family":"Jaiswal","sequence":"first","affiliation":[]},{"given":"Manju","family":"Pandey","sequence":"additional","affiliation":[]},{"given":"Priyanka","family":"Tripathi","sequence":"additional","affiliation":[]}],"member":"140","published-online":{"date-parts":[[2023,11,29]]},"reference":[{"key":"key2024072308212744500_ref001","doi-asserted-by":"publisher","first-page":"100557","DOI":"10.1016\/j.imu.2021.100557","article-title":"Automated radiology report generation using conditioned transformers","volume":"24","year":"2021","journal-title":"Informatics in Medicine Unlocked"},{"key":"key2024072308212744500_ref002","doi-asserted-by":"publisher","first-page":"6077","DOI":"10.1109\/CVPR.2018.00636","article-title":"Bottom-up and top-down attention for image captioning and visual question answering","year":"2018"},{"issue":"3","key":"key2024072308212744500_ref003","doi-asserted-by":"publisher","first-page":"564","DOI":"10.1007\/s10278-021-00567-7","article-title":"A multilevel transfer learning technique and LSTM framework for generating medical captions for limited CT and DBT images","volume":"35","year":"2022","journal-title":"Journal of Digital Imaging"},{"key":"key2024072308212744500_ref004","doi-asserted-by":"publisher","first-page":"102075","DOI":"10.1016\/j.artmed.2021.102075","article-title":"Evaluating diagnostic content of 
AI-generated radiology reports of chest X-rays","volume":"116","year":"2021","journal-title":"Artificial Intelligence in Medicine"},{"key":"key2024072308212744500_ref005","doi-asserted-by":"publisher","first-page":"1439","DOI":"10.18653\/v1\/2020.emnlp-main.112","article-title":"Generating radiology reports via memory-driven transformer","year":"2020"},{"key":"key2024072308212744500_ref006","doi-asserted-by":"publisher","first-page":"1724","DOI":"10.3115\/v1\/D14-1179","article-title":"Learning phrase representations using RNN encoder-decoder for statistical machine translation","year":"2014"},{"issue":"2","key":"key2024072308212744500_ref007","doi-asserted-by":"publisher","first-page":"304","DOI":"10.1093\/jamia\/ocv080","article-title":"Preparing a collection of radiology examinations for distribution and retrieval","volume":"23","year":"2016","journal-title":"Journal of the American Medical Informatics Association"},{"first-page":"79","article-title":"BLEU in characters: towards automatic MT evaluation in languages without word delimiters","year":"2005","key":"key2024072308212744500_ref008"},{"key":"key2024072308212744500_ref009","doi-asserted-by":"publisher","first-page":"1597","DOI":"10.1109\/MWSCAS.2017.8053243","article-title":"Gate-variants of gated recurrent unit (GRU) neural networks","year":"2017"},{"year":"2020","key":"key2024072308212744500_ref010","article-title":"Addressing data bias problems for chest x-ray image report generation"},{"key":"key2024072308212744500_ref011","doi-asserted-by":"publisher","first-page":"293","DOI":"10.1007\/978-3-030-87234-2_28","article-title":"RATCHET: medical transformer for chest X-ray diagnosis and reporting","year":"2021"},{"key":"key2024072308212744500_ref012","doi-asserted-by":"publisher","first-page":"21236","DOI":"10.1109\/ACCESS.2021.3056175","article-title":"Automatic report generation for chest X-ray images via adversarial reinforcement learning","volume":"9","year":"2021","journal-title":"IEEE 
Access"},{"key":"key2024072308212744500_ref013","doi-asserted-by":"publisher","first-page":"154808","DOI":"10.1109\/ACCESS.2019.2947134","article-title":"Multi-attention and incorporating background information model for chest X-ray image report generation","volume":"7","year":"2019","journal-title":"IEEE Access"},{"first-page":"1","article-title":"Categorical reparameterization with Gumbel-Softmax","year":"2016","key":"key2024072308212744500_ref014"},{"key":"key2024072308212744500_ref015","doi-asserted-by":"publisher","first-page":"110","DOI":"10.1007\/978-3-030-87589-3_12","article-title":"Improving joint learning of chest X-Ray and radiology report by word region alignment","year":"2021"},{"key":"key2024072308212744500_ref016","doi-asserted-by":"publisher","first-page":"6570","DOI":"10.18653\/v1\/P19-1657","article-title":"Show, describe and conclude: on exploiting the structure information of chest X-ray reports","year":"2019"},{"key":"key2024072308212744500_ref017","doi-asserted-by":"publisher","first-page":"2577","DOI":"10.18653\/v1\/P18-1240","article-title":"On the automatic generation of medical imaging reports","year":"2018"},{"key":"key2024072308212744500_ref018","doi-asserted-by":"publisher","first-page":"124631","DOI":"10.1016\/j.jhydrol.2020.124631","article-title":"Exploring a long short-term memory based encoder-decoder framework for multi-step-ahead flood forecasting","volume":"583","year":"2020","journal-title":"Journal of Hydrology"},{"issue":"6","key":"key2024072308212744500_ref019","doi-asserted-by":"publisher","first-page":"7485","DOI":"10.1007\/s12652-022-04454-z","article-title":"CheXPrune: sparse chest X-ray report generation model using multi-attention and one-shot global pruning","volume":"14","year":"2022","journal-title":"Journal of Ambient Intelligence and Humanized Computing"},{"key":"key2024072308212744500_ref020","doi-asserted-by":"crossref","first-page":"1429","DOI":"10.3390\/s22041429","article-title":"Cross encoder-decoder 
transformer with global-local visual extractor for medical image captioning","volume":"22","year":"2022","journal-title":"Sensors (Basel, Switzerland)"},{"issue":"1","key":"key2024072308212744500_ref021","doi-asserted-by":"publisher","first-page":"6666","DOI":"10.1609\/aaai.v33i01.33016666","article-title":"Knowledge-driven encode, retrieve, paraphrase for medical image report generation","volume":"33","year":"2019","journal-title":"Proceedings of the AAAI Conference on Artificial Intelligence"},{"key":"key2024072308212744500_ref022","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/IJCNN.2015.7280652","article-title":"Generating image description by modeling spatial context of an image","year":"2015"},{"issue":"1","key":"key2024072308212744500_ref023","doi-asserted-by":"publisher","first-page":"253","DOI":"10.1007\/s11280-022-01013-6","article-title":"Auxiliary signal-guided knowledge encoder-decoder for medical report generation","volume":"26","year":"2023","journal-title":"World Wide Web"},{"key":"key2024072308212744500_ref024","first-page":"1530","article-title":"Hybrid retrieval-generation reinforced agent for medical image report generation","volume":"31","year":"2018","journal-title":"Advances in Neural Information Processing Systems"},{"key":"key2024072308212744500_ref026","doi-asserted-by":"publisher","first-page":"13748","DOI":"10.1109\/CVPR46437.2021.01354","article-title":"Exploring and distilling posterior and prior knowledge for radiology report generation","year":"2021"},{"key":"key2024072308212744500_ref027","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.findings-acl.23","article-title":"Contrastive attention for automatic chest X-ray report generation","year":"2021","journal-title":"Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021"},{"key":"key2024072308212744500_ref025","doi-asserted-by":"publisher","first-page":"3001","DOI":"10.18653\/v1\/2021.acl-long.234","article-title":"Competence-based 
multimodal curriculum learning for medical report generation","year":"2021"},{"key":"key2024072308212744500_ref028","first-page":"249","article-title":"Clinically accurate chest X-ray report generation","volume":"106","year":"2019","journal-title":"Proceedings of Machine Learning Research"},{"key":"key2024072308212744500_ref029","doi-asserted-by":"publisher","first-page":"3242","DOI":"10.1109\/CVPR.2017.345","article-title":"Knowing when to look: adaptive attention via a visual sentinel for image captioning","year":"2017"},{"issue":"4","key":"key2024072308212744500_ref030","doi-asserted-by":"publisher","first-page":"318","DOI":"10.1016\/j.carj.2016.03.001","article-title":"The art of the radiology report: practical and stylistic guidelines for perfecting the conveyance of imaging findings","volume":"67","year":"2016","journal-title":"Canadian Association of Radiologists Journal"},{"key":"key2024072308212744500_ref031","doi-asserted-by":"publisher","first-page":"102603","DOI":"10.1016\/j.media.2022.102603","article-title":"Uncertainty-aware report generation for chest X-rays by variational topic inference","volume":"82","year":"2022","journal-title":"Medical Image Analysis"},{"key":"key2024072308212744500_ref032","doi-asserted-by":"publisher","first-page":"2824","DOI":"10.18653\/v1\/2021.findings-emnlp.241","article-title":"Progressive transformer-based generation of radiology reports","year":"2021"},{"key":"key2024072308212744500_ref033","doi-asserted-by":"publisher","first-page":"60090","DOI":"10.1109\/ACCESS.2020.2982433","article-title":"Water level prediction model based on GRU and CNN","volume":"8","year":"2020","journal-title":"IEEE Access"},{"key":"key2024072308212744500_ref034","doi-asserted-by":"publisher","first-page":"311","DOI":"10.3115\/1073083.1073135","article-title":"BLEU: a method for automatic evaluation of machine 
translation","year":"2002"},{"key":"key2024072308212744500_ref035","doi-asserted-by":"publisher","first-page":"654","DOI":"10.1007\/978-3-030-87589-3_67","article-title":"Clinically correct report generation from chest X-rays using templates","year":"2021"},{"key":"key2024072308212744500_ref036","doi-asserted-by":"publisher","first-page":"2497","DOI":"10.1109\/CVPR.2016.274","article-title":"Learning to read chest X-rays: recurrent neural cascade model for automated image annotation","year":"2016"},{"issue":"13","key":"key2024072308212744500_ref037","doi-asserted-by":"publisher","first-page":"7441","DOI":"10.1007\/s00521-021-05943-6","article-title":"Show, tell and summarise: learning to generate and summarise radiology findings from medical images","volume":"33","year":"2021","journal-title":"Neural Computing and Applications"},{"key":"key2024072308212744500_ref038","doi-asserted-by":"publisher","first-page":"600","DOI":"10.1007\/978-3-030-69541-5_36","article-title":"Hierarchical X-ray report generation via pathology tags and multi head attention","year":"2021"},{"key":"key2024072308212744500_ref039","doi-asserted-by":"publisher","first-page":"561","DOI":"10.1007\/978-3-030-59713-9_54","article-title":"Chest X-ray report generation through fine-grained label learning","year":"2020"},{"key":"key2024072308212744500_ref040","doi-asserted-by":"publisher","first-page":"2317","DOI":"10.1109\/CVPR42600.2020.00239","article-title":"Dynamic convolutions: exploiting spatial sparsity for faster inference","year":"2019"},{"key":"key2024072308212744500_ref041","doi-asserted-by":"publisher","first-page":"3156","DOI":"10.1109\/CVPR.2015.7298935","article-title":"Show and tell: a neural image caption generator","year":"2015"},{"key":"key2024072308212744500_ref042","doi-asserted-by":"publisher","first-page":"9049","DOI":"10.1109\/CVPR.2018.00943","article-title":"TieNet: text-image embedding network for common thorax disease classification and reporting in chest 
X-rays","year":"2018"},{"key":"key2024072308212744500_ref043","doi-asserted-by":"publisher","first-page":"2433","DOI":"10.1109\/CVPR46437.2021.00246","article-title":"A self-boosting framework for automated radiographic report generation","year":"2021"},{"first-page":"2048","article-title":"Show, attend and tell: neural image caption generation with visual attention","year":"2015","key":"key2024072308212744500_ref044"},{"key":"key2024072308212744500_ref045","doi-asserted-by":"publisher","first-page":"457","DOI":"10.1007\/978-3-030-00928-1_52","article-title":"Multimodal recurrent model with attention for automated radiology report generation","year":"2018"},{"key":"key2024072308212744500_ref046","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.findings-emnlp.336","article-title":"Weakly supervised contrastive learning for chest X-ray report generation","volume":"abs\/2109.1","year":"2021"},{"key":"key2024072308212744500_ref047","doi-asserted-by":"publisher","first-page":"102510","DOI":"10.1016\/j.media.2022.102510","article-title":"Knowledge matters: chest radiology report generation with general and specific knowledge","volume":"80","year":"2022","journal-title":"Medical Image Analysis"},{"key":"key2024072308212744500_ref048","doi-asserted-by":"publisher","first-page":"72","DOI":"10.1007\/978-3-030-87199-4_7","volume":"12903","year":"2022"},{"key":"key2024072308212744500_ref049","doi-asserted-by":"publisher","first-page":"4651","DOI":"10.1109\/CVPR.2016.503","article-title":"Image captioning with semantic attention","year":"2016"},{"key":"key2024072308212744500_ref050","doi-asserted-by":"publisher","first-page":"721","DOI":"10.1007\/978-3-030-32226-7_80","article-title":"Automatic radiology report generation based on multi-view image fusion and medical concept enrichment","volume":"11769","year":"2019","journal-title":"Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in 
Bioinformatics)"}],"container-title":["Data Technologies and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/DTA-07-2023-0307\/full\/xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/DTA-07-2023-0307\/full\/html","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,24]],"date-time":"2025-07-24T23:15:15Z","timestamp":1753398915000},"score":1,"resource":{"primary":{"URL":"http:\/\/www.emerald.com\/dta\/article\/58\/3\/427-446\/1226325"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,11,29]]},"references-count":50,"journal-issue":{"issue":"3","published-online":{"date-parts":[[2023,11,29]]},"published-print":{"date-parts":[[2024,7,19]]}},"alternative-id":["10.1108\/DTA-07-2023-0307"],"URL":"https:\/\/doi.org\/10.1108\/dta-07-2023-0307","relation":{},"ISSN":["2514-9288","2514-9288"],"issn-type":[{"type":"print","value":"2514-9288"},{"type":"electronic","value":"2514-9288"}],"subject":[],"published":{"date-parts":[[2023,11,29]]}}}