{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,7]],"date-time":"2026-02-07T10:10:56Z","timestamp":1770459056818,"version":"3.49.0"},"reference-count":26,"publisher":"SAGE Publications","issue":"5","license":[{"start":{"date-parts":[[2021,3,27]],"date-time":"2021-03-27T00:00:00Z","timestamp":1616803200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/journals.sagepub.com\/page\/policies\/text-and-data-mining-license"}],"content-domain":{"domain":["journals.sagepub.com"],"crossmark-restriction":true},"short-container-title":["Journal of Intelligent &amp; Fuzzy Systems"],"published-print":{"date-parts":[[2021,11,17]]},"abstract":"<jats:p>\u00a0The multi-sensor, multi-modal, composite design of medical images merged into a single image, contributes to identifying features that are relevant to medical diagnoses and treatments. Although, current image fusion technologies, including conventional and deep learning algorithms, can produce superior fused images, however, they will require huge volumes of images of various modalities. This solution may not be viable for some situations, where time efficiency is expected or the equipment is inadequate. This paper addressed a modified end-to-end Generative Adversarial Network(GAN), termed Loss Minimized Fusion Generative Adversarial Network (LMF-GAN), a triple ConvNet deep learning architecture for the fusion of medical images with a limited sampling rate. The encoding network is combined with a convolutional neural network layer and a dense block called GAN, in contrast to conventional convolutional networks. The loss is minimized by training GAN\u2019s discriminator with all the source images by learning more parameters to generate more features in the fused image. The LMF-GAN can produce fused images with clear textures through adversarial training of the generator and discriminator. 
The proposed fusion method achieves state-of-the-art quality in objective and subjective evaluation, in comparison with current fusion methods. The model was evaluated on standard data sets.<\/jats:p>","DOI":"10.3233\/jifs-189860","type":"journal-article","created":{"date-parts":[[2021,3,30]],"date-time":"2021-03-30T14:37:07Z","timestamp":1617115027000},"page":"5375-5386","update-policy":"https:\/\/doi.org\/10.1177\/sage-journals-update-policy","source":"Crossref","is-referenced-by-count":54,"title":["Multi-modal medical image fusion using LMF-GAN - A maximum parameter infusion technique"],"prefix":"10.1177","volume":"41","author":[{"given":"Rekha R.","family":"Nair","sequence":"first","affiliation":[{"name":"Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India"}]},{"given":"Tripty","family":"Singh","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India"}]},{"given":"Rashmi","family":"Sankar","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India"}]},{"given":"Klement","family":"Gunndu","sequence":"additional","affiliation":[{"name":"Department of Computer Science and Engineering, Amrita School of Engineering, Amrita Vishwa Vidyapeetham, Bengaluru, India"}]}],"member":"179","published-online":{"date-parts":[[2021,3,27]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"crossref","unstructured":"ChavanS.S. and TalbarS.N. Multimodality image fusion in frequency domain for radiation therapy In 2014 International Conference on Medical Imaging m-Health and Emerging Communication Systems (MedCom) pages 174\u2013178. IEEE (2014).","DOI":"10.1109\/MedCom.2014.7005998"},{"key":"e_1_3_1_3_2","doi-asserted-by":"crossref","unstructured":"DokeA.R. SinghT. 
ShantanuK. and NayarR. Comparative analysis of wavelet transform methods for fusion of ct and pet images In 2017 IEEE International Conference on Power Control Signals and Instrumentation Engineering (ICPCSI) pages 2152\u20132156. IEEE (2017).","DOI":"10.1109\/ICPCSI.2017.8392098"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1613\/jair.5756"},{"key":"e_1_3_1_5_2","first-page":"2672","article-title":"Generative adversarial nets","volume":"27","author":"Goodfellow I.","year":"2014","unstructured":"GoodfellowI., Pouget-AbadieJ., MirzaM., XuB., Warde-FarleyD., OzairS., CourvilleA. and BengioY., Generative adversarial nets, Advances in Neural Information Processing Systems 27 (2014), 2672\u20132680.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_3_1_6_2","unstructured":"IandolaF.N. HanS. MoskewiczM.W. AshrafK. DallyW.J. and KeutzerK. Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 mb model size arXiv preprint arXiv:1602.07360 (2016)."},{"key":"e_1_3_1_7_2","unstructured":"IglovikovV. and ShvetsA. Ternausnet: U-net with vgg11 encoder pre-trained on imagenet for image segmentation arXiv preprint arXiv:1801.05746 (2018)."},{"key":"e_1_3_1_8_2","unstructured":"JamesA.P. and DasarathyB. A review of feature and data fusion with medical images arXiv preprint arXiv:1506.00097 (2015)."},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2013.12.002"},{"key":"e_1_3_1_10_2","unstructured":"KrizhevskyA. SutskeverI. and HintonG.E. 
Imagenet classification with deep convolutional neural networks In Advances in Neural Information Processing Systems (2012) 1097\u20131105."},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1006\/gmip.1995.1022"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1016\/S0167-8655(02)00029-6"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2016.05.004"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.sigpro.2013.10.010"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/LSP.2016.2618776"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2016.12.001"},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2017.10.007"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.inffus.2018.09.004"},{"key":"e_1_3_1_19_2","doi-asserted-by":"crossref","unstructured":"ManeeshaP. SinghT. NayarR. and KumarS. Multi modal medical image fusion using convolution neural network In 2019 Third International Conference on Inventive Systems and Control (ICISC) pages 351\u2013357. IEEE (2019).","DOI":"10.1109\/ICISC44355.2019.9036373"},{"key":"e_1_3_1_20_2","doi-asserted-by":"crossref","unstructured":"NairR.R. and SinghT. Multi-sensor multi-modal medical image fusion for color images: A multi-resolution approach In 2018 Tenth International Conference on Advanced Computing (ICoAC) pages 249\u2013254. IEEE (2018).","DOI":"10.1109\/ICoAC44903.2018.8939112"},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1049\/iet-ipr.2018.6556"},{"issue":"5","key":"e_1_3_1_22_2","first-page":"5353","article-title":"Multi-modal based msmif using hybrid fusion with 1-d wavelet transform","volume":"29","author":"Nair R.R.","year":"2020","unstructured":"NairR.R. 
and SinghT., Multi-modal based msmif using hybrid fusion with 1-d wavelet transform, International Journal of Advanced Science and Technology 29(5) (2020), 5353\u20135368.","journal-title":"International Journal of Advanced Science and Technology"},{"key":"e_1_3_1_23_2","doi-asserted-by":"crossref","unstructured":"NairR.R. NayarR. SinghT. and KumarS. Modified level cut liver segmentation from ct images In 2017 Ninth International Conference on Advanced Computing (ICoAC) pages 186\u2013191. IEEE (2017).","DOI":"10.1109\/ICoAC.2017.8441362"},{"key":"e_1_3_1_24_2","doi-asserted-by":"crossref","unstructured":"NieD. ZhangH. AdeliE. LiuL. and ShenD. 3d deep learning for multi-modal imaging-guided survival time prediction of brain tumor patients In International conference on medical image computing and computer-assisted intervention pages 212\u2013220. Springer (2016).","DOI":"10.1007\/978-3-319-46723-8_25"},{"key":"e_1_3_1_25_2","unstructured":"QassimH. FeinzimerD. and VermaA. Residual squeeze vgg16. arXiv preprint arXiv:1705.03004 (2017)."},{"key":"e_1_3_1_26_2","unstructured":"SimonyanK. and ZissermanA. Very deep convolutional networks for large-scale image recognition arXiv preprint arXiv:1409.1556 (2014)."},{"key":"e_1_3_1_27_2","doi-asserted-by":"crossref","unstructured":"XuH. LiangP. YuW. JiangJ. and MaJ. 
Learning a generative model for fusing infrared and visible images via conditional generative adversarial network with dual discriminators In IJCAI pages 3954\u20133960 (2019).","DOI":"10.24963\/ijcai.2019\/549"}],"container-title":["Journal of Intelligent &amp; Fuzzy Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.3233\/JIFS-189860","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/full-xml\/10.3233\/JIFS-189860","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.3233\/JIFS-189860","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,2,2]],"date-time":"2026-02-02T04:16:17Z","timestamp":1770005777000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/10.3233\/JIFS-189860"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,3,27]]},"references-count":26,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2021,11,17]]}},"alternative-id":["10.3233\/JIFS-189860"],"URL":"https:\/\/doi.org\/10.3233\/jifs-189860","relation":{},"ISSN":["1064-1246","1875-8967"],"issn-type":[{"value":"1064-1246","type":"print"},{"value":"1875-8967","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,3,27]]}}}