{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,19]],"date-time":"2026-03-19T16:34:48Z","timestamp":1773938088457,"version":"3.50.1"},"reference-count":29,"publisher":"Fuji Technology Press Ltd.","issue":"2","funder":[{"DOI":"10.13039\/501100001691","name":"Japan Society for the Promotion of Science","doi-asserted-by":"publisher","award":["JP23K28367"],"award-info":[{"award-number":["JP23K28367"]}],"id":[{"id":"10.13039\/501100001691","id-type":"DOI","asserted-by":"publisher"}]},{"name":"White Rock Foundation"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["JACIII","J. Adv. Comput. Intell. Intell. Inform."],"published-print":{"date-parts":[[2026,3,20]]},"abstract":"<jats:p>\n                    Multimodal medical imaging is pivotal for early disease screening; however, its deployment is often constrained by limited resources. While deep learning-based synthesis cannot replace clinical imaging, high-fidelity cross-modal translation can provide actionable prior information for preliminary assessment. To this end, we introduce a chained extension framework that scales model capacity and precision by linking multiple encoder\u2013decoder modules. Starting from a minimal encoder\u2013decoder backbone, we construct a triple-stage generative adversarial network and integrate a brightness-sensitive loss that reweights luminance-dependent errors. This staged design decomposes positron emission tomography to computed tomography translation into complementary subtasks targeting structural consistency, texture enhancement, and key-region refinement. 
\tComprehensive experiments indicate that the proposed approach generates synthetic CT images that closely match reference CT scans, visually and quantitatively, achieving a structural similarity index of 0.85, peak signal-to-noise ratio of 23.31 dB, and mean absolute error of 6.93\u00d710\n                    <jats:sup>-2<\/jats:sup>\n                    . Thus, our framework is feasible as an assistive tool for early screening workflows in resource-limited settings. Moreover, the staged training strategy, coupled with brightness-aware weighting, mitigates common optimization plateaus in cross-modal synthesis, suggesting a principled path toward further gains in fidelity and robustness.\n                  <\/jats:p>","DOI":"10.20965\/jaciii.2026.p0457","type":"journal-article","created":{"date-parts":[[2026,3,19]],"date-time":"2026-03-19T15:02:06Z","timestamp":1773932526000},"page":"457-471","source":"Crossref","is-referenced-by-count":0,"title":["Brightness-Sensitive Generative Adversarial Network Using a Chained Extension Framework for PET-to-CT Medical Image Synthesis"],"prefix":"10.20965","volume":"30","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9191-6505","authenticated-orcid":true,"given":"Xiaoyu","family":"Deng","sequence":"first","affiliation":[{"name":"Graduate School of Engineering, University of Fukui, 3-9-1 Bunkyo, Fukui, Fukui 910-0017, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9744-8033","authenticated-orcid":true,"given":"Kouki","family":"Nagamune","sequence":"additional","affiliation":[{"name":"Department of Electronics and Computer Science, Graduate School of Engineering, University of Hyogo, 2167 Shosha, Himeji, Hyogo 671-2280, Japan"}]},{"given":"Hiroki","family":"Takada","sequence":"additional","affiliation":[{"name":"Graduate School of Engineering, University of Fukui, 3-9-1 Bunkyo, Fukui, Fukui 910-0017, 
Japan"}]}],"member":"8550","published-online":{"date-parts":[[2026,3,20]]},"reference":[{"key":"key-10.20965\/jaciii.2026.p0457-1","doi-asserted-by":"crossref","unstructured":"H. Sch\u00f6der, Y. E. Erdi, S. M. Larson, and H. W. D. Yeung, \u201cPET\/CT: A new imaging technology in nuclear medicine,\u201d European J. of Nuclear Medicine and Molecular Imaging, Vol.30, No.10, pp. 1419-1437, 2003. https:\/\/doi.org\/10.1007\/s00259-003-1299-6","DOI":"10.1007\/s00259-003-1299-6"},{"key":"key-10.20965\/jaciii.2026.p0457-2","unstructured":"M. W. Saif, I. Tzannou, N. Makrilia, and K. Syrigos, \u201cRole and Cost Effectiveness of PET\/CT in Management of Patients with Cancer,\u201d The Yale J. of Biology and Medicine, Vol.83, No.2, pp. 53-65, 2010."},{"key":"key-10.20965\/jaciii.2026.p0457-3","doi-asserted-by":"crossref","unstructured":"O. Ronneberger, P. Fischer, and T. Brox, \u201cU-Net: Convolutional Networks for Biomedical Image Segmentation,\u201d N. Navab, J. Hornegger, W. M. Wells, and A. F. Frangi (Eds.), Medical Image Computing and Computer-Assisted Intervention \u2013 MICCAI 2015, Lecture Notes in Computer Science, Vol.9351, pp. 234-241, 2015. https:\/\/doi.org\/10.1007\/978-3-319-24574-4_28","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"key-10.20965\/jaciii.2026.p0457-4","doi-asserted-by":"crossref","unstructured":"P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, \u201cImage-to-Image Translation with Conditional Adversarial Networks,\u201d 2017 IEEE Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 5967-5976, 2017. https:\/\/doi.org\/10.1109\/CVPR.2017.632","DOI":"10.1109\/CVPR.2017.632"},{"key":"key-10.20965\/jaciii.2026.p0457-5","doi-asserted-by":"crossref","unstructured":"T. Wang, Y. Lei, Y. Fu, J. F. Wynne, W. J. Curran, T. Liu, and X. Yang, \u201cA review on medical imaging synthesis using deep learning and its clinical applications,\u201d J. of Applied Clinical Medical Physics, Vol.22, Issue 1, pp. 11-36, 2021. 
https:\/\/doi.org\/10.1002\/acm2.13121","DOI":"10.1002\/acm2.13121"},{"key":"key-10.20965\/jaciii.2026.p0457-6","doi-asserted-by":"crossref","unstructured":"S. Kaji and S. Kida, \u201cOverview of image-to-image translation by use of deep neural networks: Denoising, super-resolution, modality conversion, and reconstruction in medical imaging,\u201d Radiological Physics and Technology, Vol.12, No.3, pp. 235-248, 2019. https:\/\/doi.org\/10.1007\/s12194-019-00520-y","DOI":"10.1007\/s12194-019-00520-y"},{"key":"key-10.20965\/jaciii.2026.p0457-7","doi-asserted-by":"crossref","unstructured":"M. E. Rayed, S. M. S. Islam, S. I. Niha, J. R. Jim, M. M. Kabir, and M. Mridha, \u201cDeep learning for medical image segmentation: State-of-the-art advancements and challenges,\u201d Informatics in Medicine Unlocked, Vol.47, Article No.101504, 2024. https:\/\/doi.org\/10.1016\/j.imu.2024.101504","DOI":"10.1016\/j.imu.2024.101504"},{"key":"key-10.20965\/jaciii.2026.p0457-8","doi-asserted-by":"crossref","unstructured":"K. Suzuki, \u201cOverview of deep learning in medical imaging,\u201d Radiological Physics and Technology, Vol.10, No.3, pp. 257-273, 2017. https:\/\/doi.org\/10.1007\/s12194-017-0406-5","DOI":"10.1007\/s12194-017-0406-5"},{"key":"key-10.20965\/jaciii.2026.p0457-9","doi-asserted-by":"crossref","unstructured":"V. Sevetlidis, M. V. Giuffrida, and S. A. Tsaftaris, \u201cWhole Image Synthesis Using a Deep Encoder\u2013Decoder Network,\u201d S. A. Tsaftaris, A. Gooya, A. F. Frangi, and J. L. Prince (Eds.), \u201cSimulation and Synthesis in Medical Imaging,\u201d Lecture Notes in Computer Science, Vol.9968, pp. 127-137, 2016. https:\/\/doi.org\/10.1007\/978-3-319-46630-9_13","DOI":"10.1007\/978-3-319-46630-9_13"},{"key":"key-10.20965\/jaciii.2026.p0457-10","doi-asserted-by":"crossref","unstructured":"F. Hashimoto, M. Ito, K. Ote, T. Isobe, H. Okada, and Y. 
Ouchi, \u201cDeep learning-based attenuation correction for brain PET with various radiotracers,\u201d Annals of Nuclear Medicine, Vol.35, No.6, pp. 691-701, 2021. https:\/\/doi.org\/10.1007\/s12149-021-01611-w","DOI":"10.1007\/s12149-021-01611-w"},{"key":"key-10.20965\/jaciii.2026.p0457-11","doi-asserted-by":"crossref","unstructured":"J. Zhang, Z. Cui, C. Jiang, J. Zhang, F. Gao, and D. Shen, \u201cMapping in Cycles: Dual-Domain PET-CT Synthesis Framework with Cycle-Consistent Constraints,\u201d L. Wang, Q. Dou, P. T. Fletcher, S. Speidel, and S. Li (Eds.), \u201cMedical Image Computing and Computer Assisted Intervention (MICCAI 2022),\u201d Lecture Notes in Computer Science, Vol.13436, pp. 758-767, 2022. https:\/\/doi.org\/10.1007\/978-3-031-16446-0_72","DOI":"10.1007\/978-3-031-16446-0_72"},{"key":"key-10.20965\/jaciii.2026.p0457-12","doi-asserted-by":"crossref","unstructured":"Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, \u201cGradient-based learning applied to document recognition,\u201d Proc. of the IEEE, Vol.86, Issue 11, pp. 2278-2324, 1998. https:\/\/doi.org\/10.1109\/5.726791","DOI":"10.1109\/5.726791"},{"key":"key-10.20965\/jaciii.2026.p0457-13","doi-asserted-by":"crossref","unstructured":"J. Pons, S. Pascual, G. Cengarle, and J. Serr\u00e0, \u201cUpsampling Artifacts in Neural Audio Synthesis,\u201d 2021 IEEE Int. Conf. on Acoustics, Speech and Signal Processing (ICASSP 2021), pp. 3005-3009, 2021. https:\/\/doi.org\/10.1109\/ICASSP39728.2021.9414913","DOI":"10.1109\/ICASSP39728.2021.9414913"},{"key":"key-10.20965\/jaciii.2026.p0457-14","unstructured":"I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, and Y. Bengio, \u201cGenerative Adversarial Nets,\u201d Advances in Neural Information Processing Systems, Vol.27, 2014."},{"key":"key-10.20965\/jaciii.2026.p0457-15","doi-asserted-by":"crossref","unstructured":"C. Guo, M. Szemenyei, Y. Yi, W. Wang, B. Chen, and C. 
Fan, \u201cSA-UNet: Spatial Attention U-Net for Retinal Vessel Segmentation,\u201d 2020 25th Int. Conf. on Pattern Recognition (ICPR), pp. 1236-1242, 2021. https:\/\/doi.org\/10.1109\/ICPR48806.2021.9413346","DOI":"10.1109\/ICPR48806.2021.9413346"},{"key":"key-10.20965\/jaciii.2026.p0457-16","doi-asserted-by":"crossref","unstructured":"O. Petit, N. Thome, C. Rambour, L. Themyr, T. Collins, and L. Soler, \u201cU-Net Transformer: Self and Cross Attention for Medical Image Segmentation,\u201d C. Lian, X. Cao, I. Rekik, X. Xu, and P. Yan (Eds.), \u201cMachine Learning in Medical Imaging,\u201d Lecture Notes in Computer Science, Vol.12966, pp. 267-276, 2021. https:\/\/doi.org\/10.1007\/978-3-030-87589-3_28","DOI":"10.1007\/978-3-030-87589-3_28"},{"key":"key-10.20965\/jaciii.2026.p0457-17","doi-asserted-by":"crossref","unstructured":"A. Singh, J. Kwiecinski, S. Cadet, A. Killekar, E. Tzolos, M. C. Williams, M. R. Dweck, D. E. Newby, D. Dey, and P. J. Slomka, \u201cAutomated nonlinear registration of coronary PET to CT angiography using pseudo-CT generated from PET with generative adversarial networks,\u201d J. of Nuclear Cardiology, Vol.30, Issue 2, pp. 604-615, 2023. https:\/\/doi.org\/10.1007\/s12350-022-03010-8","DOI":"10.1007\/s12350-022-03010-8"},{"key":"key-10.20965\/jaciii.2026.p0457-18","doi-asserted-by":"crossref","unstructured":"X. Dong, T. Wang, Y. Lei, K. Higgins, T. Liu, W. J. Curran, H. Mao, J. A. Nye, and X. Yang, \u201cSynthetic CT generation from non-attenuation corrected PET images for whole-body PET imaging,\u201d Physics in Medicine & Biology, Vol.64, No.21, Article No.215016, 2019. https:\/\/doi.org\/10.1088\/1361-6560\/ab4eb7","DOI":"10.1088\/1361-6560\/ab4eb7"},{"key":"key-10.20965\/jaciii.2026.p0457-19","doi-asserted-by":"crossref","unstructured":"J. Li, Y. Wang, Y. Yang, X. Zhang, Z. Qu, and S. Hu, \u201cSmall animal PET to CT image synthesis based on conditional generation network,\u201d 2021 14th Int. 
Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), 2021. https:\/\/doi.org\/10.1109\/CISP-BMEI53629.2021.9624232","DOI":"10.1109\/CISP-BMEI53629.2021.9624232"},{"key":"key-10.20965\/jaciii.2026.p0457-20","doi-asserted-by":"crossref","unstructured":"H. Wang, X. Wang, F. Liu, G. Zhang, G. Zhang, Q. Zhang, and M. L. Lang, \u201cDSG-GAN: A dual-stage-generator-based GAN for cross-modality synthesis from PET to CT,\u201d Computers in Biology and Medicine, Vol.172, Article No.108296, 2024. https:\/\/doi.org\/10.1016\/j.compbiomed.2024.108296","DOI":"10.1016\/j.compbiomed.2024.108296"},{"key":"key-10.20965\/jaciii.2026.p0457-21","doi-asserted-by":"crossref","unstructured":"J. Li, Z. Qu, Y. Yang, F. Zhang, M. Li, and S. Hu, \u201cTCGAN: A transformer-enhanced GAN for PET synthetic CT,\u201d Biomedical Optics Express, Vol.13, Issue 11, pp. 6003-6018, 2022. https:\/\/doi.org\/10.1364\/BOE.467683","DOI":"10.1364\/BOE.467683"},{"key":"key-10.20965\/jaciii.2026.p0457-22","doi-asserted-by":"crossref","unstructured":"X. Chen, S. Luo, C.-M. Pun, and S. Wang, \u201cMedPrompt: Cross-modal Prompting for Multi-task Medical Image Translation,\u201d Z. Lin, M.-M. Cheng, R. He, K. Ubul, W. Silamu, H. Zha, J. Zhou, and C.-L. Liu (Eds.), \u201cPattern Recognition and Computer Vision,\u201d Lecture Notes in Computer Science, Vol.15044, pp. 61-75, 2025. https:\/\/doi.org\/10.1007\/978-981-97-8496-7_5","DOI":"10.1007\/978-981-97-8496-7_5"},{"key":"key-10.20965\/jaciii.2026.p0457-23","doi-asserted-by":"crossref","unstructured":"J. Liang, H. Zeng, and L. Zhang, \u201cHigh-Resolution Photorealistic Image Translation in Real-Time: A Laplacian Pyramid Translation Network,\u201d Proc. of 2021 IEEE\/CVF Conf. on Computer Vision and Pattern Recognition (CVPR), pp. 9387-9395, 2021. https:\/\/doi.org\/10.1109\/CVPR46437.2021.00927","DOI":"10.1109\/CVPR46437.2021.00927"},{"key":"key-10.20965\/jaciii.2026.p0457-24","doi-asserted-by":"crossref","unstructured":"O. 
Dalmaz, M. Yurt, and T. Cukur, \u201cResViT: Residual Vision Transformers for Multimodal Medical Image Synthesis,\u201d IEEE Trans. on Medical Imaging, Vol.41, Issue 10, pp. 2598-2614, 2022. https:\/\/doi.org\/10.1109\/TMI.2022.3167808","DOI":"10.1109\/TMI.2022.3167808"},{"key":"key-10.20965\/jaciii.2026.p0457-25","doi-asserted-by":"crossref","unstructured":"F. Gao, T. Wu, X. Chu, H. Yoon, Y. Xu, and B. Patel, \u201cDeep Residual Inception Encoder\u2013Decoder Network for Medical Imaging Synthesis,\u201d IEEE J. of Biomedical and Health Informatics, Vol.24, Issue 1, pp. 39-49, 2020. https:\/\/doi.org\/10.1109\/JBHI.2019.2912659","DOI":"10.1109\/JBHI.2019.2912659"},{"key":"key-10.20965\/jaciii.2026.p0457-26","unstructured":"M.-Y. Liu, T. Breuel, and J. Kautz, \u201cUnsupervised Image-to-Image Translation Networks,\u201d Proc. of the 31st Int. Conf. on Neural Information Processing Systems (NIPS\u201917), pp. 700-708, 2017."},{"key":"key-10.20965\/jaciii.2026.p0457-27","doi-asserted-by":"crossref","unstructured":"X. Huang, M.-Y. Liu, S. Belongie, and J. Kautz, \u201cMultimodal Unsupervised Image-to-Image Translation,\u201d V. Ferrari, M. Hebert, C. Sminchisescu, and Y. Weiss (Eds.), \u201cProc. of the European Conf. on Computer Vision (ECCV2018),\u201d Lecture Notes in Computer Science, Vol.11207, pp. 179-196, 2018. https:\/\/doi.org\/10.1007\/978-3-030-01219-9_11","DOI":"10.1007\/978-3-030-01219-9_11"},{"key":"key-10.20965\/jaciii.2026.p0457-28","doi-asserted-by":"crossref","unstructured":"M.-Y. Liu, X. Huang, A. Mallya, T. Karras, T. Aila, J. Lehtinen, and J. Kautz, \u201cFew-Shot Unsupervised Image-to-Image Translation,\u201d Proc. of 2019 IEEE\/CVF Int. Conf. on Computer Vision (ICCV), pp. 10550-10559, 2019. https:\/\/doi.org\/10.1109\/ICCV.2019.01065","DOI":"10.1109\/ICCV.2019.01065"},{"key":"key-10.20965\/jaciii.2026.p0457-29","doi-asserted-by":"crossref","unstructured":"M. K. Sherwani and S. 
Gopalakrishnan, \u201cA systematic literature review: Deep learning techniques for synthetic medical image generation and their applications in radiotherapy,\u201d Frontiers in Radiology, Vol.4, Article No.1385742, 2024. https:\/\/doi.org\/10.3389\/fradi.2024.1385742","DOI":"10.3389\/fradi.2024.1385742"}],"container-title":["Journal of Advanced Computational Intelligence and Intelligent Informatics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.fujipress.jp\/main\/wp-content\/themes\/Fujipress\/hyosetsu.php?ppno=jacii003000020014","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,19]],"date-time":"2026-03-19T15:04:13Z","timestamp":1773932653000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.fujipress.jp\/jaciii\/jc\/jacii003000020457"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,20]]},"references-count":29,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2026,3,20]]},"published-print":{"date-parts":[[2026,3,20]]}},"URL":"https:\/\/doi.org\/10.20965\/jaciii.2026.p0457","relation":{},"ISSN":["1883-8014","1343-0130"],"issn-type":[{"value":"1883-8014","type":"electronic"},{"value":"1343-0130","type":"print"}],"subject":[],"published":{"date-parts":[[2026,3,20]]}}}