{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,24]],"date-time":"2026-02-24T12:04:43Z","timestamp":1771934683348,"version":"3.50.1"},"reference-count":45,"publisher":"Wiley","license":[{"start":{"date-parts":[[2024,5,2]],"date-time":"2024-05-02T00:00:00Z","timestamp":1714608000000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Information Technology Research Center","award":["IITP-2023-2020-0-01846"],"award-info":[{"award-number":["IITP-2023-2020-0-01846"]}]},{"DOI":"10.13039\/501100003621","name":"Ministry of Science, ICT and Future Planning","doi-asserted-by":"publisher","award":["IITP-2023-2020-0-01846"],"award-info":[{"award-number":["IITP-2023-2020-0-01846"]}],"id":[{"id":"10.13039\/501100003621","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["International Journal of Biomedical Imaging"],"published-print":{"date-parts":[[2024,5,2]]},"abstract":"<jats:p>We present a deep learning-based method that corrects motion artifacts and thus accelerates data acquisition and reconstruction of magnetic resonance images. The novel model, the Motion Artifact Correction by Swin Network (MACS-Net), uses a Swin transformer layer as the fundamental block and the Unet architecture as the neural network backbone. We employ a hierarchical transformer with shifted windows to extract multiscale contextual features during encoding. A new dual upsampling technique is employed to enhance the spatial resolutions of feature maps in the Swin transformer-based decoder layer. A raw magnetic resonance imaging dataset is used for network training and testing; the data contain various motion artifacts with ground truth images of the same subjects. The results were compared to six state-of-the-art MRI image motion correction methods using two types of motions. When motions were brief (within 5\u2009s), the method reduced the average normalized root mean square error (NRMSE) from 45.25% to 17.51%, increased the mean structural similarity index measure (SSIM) from 79.43% to 91.72%, and increased the peak signal-to-noise ratio (PSNR) from 18.24 to 26.57\u2009dB. Similarly, when motions were extended from 5 to 10\u2009s, our approach decreased the average NRMSE from 60.30% to 21.04%, improved the mean SSIM from 33.86% to 90.33%, and increased the PSNR from 15.64 to 24.99\u2009dB. The anatomical structures of the corrected images and the motion-free brain data were similar.<\/jats:p>","DOI":"10.1155\/2024\/8972980","type":"journal-article","created":{"date-parts":[[2024,5,3]],"date-time":"2024-05-03T01:50:09Z","timestamp":1714701009000},"page":"1-12","source":"Crossref","is-referenced-by-count":4,"title":["Swin Transformer and the Unet Architecture to Correct Motion Artifacts in Magnetic Resonance Image Reconstruction"],"prefix":"10.1155","volume":"2024","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6000-9128","authenticated-orcid":true,"given":"Md. 
Biddut","family":"Hossain","sequence":"first","affiliation":[{"name":"Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea"}]},{"given":"Rupali Kiran","family":"Shinde","sequence":"additional","affiliation":[{"name":"Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7001-1667","authenticated-orcid":true,"given":"Shariar Md","family":"Imtiaz","sequence":"additional","affiliation":[{"name":"Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea"}]},{"given":"F. M. Fahmid","family":"Hossain","sequence":"additional","affiliation":[{"name":"Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea"}]},{"given":"Seok-Hee","family":"Jeon","sequence":"additional","affiliation":[{"name":"Department of Electronics Engineering, Incheon National University, 119 Academy-ro, Yeonsu-gu, Incheon 22012, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0334-504X","authenticated-orcid":true,"given":"Ki-Chul","family":"Kwon","sequence":"additional","affiliation":[{"name":"Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8109-2055","authenticated-orcid":true,"given":"Nam","family":"Kim","sequence":"additional","affiliation":[{"name":"Department of Information and Communication Engineering, Chungbuk National University, Cheongju-si 28644, Chungcheongbuk-do, Republic of Korea"}]}],"member":"311","reference":[{"key":"1","doi-asserted-by":"publisher","DOI":"10.3390\/bioengineering10010022"},{"key":"2","doi-asserted-by":"publisher","DOI":"10.1002\/jmri.24850"},{"key":"3","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP40776.2020.9054306"},{"key":"4","doi-asserted-by":"publisher","DOI":"10.1088\/0031-9155\/61\/5\/R32"},{"key":"5","doi-asserted-by":"publisher","DOI":"10.1007\/s00330-006-0470-4"},{"key":"6","doi-asserted-by":"publisher","DOI":"10.1016\/j.pnmrs.2017.04.002"},{"key":"7","doi-asserted-by":"publisher","DOI":"10.1088\/1361-6560\/ab10b2"},{"key":"8","doi-asserted-by":"publisher","DOI":"10.1016\/j.neuroimage.2015.03.013"},{"key":"9","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0133921"},{"key":"10","doi-asserted-by":"publisher","DOI":"10.1002\/mrm.25670"},{"key":"11","article-title":"MoCoNet: motion correction in 3D MPRAGE images using a convolutional neural network approach","author":"K. 
Pawar","year":"2018"},{"key":"12","doi-asserted-by":"publisher","DOI":"10.1016\/j.mri.2020.05.004"},{"key":"13","doi-asserted-by":"publisher","DOI":"10.13104\/imri.2019.23.2.81"},{"key":"14","doi-asserted-by":"publisher","DOI":"10.3390\/diagnostics13071306"},{"key":"15","doi-asserted-by":"publisher","DOI":"10.3390\/cancers15010012"},{"key":"16","doi-asserted-by":"publisher","DOI":"10.3390\/s24030753"},{"key":"17","doi-asserted-by":"publisher","DOI":"10.13104\/imri.2020.24.4.196"},{"key":"18","doi-asserted-by":"publisher","DOI":"10.1155\/2022\/6063779"},{"key":"19","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2017.7952268"},{"key":"20","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v31i1.11231"},{"key":"21","doi-asserted-by":"publisher","DOI":"10.1016\/j.media.2019.04.009"},{"key":"22","doi-asserted-by":"publisher","DOI":"10.1016\/j.artmed.2020.101955"},{"key":"23","doi-asserted-by":"publisher","DOI":"10.1002\/mrm.27783"},{"key":"24","doi-asserted-by":"publisher","DOI":"10.1002\/mrm.27772"},{"key":"25","doi-asserted-by":"publisher","DOI":"10.1016\/j.neuroimage.2021.117756"},{"key":"26","doi-asserted-by":"publisher","DOI":"10.1002\/mrm.27771"},{"key":"27","doi-asserted-by":"publisher","DOI":"10.1016\/j.mri.2020.05.002"},{"key":"28","doi-asserted-by":"publisher","DOI":"10.1007\/s11431-020-1647-3"},{"key":"29","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-88552-6_6"},{"key":"30","article-title":"Attention is all you need","volume-title":"Advances in Neural Information Processing Systems, 30","author":"A. Vaswani"},{"key":"31","doi-asserted-by":"publisher","DOI":"10.1109\/TMI.2022.3147426"},{"key":"32","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-87231-1_30"},{"key":"33","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"34","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"35","doi-asserted-by":"publisher","DOI":"10.1038\/s41597-022-01694-8"},{"key":"36","article-title":"Pydeface tool"},{"key":"37","doi-asserted-by":"publisher","DOI":"10.3389\/fnimg.2022.1073734"},{"key":"38","first-page":"205","article-title":"Swin-Unet: Unet-like pure transformer for medical image segmentation","volume-title":"European conference on computer vision","author":"H. Cao","year":"2022"},{"key":"39","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.207"},{"key":"40","article-title":"An image is worth 16x16 words: transformers for image recognition at scale","author":"A. Dosovitskiy","year":"2020"},{"key":"41","first-page":"558","article-title":"Tokens-to-token ViT: training vision transformers from scratch on imageNet","author":"L. Yuan"},{"key":"42","first-page":"8759","article-title":"Path aggregation network for instance segmentation","author":"S. Liu"},{"key":"43","article-title":"Motion correction in MRI using deep learning and a novel hybrid loss function","author":"L. 
Zhang","year":"2022"},{"key":"44","doi-asserted-by":"publisher","DOI":"10.1016\/j.neuroimage.2022.119411"},{"key":"45","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0278668"}],"container-title":["International Journal of Biomedical Imaging"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/ijbi\/2024\/8972980.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/ijbi\/2024\/8972980.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/ijbi\/2024\/8972980.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,5,3]],"date-time":"2024-05-03T01:50:15Z","timestamp":1714701015000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.hindawi.com\/journals\/ijbi\/2024\/8972980\/"}},"subtitle":[],"editor":[{"given":"Markos G.","family":"Tsipouras","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2024,5,2]]},"references-count":45,"alternative-id":["8972980","8972980"],"URL":"https:\/\/doi.org\/10.1155\/2024\/8972980","relation":{},"ISSN":["1687-4196","1687-4188"],"issn-type":[{"value":"1687-4196","type":"electronic"},{"value":"1687-4188","type":"print"}],"subject":[],"published":{"date-parts":[[2024,5,2]]}}}