{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,17]],"date-time":"2025-11-17T21:40:12Z","timestamp":1763415612910,"version":"build-2065373602"},"reference-count":40,"publisher":"MDPI AG","issue":"19","license":[{"start":{"date-parts":[[2022,9,26]],"date-time":"2022-09-26T00:00:00Z","timestamp":1664150400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100003725","name":"Korea government (MSIT)","doi-asserted-by":"publisher","award":["NRF-2022R1A2C2010363","HR14C0002"],"award-info":[{"award-number":["NRF-2022R1A2C2010363","HR14C0002"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003710","name":"Ministry of Health &amp; Welfare, Republic of Korea","doi-asserted-by":"publisher","award":["NRF-2022R1A2C2010363","HR14C0002"],"award-info":[{"award-number":["NRF-2022R1A2C2010363","HR14C0002"]}],"id":[{"id":"10.13039\/501100003710","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Recent advances in deep learning have contributed greatly to the field of parallel MR imaging, where a reduced amount of k-space data are acquired to accelerate imaging time. In our previous work, we have proposed a deep learning method to reconstruct MR images directly from k-space data acquired with Cartesian trajectories. However, MRI utilizes various non-Cartesian trajectories, such as radial trajectories, with various numbers of multi-channel RF coils according to the purpose of an MRI scan. Thus, it is important for a reconstruction network to efficiently unfold aliasing artifacts due to undersampling and to combine multi-channel k-space data into single-channel data. In this work, a neural network named \u2018ETER-net\u2019 is utilized to reconstruct an MR image directly from k-space data acquired with Cartesian and non-Cartesian trajectories and multi-channel RF coils. In the proposed image reconstruction network, the domain transform network converts k-space data into a rough image, which is then refined in the following network to reconstruct a final image. We also analyze loss functions including adversarial and perceptual losses to improve the network performance. For experiments, we acquired k-space data at a 3T MRI scanner with Cartesian and radial trajectories to show the learning mechanism of the direct mapping relationship between the k-space and the corresponding image by the proposed network and to demonstrate the practical applications. According to our experiments, the proposed method showed satisfactory performance in reconstructing images from undersampled single- or multi-channel k-space data with reduced image artifacts. In conclusion, the proposed method is a deep-learning-based MR reconstruction network, which can be used as a unified solution for parallel MRI, where k-space data are acquired with various scanning trajectories.<\/jats:p>","DOI":"10.3390\/s22197277","type":"journal-article","created":{"date-parts":[[2022,9,28]],"date-time":"2022-09-28T03:30:37Z","timestamp":1664335837000},"page":"7277","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["An End-to-End Recurrent Neural Network for Radial MR Image Reconstruction"],"prefix":"10.3390","volume":"22","author":[{"given":"Changheun","family":"Oh","sequence":"first","affiliation":[{"name":"Neuroscience Research Institute, Gachon University, Incheon 21565, Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7408-8215","authenticated-orcid":false,"given":"Jun-Young","family":"Chung","sequence":"additional","affiliation":[{"name":"Department of Neuroscience, College of Medicine, Gachon University, Incheon 21565, Korea"}]},{"given":"Yeji","family":"Han","sequence":"additional","affiliation":[{"name":"Department of Biomedical Engineering, Gachon University, Incheon 21936, Korea"}]}],"member":"1968","published-online":{"date-parts":[[2022,9,26]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"81","DOI":"10.1016\/j.mri.2021.04.013","article-title":"Quantification of T1, T2 relaxation times from Magnetic Resonance Fingerprinting radially undersampled data using analytical transformations","volume":"80","author":"Dikaios","year":"2021","journal-title":"Magn. Reson. Imaging"},{"key":"ref_2","unstructured":"Deans, S.R. (2007). The Radon Transform and Some of Its Applications, Courier Corporation."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"191","DOI":"10.1016\/j.jmr.2007.06.012","article-title":"On NUFFT-based gridding for non-Cartesian MRI","volume":"188","author":"Fessler","year":"2007","journal-title":"J. Magn. Reson."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"952","DOI":"10.1002\/(SICI)1522-2594(199911)42:5<952::AID-MRM16>3.0.CO;2-S","article-title":"SENSE: Sensitivity encoding for fast MRI","volume":"42","author":"Pruessmann","year":"1999","journal-title":"Magn. Reson. Med."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1202","DOI":"10.1002\/mrm.10171","article-title":"Generalized autocalibrating partially parallel acquisitions (GRAPPA)","volume":"47","author":"Griswold","year":"2002","journal-title":"Magn. Reson. Med. Off. J. Int. Soc. Magn. Reson. Med."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1182","DOI":"10.1002\/mrm.21391","article-title":"Sparse MRI: The application of compressed sensing for rapid MR imaging","volume":"58","author":"Lustig","year":"2007","journal-title":"Magn. Reson. Med. Off. J. Int. Soc. Magn. Reson. Med."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Wu, D., and Wu, C. (2022). Research on the Time-Dependent Split Delivery Green Vehicle Routing Problem for Fresh Agricultural Products with Multiple Time Windows. Agriculture, 12.","DOI":"10.3390\/agriculture12060793"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"2781","DOI":"10.1109\/JSTARS.2021.3059451","article-title":"A hyperspectral image classification method using multifeature vectors and optimized KELM","volume":"14","author":"Chen","year":"2021","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Zhao, H., Liu, J., Chen, H., Chen, J., Li, Y., Xu, J., and Deng, W. (2022). Intelligent diagnosis using continuous wavelet transform and gauss convolutional deep belief network. IEEE Trans. Reliab.","DOI":"10.1109\/TR.2022.3180273"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"109419","DOI":"10.1016\/j.asoc.2022.109419","article-title":"An adaptive differential evolution algorithm based on belief space and generalized opposition-based learning for resource allocation","volume":"127","author":"Deng","year":"2022","journal-title":"Appl. Soft Comput."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Kawauchi, K., Furuya, S., Hirata, K., Katoh, C., Manabe, O., Kobayashi, K., Watanabe, S., and Shiga, T. (2020). A convolutional neural network-based system to classify patients using FDG PET\/CT examinations. BMC Cancer, 20.","DOI":"10.1186\/s12885-020-6694-x"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"122","DOI":"10.1016\/j.ejmp.2021.03.008","article-title":"The promise of artificial intelligence and deep learning in PET and SPECT imaging","volume":"83","author":"Arabi","year":"2021","journal-title":"Phys. Medica"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"204","DOI":"10.1109\/TMI.2019.2923601","article-title":"Co-learning feature fusion maps from PET-CT images of lung cancer","volume":"39","author":"Kumar","year":"2019","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"025019","DOI":"10.1088\/2057-1976\/ac53bd","article-title":"A few-shot U-Net deep learning model for lung cancer lesion segmentation via PET\/CT imaging","volume":"8","author":"Protonotarios","year":"2022","journal-title":"Biomed. Phys. Eng. Express"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"48","DOI":"10.1038\/s41746-022-00592-y","article-title":"Machine learning for medical imaging: Methodological failures and recommendations for the future","volume":"5","author":"Varoquaux","year":"2022","journal-title":"NPJ Digit. Med."},{"key":"ref_16","unstructured":"Han, Y., and Ye, J.C. (2018). k-Space Deep Learning for Accelerated MRI. arXiv."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"2188","DOI":"10.1002\/mrm.27201","article-title":"KIKI-net: Cross-domain convolutional neural networks for reconstructing undersampled magnetic resonance images","volume":"5","author":"Eo","year":"2018","journal-title":"Magn. Reson. Med."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"6209","DOI":"10.1002\/mp.12600","article-title":"A parallel MR imaging method using multilayer perceptron","volume":"44","author":"Kwon","year":"2017","journal-title":"Med. Phys."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"1310","DOI":"10.1109\/TMI.2017.2785879","article-title":"Dagan: Deep de-aliasing generative adversarial networks for fast compressed sensing mri reconstruction","volume":"37","author":"Yang","year":"2018","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"3055","DOI":"10.1002\/mrm.26977","article-title":"Learning a variational network for reconstruction of accelerated MRI data","volume":"79","author":"Hammernik","year":"2018","journal-title":"Magn. Reson. Med."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"491","DOI":"10.1109\/TMI.2017.2760978","article-title":"A deep cascade of convolutional neural networks for dynamic MR image reconstruction","volume":"37","author":"Schlemper","year":"2018","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"487","DOI":"10.1038\/nature25988","article-title":"Image reconstruction by domain-transform manifold learning","volume":"555","author":"Zhu","year":"2018","journal-title":"Nature"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"193","DOI":"10.1002\/mp.14566","article-title":"A k-space-to-image reconstruction network for MRI using recurrent neural network","volume":"48","author":"Oh","year":"2021","journal-title":"Med. Phys."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"1289","DOI":"10.1109\/TMI.2018.2833635","article-title":"Image reconstruction is a new frontier of machine learning","volume":"37","author":"Wang","year":"2018","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_25","unstructured":"Visin, F., Kastner, K., Cho, K., Matteucci, M., Courville, A., and Bengio, Y. (2015). Renet: A recurrent neural network based alternative to convolutional networks. arXiv."},{"key":"ref_26","unstructured":"Makhzani, A., and Frey, B.J. (2015, January 7\u201312). Winner-take-all autoencoders. Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"1418","DOI":"10.1109\/TMI.2018.2823768","article-title":"Framing U-Net via deep convolutional framelets: Application to sparse-view CT","volume":"37","author":"Han","year":"2018","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Ledig, C., Theis, L., Husz\u00e1r, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., and Wang, Z. (2017, January 21\u201326). Photo-realistic single image super-resolution using a generative adversarial network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.19"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T., and Efros, A.A. (2017, January 21\u201326). Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_30","unstructured":"Pathak, D., Krahenbuhl, P., Donahue, J., Darrell, T., and Efros, A.A. (July, January 26). Context encoders: Feature learning by inpainting. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Zhang, R., Isola, P., and Efros, A.A. (2016). Colorful image colorization. Proceedings of the European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-319-46487-9_40"},{"key":"ref_32","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014, January 8\u201313). Generative adversarial nets. Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Johnson, J., Alahi, A., and Fei-Fei, L. (2016). Perceptual losses for real-time style transfer and super-resolution. Proceedings of the European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"ref_34","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv."},{"key":"ref_35","unstructured":"Kingma, D.P., and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv."},{"key":"ref_36","unstructured":"Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., and Lerer, A. (2022, September 22). Automatic differentiation in PyTorch. NIPS-W, Available online: https:\/\/openreview.net\/forum?id=BJJsrmfCZ."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Cho, K., Van Merri\u00ebnboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv.","DOI":"10.3115\/v1\/D14-1179"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","article-title":"Long short-term memory","volume":"9","author":"Hochreiter","year":"1997","journal-title":"Neural Comput."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"2306","DOI":"10.1109\/TMI.2021.3075856","article-title":"Results of the 2020 fastmri challenge for machine learning mr image reconstruction","volume":"40","author":"Muckley","year":"2021","journal-title":"IEEE Trans. Med. Imaging"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/19\/7277\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:39:28Z","timestamp":1760143168000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/19\/7277"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,26]]},"references-count":40,"journal-issue":{"issue":"19","published-online":{"date-parts":[[2022,10]]}},"alternative-id":["s22197277"],"URL":"https:\/\/doi.org\/10.3390\/s22197277","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2022,9,26]]}}}