{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,28]],"date-time":"2025-10-28T15:13:14Z","timestamp":1761664394995,"version":"build-2065373602"},"reference-count":31,"publisher":"Institution of Engineering and Technology (IET)","issue":"8","license":[{"start":{"date-parts":[[2022,3,24]],"date-time":"2022-03-24T00:00:00Z","timestamp":1648080000000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0\/"}],"content-domain":{"domain":["ietresearch.onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["IET Image Processing"],"published-print":{"date-parts":[[2022,6]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>As an advanced image\u2010synthesis task that requires no labelled data, unsupervised image\u2010to\u2010image translation converts images from one characteristic domain X into another domain Y. The key is to learn a mapping between the two image domains. Existing methods mainly adopt GANs to generate authentic images, but the discriminators are discarded once the training process is completed. To avoid this waste of training resources, a feature encoder reusing method is proposed, which reduces the number of parameters and accelerates training. In addition, we add an adaptive perceptual loss that focuses on the quality of the generated images: it reuses the encoder during training to impose feature\u2010level constraints, applying the L1\u2010norm to the intermediate feature layers of the generated samples. 
The experiments illustrate that our framework generates more natural images and provides an effective solution for unsupervised translation.<\/jats:p>","DOI":"10.1049\/ipr2.12485","type":"journal-article","created":{"date-parts":[[2022,3,24]],"date-time":"2022-03-24T19:01:06Z","timestamp":1648148466000},"page":"2219-2227","update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Re\u2010EnGAN: Unsupervised image\u2010to\u2010image translation based on reused feature encoder in CycleGAN"],"prefix":"10.1049","volume":"16","author":[{"given":"Yu","family":"Lu","sequence":"first","affiliation":[{"name":"School of Information Science and Engineering Shandong University  Qingdao China"},{"name":"Cyberspace and Information Technology Center in Shandong Province  Jinan China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3160-9233","authenticated-orcid":false,"given":"Ju","family":"Liu","sequence":"additional","affiliation":[{"name":"School of Information Science and Engineering Shandong University  Qingdao China"}]},{"given":"Lin","family":"Lv","sequence":"additional","affiliation":[{"name":"School of Information Science and Engineering Shandong University  Qingdao China"}]},{"given":"Xuesong","family":"Gao","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Digital Multimedia Technology at Hisense  Qingdao China"},{"name":"College of Intelligence and Computing Tianjin University  Tianjin China"}]},{"given":"Weiqiang","family":"Chen","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Digital Multimedia Technology at Hisense  Qingdao China"}]},{"given":"Yuyi","family":"Zhang","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Digital Multimedia Technology at Hisense  Qingdao China"}]}],"member":"265","published-online":{"date-parts":[[2022,3,24]]},"reference":[{"key":"e_1_2_9_2_1","doi-asserted-by":"crossref","unstructured":"Ledig C. Theis L. et al.: 
Photo\u2010realistic single image super\u2010resolution using a generative adversarial network. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition pp. 105\u2013114. IEEE Piscataway NJ (2017)","DOI":"10.1109\/CVPR.2017.19"},{"key":"e_1_2_9_3_1","doi-asserted-by":"publisher","DOI":"10.1049\/ipr2.12250"},{"key":"e_1_2_9_4_1","doi-asserted-by":"crossref","unstructured":"Zhang R. Isola P. Efros A.:Colorful image colorization. In:2016 European Conference on Computer Vision pp.649\u2013666.Springer Cham(2016)","DOI":"10.1007\/978-3-319-46487-9_40"},{"key":"e_1_2_9_5_1","article-title":"Pixel\u2010level semantics guided image colorization","author":"Zhao J.","year":"2018","journal-title":"arXiv:1808.01597"},{"key":"e_1_2_9_6_1","doi-asserted-by":"crossref","unstructured":"Johnson J. Alahi A. Fei\u2010Fei L.:Perceptual losses for real\u2010time style transfer and super\u2010resolution. In:2016 European Conference on Computer Vision pp.694\u2013711.Springer Cham(2016)","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"e_1_2_9_7_1","unstructured":"Lu Y.:Research on unsupervised image\u2010to\u2010image translation based on CycleGAN. Master's thesis Shandong University (2021)"},{"key":"e_1_2_9_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2939649"},{"key":"e_1_2_9_9_1","doi-asserted-by":"crossref","unstructured":"Oz M. Vaghela H. Bagul S.:Semi\u2010supervised image\u2010to\u2010image translation. In:2019 International Conference of Artificial Intelligence and Information Technology pp.16\u201320.IEEE Piscataway NJ(2019)","DOI":"10.1109\/ICAIIT.2019.8834613"},{"key":"e_1_2_9_10_1","doi-asserted-by":"crossref","unstructured":"Lata K. Dave M. Nishanth N.:Image\u2010to\u2010image translation using generative adversarial network. 
In:2019 3rd International Conference on Electronics Communication and Aerospace Technology pp.186\u2013189.IEEE Piscataway NJ(2019)","DOI":"10.1109\/ICECA.2019.8822195"},{"key":"e_1_2_9_11_1","doi-asserted-by":"crossref","unstructured":"Zhu J.Y. Park T. Isola P. et\u00a0al.:Unpaired image\u2010to\u2010image translation using cycle\u2010consistent adversarial networks. In:2017 IEEE International Conference on Computer Vision pp.2242\u20132251.IEEE Piscataway NJ(2017)","DOI":"10.1109\/ICCV.2017.244"},{"key":"e_1_2_9_12_1","doi-asserted-by":"crossref","unstructured":"Jacobs C.E. Hertzmann A. Oliver N. et\u00a0al.:Image analogies. In:28th Annual Conference on Computer Graphics and Interactive Techniques pp.327\u2013340.ACM Press New York(2001)","DOI":"10.1145\/383259.383295"},{"key":"e_1_2_9_13_1","doi-asserted-by":"crossref","unstructured":"Isola P. Zhu J.Y. Zhou T. et\u00a0al.:Image\u2010to\u2010image translation with conditional adversarial networks. In:2017 IEEE Conference on Computer Vision and Pattern Recognition pp.5967\u20135976.IEEE Piscataway NJ(2017)","DOI":"10.1109\/CVPR.2017.632"},{"key":"e_1_2_9_14_1","unstructured":"Zhu J.Y. Zhang R. Pathak D. et\u00a0al.:Toward multimodal image\u2010to\u2010image translation. In:The 31st International Conference on Neural Information Processing Systems pp.465\u2013476.Curran Associates Red Hook NY(2017)"},{"key":"e_1_2_9_15_1","doi-asserted-by":"crossref","unstructured":"Wang T.C. Liu M.Y. Zhu J.Y. et\u00a0al.:High\u2010resolution image synthesis and semantic manipulation with conditional GANs. In:2018 IEEE Conference on Computer Vision and Pattern Recognition pp.8798\u20138807.IEEE Piscataway NJ(2018)","DOI":"10.1109\/CVPR.2018.00917"},{"key":"e_1_2_9_16_1","article-title":"Unsupervised cross\u2010domain image generation","author":"Taigman Y.","year":"2016","journal-title":"arXiv:1611.02200"},{"key":"e_1_2_9_17_1","doi-asserted-by":"crossref","unstructured":"Yi Z. Zhang H. Tan P. 
et\u00a0al.:DualGAN: unsupervised dual learning for image\u2010to\u2010image translation. In:2017 IEEE International Conference on Computer Vision pp.2868\u20132876.IEEE Piscataway NJ(2017)","DOI":"10.1109\/ICCV.2017.310"},{"key":"e_1_2_9_18_1","unstructured":"Kim T. Cha M. Kim H. et\u00a0al.:Learning to discover cross\u2010domain relations with generative adversarial networks. In:2017 International Conference on Machine Learning Sydney Australia pp.1857\u20131865.Microtome Publishing Brookline MA(2017)"},{"key":"e_1_2_9_19_1","doi-asserted-by":"crossref","unstructured":"Shen Z. et\u00a0al.:One\u2010to\u2010one mapping for unpaired image\u2010to\u2010image translation. In:2020 IEEE Winter Conference on Applications of Computer Vision pp.1159\u20131168.IEEE Piscataway NJ(2020)","DOI":"10.1109\/WACV45572.2020.9093622"},{"key":"e_1_2_9_20_1","article-title":"Coupled generative adversarial networks","author":"Liu M.Y.","year":"2016","journal-title":"arXiv:1606.07536"},{"key":"e_1_2_9_21_1","unstructured":"Liu M.\u2010Y. Breuel T. Kautz J.:Unsupervised image\u2010to\u2010image translation networks. In:The 31st Conference on Neural Information Processing Systems pp.701\u2013709.Curran Associates Red Hook NY(2017)"},{"key":"e_1_2_9_22_1","doi-asserted-by":"crossref","unstructured":"Lu G. Zhou Z. Song Y. et\u00a0al.:Guiding the one\u2010to\u2010one mapping in CycleGAN via optimal transport. In:The 33rd AAAI Conference on Artificial Intelligence pp.4432\u20134439.AAAI Washington D.C. (2019)","DOI":"10.1609\/aaai.v33i01.33014432"},{"key":"e_1_2_9_23_1","article-title":"Harmonic unpaired image\u2010to\u2010image translation","author":"Zhang R.","year":"2019","journal-title":"arXiv:1902.09727"},{"key":"e_1_2_9_24_1","doi-asserted-by":"crossref","unstructured":"Wu W. Cao K. Li C. et\u00a0al.:TransGaGa: Geometry\u2010aware unsupervised image\u2010to\u2010image translation. 
In:2019 IEEE Conference on Computer Vision and Pattern Recognition pp.8012\u20138021.IEEE Piscataway NJ(2019)","DOI":"10.1109\/CVPR.2019.00820"},{"key":"e_1_2_9_25_1","doi-asserted-by":"crossref","unstructured":"Mahendran A. Vedaldi A.:Understanding deep image representations by inverting them. In:2015 IEEE Conference on Computer Vision and Pattern Recognition pp.5188\u20135196.IEEE Piscataway NJ(2015)","DOI":"10.1109\/CVPR.2015.7299155"},{"key":"e_1_2_9_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2018.2875194"},{"key":"e_1_2_9_27_1","unstructured":"Simonyan K. Vedaldi A. Zisserman A.:Deep inside convolutional networks: visualizing image classification models and saliency maps. In:2014 International Conference on Learning Representations pp.1\u20138.ICLR Toronto(2014)"},{"key":"e_1_2_9_28_1","unstructured":"Gatys L. Ecker A. Bethge M.:Texture synthesis using convolutional neural networks. In:The 29th International Conference on Neural Information Processing Systems pp.262\u2013270.Curran Associates Red Hook NY(2015)"},{"key":"e_1_2_9_29_1","doi-asserted-by":"crossref","unstructured":"Johnson J. Alahi A. Li F.:Perceptual losses for real\u2010time style transfer and super\u2010resolution. In:2016 European Conference on Computer Vision pp.694\u2013711.Springer Cham(2016)","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"e_1_2_9_30_1","unstructured":"Simonyan K. Zisserman A.:Very deep convolutional networks for large\u2010scale image recognition. In:2015 International Conference on Learning Representations pp.1\u201314.ICLR Toronto(2015)"},{"key":"e_1_2_9_31_1","article-title":"U\u2010GAT\u2010IT: unsupervised generative attentional networks with adaptive layer\u2010instance normalization for image\u2010to\u2010image translation","author":"Junho J.","year":"2020","journal-title":"arXiv:1907.10830"},{"key":"e_1_2_9_32_1","unstructured":"Heusel M. et\u00a0al.:GANs trained by a two time\u2010scale update rule converge to a local Nash equilibrium. 
In:The 31st Conference on Neural Information Processing Systems pp.6627\u20136638.Curran Associates Red Hook NY(2017)"}],"container-title":["IET Image Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1049\/ipr2.12485","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/full-xml\/10.1049\/ipr2.12485","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/ietresearch.onlinelibrary.wiley.com\/doi\/pdf\/10.1049\/ipr2.12485","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,28]],"date-time":"2025-10-28T12:16:55Z","timestamp":1761653815000},"score":1,"resource":{"primary":{"URL":"https:\/\/ietresearch.onlinelibrary.wiley.com\/doi\/10.1049\/ipr2.12485"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,24]]},"references-count":31,"journal-issue":{"issue":"8","published-print":{"date-parts":[[2022,6]]}},"alternative-id":["10.1049\/ipr2.12485"],"URL":"https:\/\/doi.org\/10.1049\/ipr2.12485","archive":["Portico"],"relation":{},"ISSN":["1751-9659","1751-9667"],"issn-type":[{"type":"print","value":"1751-9659"},{"type":"electronic","value":"1751-9667"}],"subject":[],"published":{"date-parts":[[2022,3,24]]},"assertion":[{"value":"2022-01-06","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-03-02","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2022-03-24","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}