{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T01:58:06Z","timestamp":1760234286965,"version":"build-2065373602"},"reference-count":39,"publisher":"MDPI AG","issue":"5","license":[{"start":{"date-parts":[[2021,5,2]],"date-time":"2021-05-02T00:00:00Z","timestamp":1619913600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Entropy"],"abstract":"<jats:p>Image-to-image translation converts an image in one style into another in a target style while preserving the original content. A desired translator should be capable of generating diverse results in a controllable many-to-many fashion. To this end, we design a novel deep translator, namely the exemplar-domain aware image-to-image translator (EDIT for short). From a logical perspective, the translator needs to perform two main functions, i.e., feature extraction and style transfer. In line with this logical network partition, the generator of our EDIT comprises a set of blocks configured with shared parameters, with the rest configured by varied parameters exported by an exemplar-domain aware parameter network, to explicitly imitate the functionalities of extraction and mapping. The principle behind this is that, for images from multiple domains, the content features can be obtained by an extractor, while (re-)stylization is achieved by mapping the extracted features specifically to different purposes (domains and exemplars). In addition, a discriminator is employed during the training phase to guarantee that the output satisfies the distribution of the target domain. Our EDIT can flexibly and effectively work on multiple domains and arbitrary exemplars in a unified, neat model. 
We conduct experiments to show the efficacy of our design, and to demonstrate its advantages over other state-of-the-art methods both quantitatively and qualitatively.<\/jats:p>","DOI":"10.3390\/e23050565","type":"journal-article","created":{"date-parts":[[2021,5,2]],"date-time":"2021-05-02T08:05:21Z","timestamp":1619942721000},"page":"565","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Unsupervised Exemplar-Domain Aware Image-to-Image Translation"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2011-5337","authenticated-orcid":false,"given":"Yuanbin","family":"Fu","sequence":"first","affiliation":[{"name":"College of Intelligence and Computing, Tianjin University, Tianjin 300350, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3264-3265","authenticated-orcid":false,"given":"Jiayi","family":"Ma","sequence":"additional","affiliation":[{"name":"Electronic Information School, Wuhan University, Wuhan 430072, China"}]},{"given":"Xiaojie","family":"Guo","sequence":"additional","affiliation":[{"name":"College of Intelligence and Computing, Tianjin University, Tianjin 300350, China"}]}],"member":"1968","published-online":{"date-parts":[[2021,5,2]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T., and Efros, A.A. (2017, July 21\u201326). Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_2","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. arXiv."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Zhu, J.Y., Park, T., Isola, P., and Efros, A.A. (2017, October 22\u201329). 
Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Yi, Z., Zhang, H., Tan, P., and Gong, M. (2017, October 22\u201329). Dualgan: Unsupervised dual learning for image-to-image translation. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.310"},{"key":"ref_5","unstructured":"Kim, T., Cha, M., Kim, H., Lee, J.K., and Kim, J. (2017). Learning to discover cross-domain relations with generative adversarial networks. arXiv."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Choi, Y., Choi, M., Kim, M., Ha, J.W., Kim, S., and Choo, J. (2018, June 18\u201322). Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. Proceedings of the 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00916"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Anoosheh, A., Agustsson, E., Timofte, R., and Van Gool, L. (2018, June 18\u201322). Combogan: Unrestrained scalability for image domain translation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPRW.2018.00122"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Hui, L., Li, X., Chen, J., He, H., and Yang, J. (2018, August 20\u201324). Unsupervised multi-domain image translation with domain-specific encoders\/decoders. Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China.","DOI":"10.1109\/ICPR.2018.8545169"},{"key":"ref_9","unstructured":"Liu, M.Y., Huang, X., Mallya, A., Karras, T., Aila, T., Lehtinen, J., and Kautz, J. (2019, October 27\u2013November 2). 
Few-shot unsupervised image-to-image translation. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Seoul, Korea."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Cao, K., Liao, J., and Yuan, L. (2018). Carigans: Unpaired photo-to-caricature translation. arXiv.","DOI":"10.1145\/3272127.3275046"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Chen, Y., Lai, Y.K., and Liu, Y.J. (2018, June 18\u201322). CartoonGAN: Generative adversarial networks for photo cartoonization. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00986"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Chang, H., Lu, J., Yu, F., and Finkelstein, A. (2018, June 18\u201322). Pairedcyclegan: Asymmetric style transfer for applying and removing makeup. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00012"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Wang, Z., Tang, X., Luo, W., and Gao, S. (2018, June 18\u201322). Face aging with identity-preserved conditional generative adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00828"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Gatys, L.A., Ecker, A.S., and Bethge, M. (2016, June 27\u201330). Image style transfer using convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.265"},{"key":"ref_15","unstructured":"Simonyan, K., and Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. arXiv."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Chen, D., Yuan, L., Liao, J., Yu, N., and Hua, G. 
(2018, June 18\u201322). Stereoscopic neural style transfer. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00696"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Korshunova, I., Shi, W., Dambre, J., and Theis, L. (2017, October 22\u201329). Fast face-swap using convolutional neural networks. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.397"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/2601097.2601137","article-title":"Style transfer for headshot portraits","volume":"33","author":"Shih","year":"2014","journal-title":"ACM Trans. Graph."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Azadi, S., Fisher, M., Kim, V.G., Wang, Z., Shechtman, E., and Darrell, T. (2018, June 18\u201322). Multi-content GAN for few-shot font style transfer. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00789"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Shen, F., Yan, S., and Zeng, G. (2018, June 18\u201322). Neural style transfer via meta networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00841"},{"key":"ref_21","unstructured":"Risser, E., Wilmot, P., and Barnes, C. (2017). Stable and controllable neural texture synthesis and style transfer using histogram losses. arXiv."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Huang, X., and Belongie, S. (2017, October 22\u201329). Arbitrary style transfer in real-time with adaptive instance normalization. 
Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.167"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Li, Y., Wang, N., Liu, J., and Hou, X. (2017). Demystifying neural style transfer. arXiv.","DOI":"10.24963\/ijcai.2017\/310"},{"key":"ref_24","first-page":"694","article-title":"Perceptual losses for real-time style transfer and super-resolution","volume":"9906","author":"Johnson","year":"2016","journal-title":"Lect. Notes Comput. Sci."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Chen, D., Yuan, L., Liao, J., Yu, N., and Hua, G. (2017, July 21\u201326). Stylebank: An explicit representation for neural image style transfer. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.296"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Gu, S., Chen, C., Jing, L., and Lu, Y. (2018, June 18\u201322). Arbitrary style transfer with deep feature reshuffle. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00858"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Huang, H., Hao, W., Luo, W., Lin, M., Jiang, W., Zhu, X., Li, Z., and Wei, L. (2017, July 21\u201326). Real-time neural style transfer for videos. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.745"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Lee, H.Y., Tseng, H.Y., Huang, J.B., Singh, M., and Yang, M.H. (2018, September 8\u201314). Diverse image-to-image translation via disentangled representations. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01246-5_3"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Lin, J., Xia, Y., Qin, T., Chen, Z., and Liu, T.Y. 
(2018, June 18\u201322). Conditional image-to-image translation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00579"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Huang, X., Liu, M.Y., Belongie, S., and Kautz, J. (2018, September 8\u201314). Multimodal unsupervised image-to-image translation. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01219-9_11"},{"key":"ref_31","unstructured":"Li, Y., Fang, C., Yang, J., Wang, Z., Lu, X., and Yang, M.H. (2017). Universal style transfer via feature transforms. arXiv."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Sheng, L., Lin, Z., Shao, J., and Wang, X. (2018, June 18\u201322). Avatar-net: Multi-scale zero-shot style transfer by feature decoration. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00860"},{"key":"ref_33","unstructured":"Ma, L., Xu, J., Georgoulis, S., Tuytelaars, T., and Gool, L.V. (2019). Exemplar guided unsupervised image-to-image translation with semantic consistency. arXiv."},{"key":"ref_34","unstructured":"Ha, D., Dai, A.M., and Le, Q.V. (2017). Hypernetworks. arXiv."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Zhang, R., Isola, P., Efros, A.A., Shechtman, E., and Wang, O. (2018, June 18\u201322). The unreasonable effectiveness of deep features as a perceptual metric. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00068"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Shrivastava, A., Pfister, T., Tuzel, O., Susskind, J., Wang, W., and Webb, R. (2017, July 21\u201326). Learning from simulated and unsupervised images through adversarial training. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.241"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Tomei, M., Cornia, M., Baraldi, L., and Cucchiara, R. (2019, June 16\u201320). Art2Real: Unfolding the reality of artworks via semantically-aware image-to-image translation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00600"},{"key":"ref_38","unstructured":"Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. (2016). Improved techniques for training GANs. arXiv."},{"key":"ref_39","unstructured":"Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. (2017). GANs trained by a two time-scale update rule converge to a local Nash equilibrium. arXiv."}],"container-title":["Entropy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1099-4300\/23\/5\/565\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T05:56:39Z","timestamp":1760162199000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1099-4300\/23\/5\/565"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,5,2]]},"references-count":39,"journal-issue":{"issue":"5","published-online":{"date-parts":[[2021,5]]}},"alternative-id":["e23050565"],"URL":"https:\/\/doi.org\/10.3390\/e23050565","relation":{},"ISSN":["1099-4300"],"issn-type":[{"type":"electronic","value":"1099-4300"}],"subject":[],"published":{"date-parts":[[2021,5,2]]}}}