{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T03:16:25Z","timestamp":1760238985064,"version":"build-2065373602"},"reference-count":53,"publisher":"MDPI AG","issue":"18","license":[{"start":{"date-parts":[[2020,9,14]],"date-time":"2020-09-14T00:00:00Z","timestamp":1600041600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61703328"],"award-info":[{"award-number":["61703328"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100002858","name":"China Postdoctoral Science Foundation","doi-asserted-by":"publisher","award":["2018M631165"],"award-info":[{"award-number":["2018M631165"]}],"id":[{"id":"10.13039\/501100002858","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100012226","name":"Fundamental Research Funds for the Central Universities","doi-asserted-by":"publisher","award":["XJJ2018254"],"award-info":[{"award-number":["XJJ2018254"]}],"id":[{"id":"10.13039\/501100012226","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Image neural style transfer is a process of utilizing convolutional neural networks to render a content image based on a style image. The algorithm can compute a stylized image with the original content of the given content image but the new style of the given style image. Style transfer has become a hot topic in both academic literature and industrial applications. The stylized results of existing models are often not ideal because of the color difference between the two input images and the inconspicuous details of the content image. 
To solve these problems, we propose two style transfer models based on robust nonparametric distribution transfer. The first model converts the color probability density function of the content image into that of the style image before style transfer. When the color dynamic range of the content image is smaller than that of the style image, this model renders a more reasonable spatial structure than the existing models. Then, an adaptive detail-enhanced exposure correction algorithm is proposed for underexposed images. Based on this, the second model is proposed for the style transfer of underexposed content images. It can further improve the stylized results of underexposed images. Compared with popular methods, the proposed methods achieve satisfactory qualitative and quantitative results.<\/jats:p>","DOI":"10.3390\/s20185232","type":"journal-article","created":{"date-parts":[[2020,9,14]],"date-time":"2020-09-14T09:04:53Z","timestamp":1600074293000},"page":"5232","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Robust Nonparametric Distribution Transfer with Exposure Correction for Image Neural Style Transfer"],"prefix":"10.3390","volume":"20","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0327-6729","authenticated-orcid":false,"given":"Shuai","family":"Liu","sequence":"first","affiliation":[{"name":"School of Software Engineering, Xi\u2019an Jiaotong University, Xi\u2019an 710049, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8209-4113","authenticated-orcid":false,"given":"Caixia","family":"Hong","sequence":"additional","affiliation":[{"name":"School of Software Engineering, Xi\u2019an Jiaotong University, Xi\u2019an 710049, China"}]},{"given":"Jing","family":"He","sequence":"additional","affiliation":[{"name":"School of Software Engineering, Xi\u2019an Jiaotong University, Xi\u2019an 710049, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3669-3748","authenticated-orcid":false,"given":"Zhiqiang","family":"Tian","sequence":"additional","affiliation":[{"name":"School of Software Engineering, Xi\u2019an Jiaotong University, Xi\u2019an 710049, China"}]}],"member":"1968","published-online":{"date-parts":[[2020,9,14]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"866","DOI":"10.1109\/TVCG.2012.160","article-title":"State of the \u201cart\u201d: A taxonomy of artistic stylization techniques for images and video","volume":"19","author":"Kyprianidis","year":"2013","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Semmo, A., Isenberg, T., and D\u00f6llner, J. (2017, January 29\u201330). Neural style transfer: A paradigm shift for image-based artistic rendering?. Proceedings of the Symposium on Non-Photorealistic Animation and Rendering, Los Angeles, CA, USA.","DOI":"10.1145\/3092919.3092920"},{"key":"ref_3","unstructured":"Jing, Y., Yang, Y., Feng, Z., Ye, J., Yu, Y., and Song, M. (2019). Neural style transfer: A review. IEEE Trans. Vis. Comput. Graph."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Rosin, P., and Collomosse, J. (2012). Image and Video-Based Artistic Stylization, Springer.","DOI":"10.1007\/978-1-4471-4519-6"},{"key":"ref_5","unstructured":"Krizhevsky, A., Sutskever, I., and Hinton, G.E. (2012, January 3\u20136). Imagenet classification with deep convolutional neural networks. Proceedings of the 26th Annual Conference on Neural Information Processing Systems, Lake Tahoe, NV, USA."},{"key":"ref_6","unstructured":"Simonyan, K., and Zisserman, A. (2016). Very deep convolutional networks for large-scale image recognition. 
arXiv."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"436","DOI":"10.1038\/nature14539","article-title":"Deep learning","volume":"521","author":"Bengio","year":"2015","journal-title":"Nature"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Gatys, L.A., Ecker, A.S., and Bethge, M. (2016, January 27\u201330). Image style transfer using convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.265"},{"key":"ref_9","unstructured":"Li, Y., Fang, C., Yang, J., Wang, Z., Lu, X., and Yang, M.H. (2017). Universal style transfer via feature transforms. arXiv."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Shen, F., Yan, S., and Zeng, G. (2018, January 18\u201322). Neural style transfer via meta networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00841"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Huang, X., and Belongie, S. (2017, January 22\u201329). Arbitrary style transfer in real-time with adaptive instance normalization. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.167"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Karayev, S., Trentacoste, M., Han, H., Agarwala, A., Darrell, T., Hertzmann, A., and Winnemoeller, H. (2013). Recognizing image style. arXiv.","DOI":"10.5244\/C.28.122"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Yoo, J., Uh, Y., Chun, S., Kang, B., and Ha, J.W. (2019). Photorealistic style transfer via wavelet transforms. arXiv.","DOI":"10.1109\/ICCV.2019.00913"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Li, X., Liu, S., Kautz, J., and Yang, M.H. (2019, January 15\u201321). Learning linear transformations for fast image and video style transfer. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00393"},{"key":"ref_15","unstructured":"Song, Y.Z., Rosin, P.L., Hall, P.M., and Collomosse, J.P. (2008). Arty shapes. Computational Aesthetics, The Eurographics Association."},{"key":"ref_16","unstructured":"Kolliopoulos, A. (2005). Image Segmentation for Stylized Non-Photorealistic Rendering and Animation, University of Toronto."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Hertzmann, A. (1998, January 19\u201324). Painterly rendering with curved brush strokes of multiplesizes. Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, Orlando, FL, USA.","DOI":"10.1145\/280814.280951"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Efros, A.A., and Freeman, W.T. (2001, January 12\u201317). Image quilting for texture synthesis and transfer. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA.","DOI":"10.1145\/383259.383296"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Li, Y., Wang, N., Liu, J., and Hou, X. (2017). Demystifying neural style transfer. arXiv.","DOI":"10.24963\/ijcai.2017\/310"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Li, C., and Wand, M. (2016, January 27\u201330). Combining markov random fields and convolutional neural networks for image synthesis. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.272"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Liao, J., Yao, Y., Yuan, L., Hua, G., and Kang, S.B. (2017). Visual attribute transfer through deep image analogy. arXiv.","DOI":"10.1145\/3072959.3073683"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Hertzmann, A., Jacobs, C.E., Oliver, N., Curless, B., and Salesin, D.H. (2001, January 12\u201317). Image analogies. 
Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA.","DOI":"10.1145\/383259.383295"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Liu, X.C., Cheng, M.M., Lai, Y.K., and Rosin, P.L. (2017, January 29\u201330). Depth-aware neural style transfer. Proceedings of the Symposium on Non-Photorealistic Animation and Rendering, Los Angeles, CA, USA.","DOI":"10.1145\/3092919.3092924"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Johnson, J., Alahi, A., and Li, F.-F. (2016). Perceptual losses for real-time style transfer and super-resolution. European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"ref_25","unstructured":"Champandard, A.J. (2016). Semantic style transfer and turning two-bit doodles into fine artworks. arXiv."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Li, S., Xu, X., Nie, L., and Chua, T.S. (2017, January 23\u201327). Laplacian-steered neural style transfer. Proceedings of the 25th ACM International Conference on Multimedia, Mountain View, CA, USA.","DOI":"10.1145\/3123266.3123425"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Gatys, L.A., Ecker, A.S., Bethge, M., Hertzmann, A., and Shechtman, E. (2017, January 21\u201326). Controlling perceptual factors in neural style transfer. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.397"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Wang, X., Oxholm, G., Zhang, D., and Wang, Y.F. (2017, January 21\u201326). Multimodal transfer: A hierarchical deep convolutional neural network for fast artistic style transfer. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.759"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Luan, F., Paris, S., Shechtman, E., and Bala, K. 
(2017, January 21\u201326). Deep photo style transfer. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.740"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Mechrez, R., Shechtman, E., and Zelnik-Manor, L. (2017). Photorealistic style transfer with screened poisson equation. arXiv.","DOI":"10.5244\/C.31.153"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Chen, D., Yuan, L., Liao, J., Yu, N., and Hua, G. (2018, January 18\u201322). Stereoscopic neural style transfer. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00696"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Ruder, M., Dosovitskiy, A., and Brox, T. (2016). Artistic style transfer for videos. German Conference on Pattern Recognition, Springer.","DOI":"10.1007\/978-3-319-45886-1_3"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"1199","DOI":"10.1007\/s11263-018-1089-z","article-title":"Artistic style transfer for videos and spherical images","volume":"126","author":"Ruder","year":"2018","journal-title":"Int. J. Comput. Vis."},{"key":"ref_34","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, S., and Bengio, Y. (2014, January 8\u201313). Generative adversarial nets. Proceedings of the Annual Conference on Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T., and Efros, A.A. (2017, January 21\u201326). Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_36","unstructured":"Mirza, M., and Osindero, S. (2014). Conditional generative adversarial nets. 
arXiv."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Zhu, J.Y., Park, T., Isola, P., and Efros, A.A. (2017, January 22\u201329). Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Choi, Y., Choi, M., Kim, M., Ha, J.M., Kim, S., and Choo, J. (2018, January 18\u201322). Stargan: Unified generative adversarial networks for multi-domain image-to-image translation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00916"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Chang, H., Lu, J., Yu, F., and Finkelstein, A. (2018, January 18\u201322). Pairedcyclegan: Asymmetric style transfer for applying and removing makeup. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00012"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Yi, Z., Zhang, H., Tan, P., and Gong, M. (2017, January 22\u201329). Dualgan: Unsupervised dual learning for image-to-image translation. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.310"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Liu, M.Y., Huang, X., Mallya, A., Karras, T., Aila, T., Lehtinen, J., and Kautz, J. (2019). Few-shot unsupervised image-to-image translation. arXiv.","DOI":"10.1109\/ICCV.2019.01065"},{"key":"ref_42","unstructured":"Huang, H., Yu, P.S., and Wang, C. (2018). An introduction to image synthesis with generative adversarial nets. 
arXiv."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"123","DOI":"10.1016\/j.cviu.2006.11.011","article-title":"Automated colour grading using colour distribution transfer","volume":"107","author":"Kokaram","year":"2007","journal-title":"Comput. Vis. Image. Underst."},{"key":"ref_44","unstructured":"Piti\u00e9, F., Kokaram, A.C., and Dahyot, R. (2010, January 24\u201328). N-dimensional probability density function transfer and its application to color transfer. Proceedings of the 10th IEEE International Conference on Computer Vision (ICCV\u201905), Beijing, China."},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"1397","DOI":"10.1109\/TPAMI.2012.213","article-title":"Guided image filtering","volume":"35","author":"He","year":"2012","journal-title":"IEEE Trans. Patern. Anal."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Cho, S., Shrestha, B., Joo, H.J., and Hong, B. (2012). Improvement of retinex algorithm for backlight image efficiency. Computer Science and Convergence, Springer.","DOI":"10.1007\/978-94-007-2792-2_55"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"1719","DOI":"10.1109\/TSMCB.2012.2228639","article-title":"A generalized laplacian of gaussian filter for blob detection and its applications","volume":"43","author":"Kong","year":"2013","journal-title":"IEEE Trans. Cybern."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Mould, D. (2014, January 8\u201310). Authorial subjective evaluation of non-photorealistic images. Proceedings of the Workshop on Non-Photorealistic Animation and Rendering, Vancouver, BC, Canada.","DOI":"10.1145\/2630397.2630400"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Isenberg, T., Neumann, P., Carpendale, S., Sousa, M.C., and Jorge, J.A. (2006, January 5\u20137). Non-photorealistic rendering in context: An observational study. 
Proceedings of the 4th International Symposium on Non-Photorealistic Animation and Rendering, Annecy, France.","DOI":"10.1145\/1124728.1124747"},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"34","DOI":"10.1109\/38.946629","article-title":"Color transfer between images","volume":"21","author":"Reinhard","year":"2001","journal-title":"IEEE Comput. Graph. Appl."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"Sanakoyeu, A., Kotovenko, D., Lang, S., and Ommer, B. (2018, January 8\u201314). A style-aware content loss for real-time hd style transfer. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01237-3_43"},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"546","DOI":"10.1109\/TIP.2018.2869695","article-title":"Gated-gan: Adversarial gated networks for multi-collection style transfer","volume":"28","author":"Chen","year":"2018","journal-title":"IEEE Trans. 
Image Process."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/18\/5232\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T10:09:42Z","timestamp":1760177382000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/18\/5232"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,9,14]]},"references-count":53,"journal-issue":{"issue":"18","published-online":{"date-parts":[[2020,9]]}},"alternative-id":["s20185232"],"URL":"https:\/\/doi.org\/10.3390\/s20185232","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2020,9,14]]}}}