{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,7]],"date-time":"2026-04-07T20:56:29Z","timestamp":1775595389050,"version":"3.50.1"},"reference-count":43,"publisher":"MDPI AG","issue":"7","license":[{"start":{"date-parts":[[2025,7,15]],"date-time":"2025-07-15T00:00:00Z","timestamp":1752537600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Ministry of Education Humanities and Social Science Project","award":["24YJC760083"],"award-info":[{"award-number":["24YJC760083"]}]},{"name":"Ministry of Education Humanities and Social Science Project","award":["AHSKQ2023D116"],"award-info":[{"award-number":["AHSKQ2023D116"]}]},{"name":"Ministry of Education Humanities and Social Science Project","award":["YK22-14-02"],"award-info":[{"award-number":["YK22-14-02"]}]},{"name":"Ministry of Education Humanities and Social Science Project","award":["QD202433"],"award-info":[{"award-number":["QD202433"]}]},{"name":"Philosophy and Social Science Planning Project of Anhui Province","award":["24YJC760083"],"award-info":[{"award-number":["24YJC760083"]}]},{"name":"Philosophy and Social Science Planning Project of Anhui Province","award":["AHSKQ2023D116"],"award-info":[{"award-number":["AHSKQ2023D116"]}]},{"name":"Philosophy and Social Science Planning Project of Anhui Province","award":["YK22-14-02"],"award-info":[{"award-number":["YK22-14-02"]}]},{"name":"Philosophy and Social Science Planning Project of Anhui Province","award":["QD202433"],"award-info":[{"award-number":["QD202433"]}]},{"DOI":"10.13039\/501100013802","name":"Nanjing Vocational University of Industry Technology","doi-asserted-by":"publisher","award":["24YJC760083"],"award-info":[{"award-number":["24YJC760083"]}],"id":[{"id":"10.13039\/501100013802","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100013802","name":"Nanjing Vocational University of Industry 
Technology","doi-asserted-by":"publisher","award":["AHSKQ2023D116"],"award-info":[{"award-number":["AHSKQ2023D116"]}],"id":[{"id":"10.13039\/501100013802","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100013802","name":"Nanjing Vocational University of Industry Technology","doi-asserted-by":"publisher","award":["YK22-14-02"],"award-info":[{"award-number":["YK22-14-02"]}],"id":[{"id":"10.13039\/501100013802","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100013802","name":"Nanjing Vocational University of Industry Technology","doi-asserted-by":"publisher","award":["QD202433"],"award-info":[{"award-number":["QD202433"]}],"id":[{"id":"10.13039\/501100013802","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Anhui University of Technology","award":["24YJC760083"],"award-info":[{"award-number":["24YJC760083"]}]},{"name":"Anhui University of Technology","award":["AHSKQ2023D116"],"award-info":[{"award-number":["AHSKQ2023D116"]}]},{"name":"Anhui University of Technology","award":["YK22-14-02"],"award-info":[{"award-number":["YK22-14-02"]}]},{"name":"Anhui University of Technology","award":["QD202433"],"award-info":[{"award-number":["QD202433"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Entropy"],"abstract":"<jats:p>Style transfer technology has seen substantial attention in image synthesis, notably in applications like oil painting, digital printing, and Chinese landscape painting. However, when applying the unique style of Chinese paper-cutting to style transfer, it is often difficult to generate transferred images that retain the essence of paper-cutting art and have strong visual appeal. Therefore, this paper proposes a new Transformer-based method for Chinese paper-cutting style transfer, aiming to achieve efficient transformation of Chinese paper-cutting art styles. 
Specifically, the network consists of a frequency-domain mixture block and a multi-level feature contrastive learning module. The frequency-domain mixture block explores spatial and frequency-domain interaction information, integrates multiple attention windows along with frequency-domain features, preserves critical details, and enhances the effectiveness of style conversion. To further embody the symmetrical structures and hollowed hierarchical patterns intrinsic to Chinese paper-cutting, the multi-level feature contrastive learning module is built on a contrastive learning strategy. This module maximizes mutual information between multi-level transferred features and content features, improves the consistency of representations across different layers, and thus accentuates the unique symmetrical aesthetics and artistic expression of paper-cutting. Extensive experimental results demonstrate that the proposed method outperforms existing state-of-the-art approaches in both qualitative and quantitative evaluations. Additionally, we created a Chinese paper-cutting dataset that, although modest in size, represents an important initial step towards enriching existing resources. 
This dataset provides valuable training data and a reference benchmark for future research in this field.<\/jats:p>","DOI":"10.3390\/e27070754","type":"journal-article","created":{"date-parts":[[2025,7,15]],"date-time":"2025-07-15T11:52:58Z","timestamp":1752580378000},"page":"754","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Chinese Paper-Cutting Style Transfer via Vision Transformer"],"prefix":"10.3390","volume":"27","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3652-8897","authenticated-orcid":false,"given":"Chao","family":"Wu","sequence":"first","affiliation":[{"name":"Engineering Training Center, Nanjing Vocational University of Industry Technology, Nanjing 210023, China"}]},{"given":"Yao","family":"Ren","sequence":"additional","affiliation":[{"name":"Academy of Art and Design, Anhui University of Technology, Ma\u2019anshan 243002, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-5279-8761","authenticated-orcid":false,"given":"Yuying","family":"Zhou","sequence":"additional","affiliation":[{"name":"Academy of Art and Design, Anhui University of Technology, Ma\u2019anshan 243002, China"}]},{"given":"Ming","family":"Lou","sequence":"additional","affiliation":[{"name":"Academy of Art and Design, Anhui University of Technology, Ma\u2019anshan 243002, China"}]},{"given":"Qing","family":"Zhang","sequence":"additional","affiliation":[{"name":"Academy of Art and Design, Anhui University of Technology, Ma\u2019anshan 243002, China"}]}],"member":"1968","published-online":{"date-parts":[[2025,7,15]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"3365","DOI":"10.1109\/TVCG.2019.2921336","article-title":"Neural style transfer: A review","volume":"26","author":"Jing","year":"2019","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Gatys, L.A., Ecker, A.S., and Bethge, M. (2016, January 27\u201330). 
Image style transfer using convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.265"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"104084","DOI":"10.1016\/j.cag.2024.104084","article-title":"ST2SI: Image Style Transfer via Vision Transformer using Spatial Interaction","volume":"124","author":"Li","year":"2024","journal-title":"Comput. Graph."},{"key":"ref_4","first-page":"262","article-title":"Texture synthesis using convolutional neural networks","volume":"28","author":"Gatys","year":"2015","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_5","unstructured":"Champandard, A.J. (2016). Semantic style transfer and turning two-bit doodles into fine artworks. arXiv."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Hertzmann, A. (1998, January 19\u201324). Painterly rendering with curved brush strokes of multiple sizes. Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, Orlando, FL, USA.","DOI":"10.1145\/280814.280951"},{"key":"ref_7","unstructured":"Johnson, J., Alahi, A., and Li, F.-F. (2016, January 11\u201314). Perceptual losses for real-time style transfer and super-resolution. Proceedings of the Computer Vision\u2013ECCV 2016: 14th European Conference, Amsterdam, The Netherlands. Proceedings, Part II 14."},{"key":"ref_8","unstructured":"Ulyanov, D., Lebedev, V., Vedaldi, A., and Lempitsky, V. (2016). Texture networks: Feed-forward synthesis of textures and stylized images. arXiv."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Zhang, H., and Dana, K. (2018, January 8\u201314). Multi-style generative network for real-time transfer. Proceedings of the European Conference on Computer Vision (ECCV) Workshops, Munich, Germany.","DOI":"10.1007\/978-3-030-11018-5_32"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Efros, A., and Freeman, W. 
(2001, January 12\u201317). Image Quilting for Texture Synthesis and Transfer. Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Technique, SIGGRAPH, Los Angeles, CA, USA.","DOI":"10.1145\/383259.383296"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Chen, D., Yuan, L., Liao, J., Yu, N., and Hua, G. (2017, January 21\u201326). Stylebank: An explicit representation for neural image style transfer. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.296"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Yin, W., Yin, H., Baraka, K., Kragic, D., and Bj\u00f6rkman, M. (2023, January 2\u20137). Dance style transfer with cross-modal transformer. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA.","DOI":"10.1007\/s00138-023-01399-x"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Tang, H., Liu, S., Lin, T., Huang, S., Li, F., He, D., and Wang, X. (2023, January 17\u201324). Master: Meta style transformer for controllable zero-shot and few-shot artistic style transfer. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.01758"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Zhang, C., Xu, X., Wang, L., Dai, Z., and Yang, J. (2024, January 26\u201327). S2wat: Image style transfer via hierarchical vision transformer using strips window attention. Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA.","DOI":"10.1609\/aaai.v38i7.28529"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Park, D.Y., and Lee, K.H. (2019, January 15\u201320). Arbitrary style transfer with style-attentional networks. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00603"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"An, J., Huang, S., Song, Y., Dou, D., Liu, W., and Luo, J. (2021, January 19\u201325). Artflow: Unbiased image style transfer via reversible neural flows. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00092"},{"key":"ref_17","first-page":"26561","article-title":"Artistic style transfer with internal-external learning and contrastive learning","volume":"34","author":"Chen","year":"2021","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Tang, F., Dong, W., Huang, H., Ma, C., Lee, T.Y., and Xu, C. (2022, January 7\u201311). Domain enhanced arbitrary image style transfer via contrastive learning. Proceedings of the ACM SIGGRAPH 2022 Conference Proceedings, Vancouver, BC, Canada.","DOI":"10.1145\/3528233.3530736"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Nguyen, V., Yago Vicente, T.F., Zhao, M., Hoai, M., and Samaras, D. (2017, January 22\u201329). Shadow detection with conditional generative adversarial networks. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.483"},{"key":"ref_20","unstructured":"Huang, S., Xiong, H., Wang, T., Wen, B., Wang, Q., Chen, Z., Huan, J., and Dou, D. (2020). Parameter-free style projection for arbitrary style transfer. arXiv."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"891","DOI":"10.1109\/LSP.2025.3540700","article-title":"KAN see in the dark","volume":"32","author":"Ning","year":"2025","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Pan, B., and Ke, Y. (2023, January 1\u20133). 
Efficient artistic image style transfer with large language model (LLM): A new perspective. Proceedings of the 2023 8th International Conference on Communication and Electronics Systems (ICCES), Coimbatore, India.","DOI":"10.1109\/ICCES57224.2023.10192799"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Huang, N., Tang, F., Huang, H., Ma, C., Dong, W., and Xu, C. (2023, January 18\u201322). Inversion-based style transfer with diffusion models. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00978"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"108723","DOI":"10.1016\/j.compeleceng.2023.108723","article-title":"Image neural style transfer: A review","volume":"108","author":"Cai","year":"2023","journal-title":"Comput. Electr. Eng."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Liu, K., Zhan, F., Chen, Y., Zhang, J., Yu, Y., El Saddik, A., Lu, S., and Xing, E.P. (2023, January 17\u201324). Stylerf: Zero-shot 3d style transfer of neural radiance fields. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00806"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Woodland, M., Wood, J., Anderson, B.M., Kundu, S., Lin, E., Koay, E., Odisio, B., Chung, C., Kang, H.C., and Venkatesan, A.M. (2022, January 18). Evaluating the performance of StyleGAN2-ADA on medical images. Proceedings of the International Workshop on Simulation and Synthesis in Medical Imaging, Singapore.","DOI":"10.1007\/978-3-031-16980-9_14"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Zhang, Y., He, Z., Xing, J., Yao, X., and Jia, J. (2023, January 17\u201324). Ref-npr: Reference-based non-photorealistic radiance fields for controllable scene stylization. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00413"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"454","DOI":"10.21744\/lingcure.v5nS2.1383","article-title":"The imagery and abstraction trend of Chinese contemporary oil painting","volume":"5","author":"Yang","year":"2021","journal-title":"Linguist. Cult. Rev."},{"key":"ref_29","first-page":"8","article-title":"Analysis on the Collision and Fusion of Eastern and Western Paintings in the Context of Globalization","volume":"7","author":"Liu","year":"2021","journal-title":"Thought"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Fan, Z., Zhu, Y., Yan, C., Li, Y., and Zhang, K. (2022, January 16\u201318). A comparative study of color between abstract paintings, oil paintings and Chinese ink paintings. Proceedings of the 15th International Symposium on Visual Information Communication and Interaction, Chur, Switzerland.","DOI":"10.1145\/3554944.3554951"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"022005","DOI":"10.1088\/1742-6596\/1915\/2\/022005","article-title":"Research on oil painting creation based on Computer Technology","volume":"1915","author":"Liu","year":"2021","journal-title":"J. Phys. Conf. Ser."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Wen, X., and White, P. (2020). The role of landscape art in cultural and national identity: Chinese and European comparisons. Sustainability, 12.","DOI":"10.3390\/su12135472"},{"key":"ref_33","first-page":"465","article-title":"The Developing Process of Ideological Trend of the Nationalization in Chinese Oil Painting","volume":"6","author":"Hongxian","year":"2024","journal-title":"Asian J. Res. Educ. Soc. 
Sci."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"77","DOI":"10.1186\/s40494-024-01195-4","article-title":"Impressions of Guangzhou city in Qing dynasty export paintings in the context of trade economy: A color analysis of paintings based on k-means clustering algorithm","volume":"12","author":"Ao","year":"2024","journal-title":"Herit. Sci."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Deng, Y., Tang, F., Dong, W., Ma, C., Pan, X., Wang, L., and Xu, C. (2022, January 18\u201324). Stytr2: Image style transfer with transformers. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01104"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Wang, P., Li, Y., and Vasconcelos, N. (2021, January 19\u201325). Rethinking and improving the robustness of image style transfer. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00019"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Feng, L., Geng, G., Ren, Y., Li, Z., Liu, Y., and Li, K. (2024, January 14\u201319). CReStyler: Text-Guided Single Image Style Transfer Method Based on CNN and Restormer. Proceedings of the ICASSP 2024\u20142024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Seoul, Republic of Korea.","DOI":"10.1109\/ICASSP48485.2024.10446192"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Liu, S., Lin, T., He, D., Li, F., Wang, M., Li, X., Sun, Z., Li, Q., and Ding, E. (2021, January 11\u201317). Adaattn: Revisit attention mechanism in arbitrary neural style transfer. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, BC, Canada.","DOI":"10.1109\/ICCV48922.2021.00658"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Park, J., and Kim, Y. (2022, January 18\u201324). 
Styleformer: Transformer based generative adversarial networks with style vector. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00878"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 11\u201317). Swin transformer: Hierarchical vision transformer using shifted windows. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_41","unstructured":"Oord, A.v.d., Li, Y., and Vinyals, O. (2018). Representation learning with contrastive predictive coding. arXiv."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Ghiasi, G., Lee, H., Kudlur, M., Dumoulin, V., and Shlens, J. (2017). Exploring the structure of a real-time, arbitrary neural artistic stylization network. arXiv.","DOI":"10.5244\/C.31.114"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Chung, J., Hyun, S., and Heo, J.P. (2024, January 17\u201318). Style injection in diffusion: A training-free approach for adapting large-scale diffusion models for style transfer. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR52733.2024.00840"}],"container-title":["Entropy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1099-4300\/27\/7\/754\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,9]],"date-time":"2025-10-09T18:10:01Z","timestamp":1760033401000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1099-4300\/27\/7\/754"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,15]]},"references-count":43,"journal-issue":{"issue":"7","published-online":{"date-parts":[[2025,7]]}},"alternative-id":["e27070754"],"URL":"https:\/\/doi.org\/10.3390\/e27070754","relation":{},"ISSN":["1099-4300"],"issn-type":[{"value":"1099-4300","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,7,15]]}}}