{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,13]],"date-time":"2025-11-13T13:14:41Z","timestamp":1763039681377,"version":"3.45.0"},"reference-count":88,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2025,11,12]],"date-time":"2025-11-12T00:00:00Z","timestamp":1762905600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Centre of Excellence in Informatics and ICT","award":["BG16RFPR002-1.014-0018-C01"],"award-info":[{"award-number":["BG16RFPR002-1.014-0018-C01"]}]},{"name":"Research, Innovation and Digitalization for Smart Transformation Programme"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Future Internet"],"abstract":"<jats:p>Generative models can generate art within a single modality with high fidelity. However, translating a work of art from one domain to another (e.g., painting to music or poem to painting) in a meaningful way remains a longstanding, interdisciplinary challenge. We propose a novel approach combining a multi-agent system (MAS) architecture with an ontology-guided semantic representation to achieve cross-domain art translation while preserving the original artwork\u2019s meaning and emotional impact. In our concept, specialized agents decompose the task: a Perception Agent extracts symbolic descriptors from the source artwork, a Translation Agent maps these descriptors using a shared knowledge base, a Generator Agent creates the target-modality artwork, and a Curator Agent evaluates and refines the output for coherence and style alignment. This modular design, inspired by human creative workflows, allows complex artistic concepts (themes, moods, motifs) to carry over across modalities in a consistent and interpretable way. 
We implemented a prototype supporting translations between painting and poetry, leveraging state-of-the-art generative models. Preliminary results indicate that our ontology-driven MAS produces cross-domain translations that preserve key semantic elements and affective tone of the input, offering a new path toward explainable and controllable creative AI. Finally, we discuss a case study and potential applications from educational tools to synesthetic VR experiences and outline future research directions for enhancing the realm of intelligent agents.<\/jats:p>","DOI":"10.3390\/fi17110517","type":"journal-article","created":{"date-parts":[[2025,11,13]],"date-time":"2025-11-13T12:57:13Z","timestamp":1763038633000},"page":"517","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Ontology-Driven Multi-Agent System for Cross-Domain Art Translation"],"prefix":"10.3390","volume":"17","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0634-9348","authenticated-orcid":false,"given":"Viktor","family":"Matanski","sequence":"first","affiliation":[{"name":"Department of Computer Technologies, University of Plovdiv \u201cPaisii Hilendarski\u201d, 4000 Plovdiv, Bulgaria"},{"name":"Centre of Excellence in Informatics and Information and Communication Technologies, 1113 Sofia, Bulgaria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9796-8453","authenticated-orcid":false,"given":"Anton","family":"Iliev","sequence":"additional","affiliation":[{"name":"Department of Computer Technologies, University of Plovdiv \u201cPaisii Hilendarski\u201d, 4000 Plovdiv, Bulgaria"},{"name":"Centre of Excellence in Informatics and Information and Communication Technologies, 1113 Sofia, Bulgaria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0650-3285","authenticated-orcid":false,"given":"Nikolay","family":"Kyurkchiev","sequence":"additional","affiliation":[{"name":"Department of Computer Technologies, University of Plovdiv 
\u201cPaisii Hilendarski\u201d, 4000 Plovdiv, Bulgaria"},{"name":"Centre of Excellence in Informatics and Information and Communication Technologies, 1113 Sofia, Bulgaria"},{"name":"Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, Acad. G. Bonchev Str., Bl. 8, 1113 Sofia, Bulgaria"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2925-8534","authenticated-orcid":false,"given":"Todorka","family":"Terzieva","sequence":"additional","affiliation":[{"name":"Department of Computer Technologies, University of Plovdiv \u201cPaisii Hilendarski\u201d, 4000 Plovdiv, Bulgaria"}]}],"member":"1968","published-online":{"date-parts":[[2025,11,12]]},"reference":[{"key":"ref_1","unstructured":"Venkatesh, K., Dunlop, C., and Yanardag, P. (2025). CREA: A Collaborative Multi-Agent Framework for Creative Content Generation with Diffusion Models. arXiv."},{"key":"ref_2","unstructured":"Niu, B., Song, Y., Lian, K., Shen, Y., Yao, Y., Zhang, K., and Liu, T. (2025). Flow: Modularized Agentic Workflow Automation. arXiv."},{"key":"ref_3","unstructured":"Asgar, Z., Nguyen, M., and Katti, S. (2025). Efficient and Scalable Agentic AI with Heterogeneous Systems. arXiv."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"103599","DOI":"10.1016\/j.inffus.2025.103599","article-title":"AI Agents vs. Agentic AI: A Conceptual Taxonomy, Applications and Challenges","volume":"126","author":"Sapkota","year":"2026","journal-title":"Inf. Fusion"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1080\/14626268.2025.2491471","article-title":"Spectrum of Creative Agencies in AI-Based Art: Analysis of Art Reviews","volume":"36","author":"Loivaranta","year":"2025","journal-title":"Digit. Creat."},{"key":"ref_6","unstructured":"Zeng, P., Jiang, M., Wang, Z., Li, J., Yin, J., and Lu, S. (2024, January 22\u201323). CARD: Cross-Modal Agent Framework for Generative and Editable Residential Design. 
Proceedings of the NeurIPS 2024 Workshop on Open-World Agents (OWA), Vancouver, BC, Canada. Available online: https:\/\/openreview.net\/forum?id=cYQPfdMJHQ."},{"key":"ref_7","unstructured":"Yang, Y., Ma, M., Huang, Y., Chai, H., Gong, C., Geng, H., Zhou, Y., Wen, Y., Fang, M., and Chen, M. (2025). Agentic Web: Weaving the Next Web with AI Agents. arXiv."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Jabbar, M.S., Shin, J., and Cho, J.-D. (2022). AI Ekphrasis: Multi-Modal Learning with Foundation Models for Fine-Grained Poetry Retrieval. Electronics, 11.","DOI":"10.3390\/electronics11081275"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Cho, J.D. (2021). A Study of Multi-Sensory Experience and Color Recognition in Visual Arts Appreciation of People with Visual Impairment. Electronics, 10.","DOI":"10.3390\/electronics10040470"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Lee, K.-Y. (2021, January 21\u201323). The Technological Revolution of the Coloured Organ in Alexander Scriabin\u2019s Fifth Symphony, Prometheus, Poem of Fire. Proceedings of the 2nd International Conference on Language, Art and Cultural Exchange (ICLACE 2021), Dali, China.","DOI":"10.2991\/assehr.k.210609.022"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Avlonitou, C., and Papadaki, E. (2025). AI: An Active and Innovative Tool for Artistic Creation. Arts, 14.","DOI":"10.3390\/arts14030052"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"971","DOI":"10.3758\/s13414-010-0073-7","article-title":"Crossmodal Correspondences: A Tutorial Review","volume":"73","author":"Spence","year":"2011","journal-title":"Atten. Percept. 
Psychophys."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"264","DOI":"10.1016\/S0010-9452(08)70352-6","article-title":"Sound\u2013Colour Synaesthesia: To What Extent Does It Use Cross-Modal Mechanisms Common to Us All?","volume":"42","author":"Tsakanikos","year":"2006","journal-title":"Cortex"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"e30","DOI":"10.23915\/distill.00030","article-title":"Multimodal Neurons in Artificial Neural Networks","volume":"6","author":"Goh","year":"2021","journal-title":"Distill"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"406","DOI":"10.1162\/leon_a_01886","article-title":"Perception as Media: Reconsidering the Arts and Neurotechnology","volume":"54","author":"Rowland","year":"2021","journal-title":"Leonardo"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"93","DOI":"10.1145\/3130958","article-title":"Eyes-Free Art: Exploring Proxemic Audio Interfaces for Blind and Low Vision Art Engagement","volume":"1","author":"Rector","year":"2017","journal-title":"Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Murray, C.A., and Shams, L. (2023). Crossmodal Interactions in Human Learning and Memory. Front. Hum. Neurosci., 17.","DOI":"10.3389\/fnhum.2023.1181760"},{"key":"ref_18","unstructured":"Ramesh, A., Pavlov, M., Goh, G., Gray, S., Voss, C., Radford, A., Chen, M., and Sutskever, I. (2021). Zero-Shot Text-to-Image Generation. arXiv."},{"key":"ref_19","unstructured":"Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., and Clark, J. (2021). Learning Transferable Visual Models from Natural Language Supervision. arXiv."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Wang, J., Zhang, O., and Jiang, Y. (2025). Multimodal Diffusion Framework for Collaborative Text\u2013Image\u2013Audio Generation and Applications. Sci. 
Rep., 15.","DOI":"10.1038\/s41598-025-05794-4"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. (2022, January 18\u201324). High-Resolution Image Synthesis with Latent Diffusion Models. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01042"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"8","DOI":"10.1186\/s13636-025-00397-3","article-title":"AI-Based Chinese-Style Music Generation from Video Content: A Study on Cross-Modal Analysis and Generation Methods","volume":"2025","author":"Cao","year":"2025","journal-title":"EURASIP J. Audio Speech Music Process."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"18079","DOI":"10.1109\/ACCESS.2025.3531798","article-title":"Music Generation Using Deep Learning and Generative AI: A Systematic Review","volume":"13","author":"Mitra","year":"2025","journal-title":"IEEE Access"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"173","DOI":"10.1007\/978-3-031-92808-6_11","article-title":"Art2Mus: Bridging Visual Arts and Music through Cross-Modal Generation","volume":"Volume 15627","author":"Canton","year":"2025","journal-title":"Computer Vision\u2014ECCV 2024 Workshops"},{"key":"ref_25","unstructured":"Dzwonczyk, L., Cella, C.E., and Ban, D. (2024, January 2\u20136). Network Bending of Diffusion Models for Audio-Visual Generation. Proceedings of the 27th International Conference on Digital Audio Effects (DAFx24), Guildford, UK. Available online: https:\/\/www.dafx.de\/paper-archive\/2024\/papers\/DAFx24_paper_24.pdf."},{"key":"ref_26","unstructured":"Lee, C.-C., Lin, W.-Y., and Shih, Y.-T. (2020, January 12\u201316). Cross-Modal Style Transfer from Music to Visual Arts. 
Proceedings of the 28th ACM International Conference on Multimedia (MM \u201920), Seattle, WA, USA."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Huang, S., An, J., Wei, D., Luo, J., and Pfister, H. (2023, January 17\u201324). QuantArt: Quantizing Image Style Transfer Towards High Visual Fidelity. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023), Vancouver, BC, Canada. Available online: https:\/\/openaccess.thecvf.com\/content\/CVPR2023\/papers\/Huang_QuantArt_Quantizing_Image_Style_Transfer_Towards_High_Visual_Fidelity_CVPR_2023_paper.pdf.","DOI":"10.1109\/CVPR52729.2023.00576"},{"key":"ref_28","unstructured":"Jamil, S., Reddy, B.A., Kumar, R., Saha, S., Joseph, K.J., and Goswami, K. (2025). Poetry in Pixels: Prompt Tuning for Poem Image Generation via Diffusion Models. arXiv."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"685","DOI":"10.1049\/cit2.12089","article-title":"Images2Poem in Different Contexts with Dual-CharRNN","volume":"7","author":"Yan","year":"2022","journal-title":"CAAI Trans. Intell. Technol."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Mart\u00edn, F., Rodr\u00edguez Lera, F.J., Gin\u00e9s, J., and Matell\u00e1n, V. (2020). Evolution of a Cognitive Architecture for Social Robots: Integrating Behaviors and Symbolic Knowledge. Appl. Sci., 10.","DOI":"10.3390\/app10176067"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"1259","DOI":"10.1057\/s41599-024-03611-3","article-title":"Large Language Models Empowered Agent-Based Modeling and Simulation: A Survey and Perspectives","volume":"11","author":"Gao","year":"2024","journal-title":"Humanit. Soc. Sci. Commun."},{"key":"ref_32","unstructured":"Park, J.S., O\u2019Brien, J.C., Cai, C.J., Morris, M.R., Liang, P., and Bernstein, M.S. (November, January 29). Generative Agents: Interactive Simulacra of Human Behavior. 
Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST \u201923), San Francisco, CA, USA. Article 2."},{"key":"ref_33","unstructured":"Shen, Y., Song, K., Tan, X., Li, D., Lu, W., and Zhuang, Y. (2023, January 10\u201316). HuggingGPT: Solving AI Tasks with ChatGPT and Its Friends in Hugging Face. Proceedings of the Advances in Neural Information Processing Systems 36 (NeurIPS 2023), New Orleans, LA, USA. Available online: https:\/\/proceedings.neurips.cc\/paper_files\/paper\/2023\/file\/77c33e6a367922d003ff102ffb92b658-Paper-Conference.pdf."},{"key":"ref_34","unstructured":"Talebirad, Y., and Nadiri, A. (2023). Multi-Agent Collaboration: Harnessing the Power of Intelligent LLM Agents. arXiv."},{"key":"ref_35","unstructured":"Chen, S., Liu, Y., Han, W., Zhang, W., and Liu, T. (2024). A Survey on LLM-Based Multi-Agent System: Recent Advances and New Frontiers in Application. arXiv."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Kannan, S.S., Venkatesh, V.L.N., and Min, B.-C. (2024, January 13\u201317). SMART-LLM: Smart Multi-Agent Robot Task Planning Using Large Language Models. Proceedings of the 2024 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS 2024), Abu Dhabi, United Arab Emirates.","DOI":"10.1109\/IROS58592.2024.10802322"},{"key":"ref_37","unstructured":"Feng, Z., Xue, R., Yuan, L., Yu, Y., Ding, N., Liu, M., Gao, B., Sun, J., Zheng, X., and Wang, G. (2025). Multi-Agent Embodied AI: Advances and Future Directions. arXiv."},{"key":"ref_38","first-page":"48","article-title":"Multi-Agent Systems for Collaborative Art Creation","volume":"7","author":"Luo","year":"2025","journal-title":"Front. Art Res."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Imasato, N., Miyazawa, K., Nagai, T., and Horii, T. (2024). Creative Agents: Simulating the Systems Model of Creativity with Generative Agents. 
arXiv.","DOI":"10.1109\/ACCESS.2025.3606498"},{"key":"ref_40","unstructured":"Tian, J., Sobczak, M.T., Patil, D., Hou, J., Pang, L., Ramanathan, A., Yang, L., Chen, X., Golan, Y., and Zhai, X. (2025). A Multi-Agent Framework Integrating Large Language Models and Generative AI for Accelerated Metamaterial Design. arXiv."},{"key":"ref_41","unstructured":"Luo, G., Dou, W., Li, W., Wang, Z., Yang, X., Tian, C., Li, H., Wang, W., Wang, W., and Zhu, X. (2025). Mono-InternVL-1.5: Towards Cheaper and Faster Monolithic Multimodal Large Language Models. arXiv."},{"key":"ref_42","unstructured":"Du, Y., and Kaelbling, L.P. (2024). Compositional Generative Modeling: A Single Model Is Not All You Need. arXiv."},{"key":"ref_43","unstructured":"Zhang, Z., Zhang, A., Li, M., Zhao, H., Karypis, G., and Smola, A. (2023). Multimodal Chain-of-Thought Reasoning in Language Models. arXiv."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Borghoff, U.M., Bottoni, P., and Pareschi, R. (2025). Beyond Prompt Chaining: The TB-CSPN Architecture for Agentic AI. Future Internet, 17.","DOI":"10.20944\/preprints202507.1294.v1"},{"key":"ref_45","unstructured":"Getty Research Institute (2025, September 24). The Getty Art & Architecture Thesaurus (AAT). Getty Vocabularies, Linked Open Data. Available online: https:\/\/www.getty.edu\/research\/tools\/vocabularies\/aat\/."},{"key":"ref_46","unstructured":"Jiang, Y., Ehinger, K.A., and Lau, J.H. (2024, January 3\u20139). KALE: An Artwork Image Captioning System Augmented with Heterogeneous Graph. Proceedings of the 33rd International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju Island, Republic of Korea."},{"key":"ref_47","unstructured":"Raimond, Y., Abdallah, S., Sandler, M., and Giasson, F. (2007, January 23\u201327). The Music Ontology. Proceedings of the 8th International Society for Music Information Retrieval Conference (ISMIR 2007), Vienna, Austria. 
Available online: https:\/\/archives.ismir.net\/ismir2007\/paper\/000417.pdf."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"114","DOI":"10.1080\/13556509.2024.2428050","article-title":"Thinking Multimodal Translation through Relational Ontology","volume":"31","author":"Kokkola","year":"2025","journal-title":"Translator"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"77","DOI":"10.1080\/02666286.2024.2330335","article-title":"Operative Ekphrasis: The Collapse of the Text\/Image Distinction in Multimodal AI","volume":"40","author":"Bajohr","year":"2024","journal-title":"Word Image"},{"key":"ref_50","first-page":"59","article-title":"Ekphrasis and Prompt Engineering: A Comparison in the Era of Generative AI","volume":"52","author":"Verdicchio","year":"2024","journal-title":"Studi Estet."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"729","DOI":"10.1007\/978-3-030-30645-8_66","article-title":"Artpedia: A New Visual\u2013Semantic Dataset with Visual and Contextual Sentences in the Artistic Domain","volume":"Volume 11752","author":"Battiato","year":"2019","journal-title":"Proceedings of the Image Analysis and Processing\u2014ICIAP 2019"},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"676","DOI":"10.1007\/978-3-030-11012-3_52","article-title":"How to Read Paintings: Semantic Art Understanding with Multi-Modal Retrieval","volume":"Volume 11130","author":"Garcia","year":"2018","journal-title":"Proceedings of the Computer Vision\u2014ECCV 2018 Workshops"},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Tabaza, A., Quishawi, O., Yaghi, A., and Qawasmeh, O. (2024, January 25\u201326). Binding Text, Images, Graphs and Audio for Music Representation Learning. 
Proceedings of the Cognitive Models and Artificial Intelligence Conference (AICCONF 2024), Istanbul, Turkiye.","DOI":"10.1145\/3660853.3660886"},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1145\/2757001.2757003","article-title":"The Prot\u00e9g\u00e9 Project: A Look Back and a Look Forward","volume":"1","author":"Musen","year":"2015","journal-title":"AI Matters"},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"0076","DOI":"10.34133\/icomputing.0076","article-title":"Affective Computing: Recent Advances, Challenges, and Future Trends","volume":"3","author":"Pei","year":"2024","journal-title":"Intell. Comput."},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"100862","DOI":"10.1016\/j.websem.2025.100862","article-title":"Accelerating Knowledge Graph and Ontology Engineering with Large Language Models","volume":"85","author":"Shimizu","year":"2025","journal-title":"J. Web Semant."},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"35","DOI":"10.1016\/j.websem.2013.05.001","article-title":"Key Choices in the Design of Simple Knowledge Organization System (SKOS)","volume":"20","author":"Baker","year":"2013","journal-title":"J. Web Semant."},{"key":"ref_58","unstructured":"Crofts, N., Doerr, M., Gill, T., Stead, S., and Stiff, M. (2025, September 24). CIDOC Conceptual Reference Model (CRM). International Committee for Documentation (CIDOC). Available online: https:\/\/cidoc-crm.org\/."},{"key":"ref_59","unstructured":"Strapparava, C., and Valitutti, A. (2004, January 26\u201328). WordNet-Affect: An Affective Extension of WordNet. Proceedings of the Fourth International Conference on Language Resources and Evaluation (LREC 2004), Lisbon, Portugal. Available online: https:\/\/aclanthology.org\/L04-1208\/."},{"key":"ref_60","unstructured":"Burkhardt, F., and Schr\u00f6der, M. (2025, September 24). Emotion Markup Language (EmotionML) 1.0. W3C Recommendation, 22 May 2014. 
Available online: https:\/\/www.w3.org\/TR\/emotionml\/."},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Kolev, I. (2023). Defining Art as Phenomenal Being. Arts, 12.","DOI":"10.3390\/arts12030100"},{"key":"ref_62","unstructured":"Guo, T., Chen, X., Wang, Y., Chang, R., Pei, S., Chawla, N.V., Wiest, O., and Zhang, X. (2024, January 3\u20139). Large Language Model Based Multi-Agents: A Survey of Progress and Challenges. Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence (IJCAI 2024), Jeju Island, Republic of Korea."},{"key":"ref_63","unstructured":"Li, J., Khandelwal, U., Lu, N., Ramesh, A., Ghosh, S., Zhang, X., Sahai, A., Madotto, A., Choi, Y., and Cao, Z. (2025). AgentBridge: Grounding LLMs to the World with Agents and Tools. arXiv."},{"key":"ref_64","first-page":"21","article-title":"Computational Creativity: The Final Frontier?","volume":"Volume 242","author":"Colton","year":"2012","journal-title":"Proceedings of the ECAI 2012: 20th European Conference on Artificial Intelligence"},{"key":"ref_65","doi-asserted-by":"crossref","unstructured":"Boden, M.A. (2004). The Creative Mind: Myths and Mechanisms, Routledge. [2nd ed.].","DOI":"10.4324\/9780203508527"},{"key":"ref_66","unstructured":"Copet, J., Kreuk, F., Gat, I., Remez, T., Kant, D., Synnaeve, G., Adi, Y., and D\u00e9fossez, A. (2023). Simple and controllable music generation. NIPS \u201923, Proceedings of the 37th International Conference on Neural Information Processing Systems, New Orleans, LA, USA, 10\u201316 December 2023, Curran Associates Inc.. Article No.: 2066."},{"key":"ref_67","unstructured":"Zhou, D., Sch\u00e4rli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., and Le, Q. (2023, January 1\u20135). Least-to-Most Prompting Enables Complex Reasoning in Large Language Models. Proceedings of the 11th International Conference on Learning Representations (ICLR 2023), Kigali, Rwanda. 
Available online: https:\/\/openreview.net\/forum?id=WZH7099tgGJ."},{"key":"ref_68","unstructured":"Keazim Issinov Fine Art (2025, September 24). \u201cTreasures of Earth\u201d (\u201c\u0411o\u0433\u0430\u0442\u0441\u0442\u0432\u0430\u0442\u0430 \u043d\u0430 \u0437\u0435\u043c\u044f\u0442\u0430\u201d)\u2014Catalog Page. Available online: https:\/\/keazimissinov.com\/bg-catalog-details-9.html."},{"key":"ref_69","unstructured":"Tolkien, J.R.R., and Swann, D. (1967). The Road Goes Ever on: A Song Cycle, Houghton Mifflin. [1st ed.]."},{"key":"ref_70","unstructured":"Mayer, R.E. (2009). Multimedia Learning, Cambridge University Press. [2nd ed.]."},{"key":"ref_71","doi-asserted-by":"crossref","first-page":"413","DOI":"10.3102\/00346543211052329","article-title":"Multimedia Design for Learning: An Overview of Reviews with Meta-Meta-Analysis","volume":"92","author":"Noetel","year":"2022","journal-title":"Rev. Educ. Res."},{"key":"ref_72","first-page":"1","article-title":"Generative Visual AI in News Organizations: Challenges, Opportunities, Perceptions and Policies","volume":"17","author":"Thomson","year":"2024","journal-title":"Digit. J."},{"key":"ref_73","first-page":"12","article-title":"The Art of Positive Emotions: Expressing Positive Emotions Within the Intersubjective Art Making Process (L\u2019art des \u00e9motions positives: Exprimer des \u00e9motions positives \u00e0 travers le processus artistique intersubjectif)","volume":"28","author":"Chilton","year":"2015","journal-title":"Can. Art Ther. Assoc. J."},{"key":"ref_74","doi-asserted-by":"crossref","unstructured":"Azofeifa, J.D., Noguez, J., Ruiz, S., Molina-Espinosa, J.M., Magana, A.J., and Benes, B. (2022). Systematic Review of Multimodal Human\u2013Computer Interaction. Informatics, 9.","DOI":"10.3390\/informatics9010013"},{"key":"ref_75","doi-asserted-by":"crossref","unstructured":"Kazashka, T., Madanska, S., Tabakova-Komsalova, V., Djeneva, D., and Nedelchev, I. (2024, January 26\u201328). 
Development of an Ontology of Bulgarian Dance Folklore. Proceedings of the 14th International Conference on Digital Presentation and Preservation of Cultural and Scientific Heritage (DiPP 2024), Burgas, Bulgaria.","DOI":"10.55630\/dipp.2024.14.25"},{"key":"ref_76","doi-asserted-by":"crossref","unstructured":"Pareschi, E. (2024). Centaur Art: The Future of Art in the Age of Generative AI, Springer. Chapter 5, The Power of Language.","DOI":"10.1007\/978-3-031-69063-1"},{"key":"ref_77","doi-asserted-by":"crossref","unstructured":"Goodman, N. (1976). Languages of Art: An Approach to a Theory of Symbols, Hackett Publishing.","DOI":"10.5040\/9781350928541"},{"key":"ref_78","doi-asserted-by":"crossref","unstructured":"Kyurkchiev, N., Zaevski, T., Iliev, A., Kyurkchiev, V., and Rahnev, A. (2024). Generating chaos in dynamical systems: Applications, symmetry results, and stimulating examples. Symmetry, 16.","DOI":"10.3390\/sym16080938"},{"key":"ref_79","unstructured":"Keats, J. (1820). Ode on a Grecian Urn. Lamia, Isabella, The Eve of St. Agnes, and Other Poems, Taylor and Hessey."},{"key":"ref_80","unstructured":"Auden, W.H. (1940). Mus\u00e9e des Beaux Arts. Another Time, Faber & Faber."},{"key":"ref_81","unstructured":"Rilke, R.M. (1908). Archaic Torso of Apollo. New Poems, Insel Verlag."},{"key":"ref_82","unstructured":"Blake, W. (1827). Illustrations to Dante\u2019s Divine Comedy, The British Museum."},{"key":"ref_83","unstructured":"Chagall, M. (1975). The Song of Songs Lithographs, Mourlot Editions."},{"key":"ref_84","unstructured":"Mussorgsky, M. (1874). Pictures at an Exhibition."},{"key":"ref_85","unstructured":"Kandinsky, W. (1911). Concerning the Spiritual in Art, Piper Verlag."},{"key":"ref_86","doi-asserted-by":"crossref","unstructured":"Pareschi, R. (2024). Beyond Human and Machine: An Architecture and Methodology Guideline for Centaurian Design. 
Sci, 6.","DOI":"10.3390\/sci6040071"},{"key":"ref_87","doi-asserted-by":"crossref","unstructured":"Borghoff, U.M., Bottoni, P., and Pareschi, R. (2025). Human-Artificial Interaction in the Age of Agentic AI: A System-Theoretical Approach. Front. Hum. Dyn., 7.","DOI":"10.3389\/fhumd.2025.1579166"},{"key":"ref_88","doi-asserted-by":"crossref","unstructured":"Saghafian, S., and Idan, L. (2024). Effective Generative AI: The Human-Algorithm Centaur. arXiv.","DOI":"10.2139\/ssrn.4594780"}],"container-title":["Future Internet"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1999-5903\/17\/11\/517\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,13]],"date-time":"2025-11-13T13:11:23Z","timestamp":1763039483000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1999-5903\/17\/11\/517"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,11,12]]},"references-count":88,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2025,11]]}},"alternative-id":["fi17110517"],"URL":"https:\/\/doi.org\/10.3390\/fi17110517","relation":{},"ISSN":["1999-5903"],"issn-type":[{"value":"1999-5903","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,11,12]]}}}