{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,15]],"date-time":"2026-04-15T10:52:20Z","timestamp":1776250340063,"version":"3.50.1"},"reference-count":67,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2025,1,15]],"date-time":"2025-01-15T00:00:00Z","timestamp":1736899200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,1,15]],"date-time":"2025-01-15T00:00:00Z","timestamp":1736899200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001866","name":"Fonds National de la Recherche Luxembourg","doi-asserted-by":"publisher","award":["PEARL 16544475"],"award-info":[{"award-number":["PEARL 16544475"]}],"id":[{"id":"10.13039\/501100001866","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100011033","name":"Agencia Estatal de Investigaci\u00f3n","doi-asserted-by":"publisher","award":["PID2020-114615RB-I00\/AEI\/10.13039\/501100011033"],"award-info":[{"award-number":["PID2020-114615RB-I00\/AEI\/10.13039\/501100011033"]}],"id":[{"id":"10.13039\/501100011033","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100011688","name":"Electronic Components and Systems for European Leadership","doi-asserted-by":"publisher","award":["101007260"],"award-info":[{"award-number":["101007260"]}],"id":[{"id":"10.13039\/501100011688","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100011688","name":"Electronic Components and Systems for European Leadership","doi-asserted-by":"publisher","award":["101007350"],"award-info":[{"award-number":["101007350"]}],"id":[{"id":"10.13039\/501100011688","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Softw Syst 
Model"],"published-print":{"date-parts":[[2025,12]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>\n                    Generative artificial intelligence (AI) systems are capable of synthesizing complex artifacts such as text, source code or images according to the instructions provided in a natural language prompt. The quality of the input prompt, in terms of both content and structure, has a large impact on the quality of the output. This has given rise to\n                    <jats:italic>prompt engineering<\/jats:italic>\n                    , the process of designing natural language prompts to best take advantage of the capabilities of generative AI systems. This paper describes Impromptu, a model-driven engineering framework to support the creation, management and reuse of prompts for generative AI. Impromptu offers a domain-specific language (DSL) to define multimodal prompts in a modular and tool-independent way. The language offers additional features such as versioning, prompt chaining and multi-language support. Moreover, it provides tool support to adapt prompts for specific generative AI systems, execute those prompts on a generative AI system and validate the quality of the response that is generated. 
Impromptu is available as a Langium-based Visual Studio Code plugin.\n                  <\/jats:p>","DOI":"10.1007\/s10270-024-01235-4","type":"journal-article","created":{"date-parts":[[2025,1,15]],"date-time":"2025-01-15T03:26:39Z","timestamp":1736911599000},"page":"1627-1645","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":10,"title":["Impromptu: a framework for model-driven prompt engineering"],"prefix":"10.1007","volume":"24","author":[{"given":"Sergio","family":"Morales","sequence":"first","affiliation":[]},{"given":"Robert","family":"Claris\u00f3","sequence":"additional","affiliation":[]},{"given":"Jordi","family":"Cabot","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,1,15]]},"reference":[{"key":"1235_CR1","unstructured":"Mullen, A., Greene, N., Stewart, B., Halpern, M., Barot, S.: Top Strategic Technology Trends for 2022: Generative AI. Gartner report (2021)"},{"key":"1235_CR2","unstructured":"Wiles, J.: Beyond ChatGPT: the future of generative AI for enterprises. https:\/\/www.gartner.com\/en\/articles\/beyond-chatgpt-the-future-of-generative-ai-for-enterprises"},{"key":"1235_CR3","unstructured":"O\u2019Grady, M., Gualtieri, M.: Global AI software forecast. Forrester Report (2022)"},{"key":"1235_CR4","unstructured":"Brown, T.B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D.M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., Amodei, D.: Language Models are Few-Shot Learners. 
arXiv preprint arXiv:2005.14165 (2020) [cs.CL]"},{"key":"1235_CR5","unstructured":"Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., Neubig, G.: Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. arXiv preprint arXiv:2107.13586 (2021)"},{"key":"1235_CR6","unstructured":"Zhao, W.X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., Du, Y., Yang, C., Chen, Y., Chen, Z., Jiang, J., Ren, R., Li, Y., Tang, X., Liu, Z., Liu, P., Nie, J.-Y., Wen, J.-R.: A survey of large language models. arXiv preprint arXiv:2303.18223 (2023) [cs.CL]"},{"key":"1235_CR7","unstructured":"Ye, J., Chen, X., Xu, N., Zu, C., Shao, Z., Liu, S., Cui, Y., Zhou, Z., Gong, C., Shen, Y., Zhou, J., Chen, S., Gui, T., Zhang, Q., Huang, X.: A comprehensive capability analysis of GPT-3 and GPT-3.5 series models. arXiv preprint arXiv:2303.10420 (2023) [cs.CL]"},{"key":"1235_CR8","doi-asserted-by":"crossref","unstructured":"Reynolds, L., McDonell, K.: Prompt programming for large language models: Beyond the few-shot paradigm. In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. CHI EA \u201921. Association for Computing Machinery, New York, NY, USA (2021)","DOI":"10.1145\/3411763.3451760"},{"key":"1235_CR9","unstructured":"Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., Zhou, D.: Chain-of-thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903 (2022)"},{"key":"1235_CR10","unstructured":"Inspired Cognition: Prompt Gym (2023). https:\/\/github.com\/inspired-cognition\/critique-apps\/tree\/main\/prompt-gym"},{"key":"1235_CR11","unstructured":"Witteveen, S., Andrews, M.: Investigating prompt engineering in diffusion models. 
arXiv preprint arXiv:2211.15462 (2022)"},{"key":"1235_CR12","unstructured":"Zhao, T.Z., Wallace, E., Feng, S., Klein, D., Singh, S.: Calibrate before use: Improving few-shot performance of language models. arXiv preprint arXiv:2102.09690 (2021) [cs.CL]"},{"key":"1235_CR13","unstructured":"Petsiuk, V., Siemenn, A.E., Surbehera, S., Chin, Z., Tyser, K., Hunter, G., Raghavan, A., Hicke, Y., Plummer, B.A., Kerret, O., Buonassisi, T., Saenko, K., Solar-Lezama, A., Drori, I.: Human evaluation of text-to-image models on a multi-task benchmark. arXiv preprint arXiv:2211.12112 (2022) [cs.CV]"},{"key":"1235_CR14","unstructured":"Borji, A.: Generated faces in the wild: quantitative comparison of stable diffusion, midjourney and DALL-E 2. arXiv preprint arXiv:2210.00586 (2022) [cs.CV]"},{"key":"1235_CR15","unstructured":"Romero, A.: Stable diffusion 2 is not what users expected \u2013 or wanted (2022). https:\/\/thealgorithmicbridge.substack.com\/p\/stable-diffusion-2-is-not-what-users"},{"key":"1235_CR16","first-page":"46595","volume":"36","author":"L Zheng","year":"2023","unstructured":"Zheng, L., Chiang, W.-L., Sheng, Y., Zhuang, S., Wu, Z., Zhuang, Y., Lin, Z., Li, Z., Li, D., Xing, E., Zhang, H., Gonzalez, J.E., Stoica, I.: Judging LLM-as-a-judge with MT-bench and Chatbot arena. Adv. Neural Inf. Process. Syst. 36, 46595\u201346623 (2023)","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"1235_CR17","unstructured":"Kojima, T., Gu, S.S., Reid, M., Matsuo, Y., Iwasawa, Y.: Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916 (2022)"},{"key":"1235_CR18","unstructured":"DAIR.AI: Prompt engineering guide (2024). https:\/\/www.promptingguide.ai\/"},{"key":"1235_CR19","unstructured":"OpenAI: Best practices for prompt engineering with OpenAI API (2023). https:\/\/help.openai.com\/en\/articles\/6654000-best-practices-for-prompt-engineering-with-openai-api"},{"key":"1235_CR20","unstructured":"Dallery.Gallery: DALL-E 2 prompt book (2022). 
https:\/\/dallery.gallery\/the-dalle-2-prompt-book\/"},{"key":"1235_CR21","doi-asserted-by":"crossref","unstructured":"Liu, V., Chilton, L.B.: Design guidelines for prompt engineering text-to-image generative models. In: CHI Conference on Human Factors in Computing Systems, pp. 1\u201323. ACM, New York, NY, USA (2022)","DOI":"10.1145\/3491102.3501825"},{"key":"1235_CR22","doi-asserted-by":"crossref","unstructured":"Cheng, Z., Kasai, J., Yu, T.: Batch prompting: efficient inference with large language model APIs. arXiv preprint arXiv:2301.08721 (2023)","DOI":"10.18653\/v1\/2023.emnlp-industry.74"},{"key":"1235_CR23","doi-asserted-by":"crossref","unstructured":"Wu, T., Terry, M., Cai, C.J.: AI chains: Transparent and controllable human-AI interaction by chaining large language model prompts. In: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. CHI \u201922. Association for Computing Machinery, New York, NY, USA (2022)","DOI":"10.1145\/3491102.3517582"},{"issue":"1","key":"1235_CR24","first-page":"1146","volume":"29","author":"H Strobelt","year":"2022","unstructured":"Strobelt, H., Webson, A., Sanh, V., Hoover, B., Beyer, J., Pfister, H., Rush, A.M.: Interactive and visual prompt engineering for ad-hoc task adaptation with large language models. IEEE Trans. Vis. Comput. Graph. 29(1), 1146\u20131156 (2022)","journal-title":"IEEE Trans. Vis. Comput. Graph."},{"key":"1235_CR25","unstructured":"PromptArray: A prompting language for neural text generators (2023). 
https:\/\/github.com\/jeffbinder\/promptarray"},{"key":"1235_CR26","doi-asserted-by":"crossref","unstructured":"Bach, S.H., Sanh, V., Yong, Z.-X., Webson, A., Raffel, C., Nayak, N.V., Sharma, A., Kim, T., Bari, M.S., Fevry, T., Alyafeai, Z., Dey, M., Santilli, A., Sun, Z., Ben-David, S., Xu, C., Chhablani, G., Wang, H., Fries, J.A., Al-shaibani, M.S., Sharma, S., Thakker, U., Almubarak, K., Tang, X., Tang, X., Jiang, M.T.-J., Rush, A.M.: PromptSource: an integrated development environment and repository for natural language prompts. arXiv preprint arXiv:2202.01279 (2022) [cs.LG]","DOI":"10.18653\/v1\/2022.acl-demo.9"},{"key":"1235_CR27","unstructured":"Promptable: workspace for prompt engineering (2024). https:\/\/promptable.ai\/"},{"key":"1235_CR28","doi-asserted-by":"crossref","unstructured":"Jiang, E., Olson, K., Toh, E., Molina, A., Donsbach, A., Terry, M., Cai, C.J.: PromptMaker: Prompt-based prototyping with large language models. In: Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems. CHI EA \u201922. Association for Computing Machinery, New York, NY, USA (2022)","DOI":"10.1145\/3491101.3503564"},{"key":"1235_CR29","unstructured":"Promptify (2024). https:\/\/github.com\/promptslab\/Promptify"},{"key":"1235_CR30","unstructured":"PromptBuilder: AI text prompt generator (2024). https:\/\/aitextpromptgenerator.com\/builder"},{"key":"1235_CR31","unstructured":"PromptGen: a tool for AI art generation (2024). https:\/\/promptgen.vercel.app\/"},{"key":"1235_CR32","unstructured":"Promptmakr (2024). https:\/\/promptmakr.com\/"},{"key":"1235_CR33","unstructured":"PromptMaker (2024). https:\/\/promptmaker.com\/"},{"key":"1235_CR34","unstructured":"DiffusionBee (2024). https:\/\/diffusionbee.com\/"},{"key":"1235_CR35","doi-asserted-by":"crossref","unstructured":"Liu, V., Vermeulen, J., Fitzmaurice, G., Matejka, J.: 3DALL-E: Integrating text-to-image AI in 3D design workflows. 
arXiv preprint arXiv:2210.11603 (2022)","DOI":"10.1145\/3563657.3596098"},{"issue":"9","key":"1235_CR36","doi-asserted-by":"publisher","first-page":"2337","DOI":"10.1007\/s11263-022-01653-1","volume":"130","author":"K Zhou","year":"2022","unstructured":"Zhou, K., Yang, J., Loy, C.C., Liu, Z.: Learning to prompt for vision-language models. Int. J. Comput. Vis. 130(9), 2337\u20132348 (2022)","journal-title":"Int. J. Comput. Vis."},{"key":"1235_CR37","doi-asserted-by":"crossref","unstructured":"Eye for AI (2024). https:\/\/eyeforai.xyz\/","DOI":"10.12968\/S1356-9252(24)40068-3"},{"key":"1235_CR38","unstructured":"Oppenlaender, J.: A taxonomy of prompt modifiers for text-to-image generation. arXiv preprint arXiv:2204.13988 (2022)"},{"key":"1235_CR39","unstructured":"White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., Elnashar, A., Spencer-Smith, J., Schmidt, D.C.: A prompt pattern catalog to enhance prompt engineering with ChatGPT. arXiv preprint arXiv:2302.11382 (2023)"},{"key":"1235_CR40","doi-asserted-by":"crossref","unstructured":"Wu, T., Jiang, E., Donsbach, A., Gray, J., Molina, A., Terry, M., Cai, C.J.: PromptChainer: Chaining large language model prompts through visual programming. In: Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems. CHI EA \u201922. Association for Computing Machinery, New York, NY, USA (2022)","DOI":"10.1145\/3491101.3519729"},{"key":"1235_CR41","unstructured":"LangChain (2024). https:\/\/python.langchain.com\/"},{"key":"1235_CR42","unstructured":"Liu, J.: LlamaIndex (2022). https:\/\/github.com\/jerryjliu\/llama_index"},{"key":"1235_CR43","doi-asserted-by":"crossref","unstructured":"Beurer-Kellner, L., Fischer, M., Vechev, M.: Prompting is programming: a query language for large language models. 
arXiv preprint arXiv:2212.06094 (2022)","DOI":"10.1145\/3591300"},{"key":"1235_CR44","doi-asserted-by":"publisher","unstructured":"Arawjo, I., Swoopes, C., Vaithilingam, P., Wattenberg, M., Glassman, E.L.: ChainForge: a visual toolkit for prompt engineering and LLM hypothesis testing. In: Proceedings of the CHI Conference on Human Factors in Computing Systems. CHI \u201924. Association for Computing Machinery, New York, NY, USA (2024). https:\/\/doi.org\/10.1145\/3613904.3642016","DOI":"10.1145\/3613904.3642016"},{"key":"1235_CR45","unstructured":"Ma\u00f1as, O., Astolfi, P., Hall, M., Ross, C., Urbanek, J., Williams, A., Agrawal, A., Romero-Soriano, A., Drozdzal, M.: Improving text-to-image consistency via automatic prompt optimization. arXiv preprint arXiv:2403.17804 (2024)"},{"key":"1235_CR46","unstructured":"PromptPerfect (2024). https:\/\/promptperfect.jina.ai\/"},{"key":"1235_CR47","unstructured":"Promptist (2024). https:\/\/huggingface.co\/spaces\/microsoft\/Promptist"},{"key":"1235_CR48","doi-asserted-by":"crossref","unstructured":"Shin, T., Razeghi, Y., Logan, R.L., Wallace, E., Singh, S.: AutoPrompt: eliciting knowledge from language models with automatically generated prompts. arXiv preprint arXiv:2010.15980 (2020)","DOI":"10.18653\/v1\/2020.emnlp-main.346"},{"key":"1235_CR49","unstructured":"Gao, Y., Xiong, Y., Gao, X., Jia, K., Pan, J., Bi, Y., Dai, Y., Sun, J., Wang, H.: Retrieval-augmented generation for large language models: a survey. arXiv preprint arXiv:2312.10997 (2023)"},{"key":"1235_CR50","doi-asserted-by":"crossref","unstructured":"Besta, M., Blach, N., Kubicek, A., Gerstenberger, R., Podstawski, M., Gianinazzi, L., Gajda, J., Lehmann, T., Niewiadomski, H., Nyczyk, P., et\u00a0al.: Graph of thoughts: solving elaborate problems with large language models. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 38, pp. 
17682\u201317690 (2024)","DOI":"10.1609\/aaai.v38i16.29720"},{"key":"1235_CR51","unstructured":"Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., Cao, Y.: React: synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629 (2022)"},{"key":"1235_CR52","unstructured":"Khattab, O., Singhvi, A., Maheshwari, P., Zhang, Z., Santhanam, K., Vardhamanan, S., Haq, S., Sharma, A., Joshi, T.T., Moazam, H., Miller, H., Zaharia, M., Potts, C.: DSPy: Compiling declarative language model calls into self-improving pipelines. arXiv preprint arXiv:2310.03714 (2023)"},{"key":"1235_CR53","unstructured":"DSPy (2024). https:\/\/github.com\/stanfordnlp\/dspy"},{"key":"1235_CR54","unstructured":"Semantic Kernel (2024). https:\/\/github.com\/microsoft\/semantic-kernel"},{"key":"1235_CR55","unstructured":"Suzgun, M., Kalai, A.T.: Meta-prompting: enhancing language models with task-agnostic scaffolding (2024). https:\/\/arxiv.org\/abs\/2401.12954"},{"key":"1235_CR56","unstructured":"Guardrails AI (2024). https:\/\/www.guardrailsai.com\/"},{"key":"1235_CR57","unstructured":"NeMo Guardrails (2024). https:\/\/github.com\/NVIDIA\/NeMo-Guardrails"},{"key":"1235_CR58","unstructured":"Reliable AI markup language (RAIL) (2024). https:\/\/www.guardrailsai.com\/docs\/how_to_guides\/rail"},{"key":"1235_CR59","unstructured":"Driess, D., Xia, F., Sajjadi, M.S.M., Lynch, C., Chowdhery, A., Ichter, B., Wahid, A., Tompson, J., Vuong, Q., Yu, T., Huang, W., Chebotar, Y., Sermanet, P., Duckworth, D., Levine, S., Vanhoucke, V., Hausman, K., Toussaint, M., Greff, K., Zeng, A., Mordatch, I., Florence, P.: PaLM-E: An embodied multimodal language model. 
arXiv preprint arXiv:2303.03378 (2023) [cs.LG]"},{"key":"1235_CR60","unstructured":"Huang, S., Dong, L., Wang, W., Hao, Y., Singhal, S., Ma, S., Lv, T., Cui, L., Mohammed, O.K., Patra, B., Liu, Q., Aggarwal, K., Chi, Z., Bjorck, J., Chaudhary, V., Som, S., Song, X., Wei, F.: Language is not all you need: aligning perception with language models. arXiv preprint arXiv:2302.14045 (2023) [cs.CL]"},{"key":"1235_CR61","unstructured":"Alayrac, J.-B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M., Ring, R., Rutherford, E., Cabi, S., Han, T., Gong, Z., Samangooei, S., Monteiro, M., Menick, J., Borgeaud, S., Brock, A., Nematzadeh, A., Sharifzadeh, S., Binkowski, M., Barreira, R., Vinyals, O., Zisserman, A., Simonyan, K.: Flamingo: a visual language model for few-shot learning. arXiv preprint arXiv:2204.14198 (2022) [cs.CV]"},{"key":"1235_CR62","unstructured":"Claris\u00f3, R., Morales, S., Cabot, J.: Impromptu DSL and toolkit: GitHub repository (2024). https:\/\/github.com\/SOM-Research\/Impromptu"},{"key":"1235_CR63","unstructured":"Langium (2024). https:\/\/langium.org\/"},{"key":"1235_CR64","unstructured":"Midjourney (2024). https:\/\/www.midjourney.com\/"},{"key":"1235_CR65","unstructured":"AUTOMATIC1111 Stable Diffusion Web UI (2023). https:\/\/github.com\/AUTOMATIC1111\/stable-diffusion-webui"},{"key":"1235_CR66","doi-asserted-by":"crossref","unstructured":"Morales, S., Claris\u00f3, R., Cabot, J.: A DSL for testing LLMs for fairness and bias. In: ACM\/IEEE 27th International Conference on Model Driven Engineering Languages and Systems (MODELS) (2024)","DOI":"10.1145\/3640310.3674093"},{"key":"1235_CR67","unstructured":"Gomez-Vazquez, M., Morales, S., Castignani, G., Claris\u00f3, R., Conrardy, A., Deladiennee, L., Renault, S., Cabot, J.: A leaderboard to benchmark ethical biases in LLMs. In: Proceedings of the 1st Workshop on AI Bias: Measurements, Mitigation, Explanation Strategies. 
CEUR Workshop Proceedings, vol. 3744 (2024). https:\/\/ceur-ws.org\/Vol-3744\/paper1.pdf"}],"container-title":["Software and Systems Modeling"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10270-024-01235-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10270-024-01235-4\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10270-024-01235-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,15]],"date-time":"2025-11-15T02:20:09Z","timestamp":1763173209000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10270-024-01235-4"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,15]]},"references-count":67,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,12]]}},"alternative-id":["1235"],"URL":"https:\/\/doi.org\/10.1007\/s10270-024-01235-4","relation":{},"ISSN":["1619-1366","1619-1374"],"issn-type":[{"value":"1619-1366","type":"print"},{"value":"1619-1374","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,1,15]]},"assertion":[{"value":"30 April 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"2 September 2024","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"29 October 2024","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 January 2025","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}