{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,12]],"date-time":"2026-02-12T14:05:56Z","timestamp":1770905156938,"version":"3.50.1"},"reference-count":74,"publisher":"Wiley","issue":"7","license":[{"start":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:00:00Z","timestamp":1760140800000},"content-version":"vor","delay-in-days":10,"URL":"http:\/\/onlinelibrary.wiley.com\/termsAndConditions#vor"}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["Computer Graphics Forum"],"published-print":{"date-parts":[[2025,10]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Parametric CAD systems use domain\u2010specific languages (DSLs) to represent geometry as programs, enabling both flexible modeling and structured editing. With the rise of large language models (LLMs), there is growing interest in generating such programs from natural language. This raises a key question: what kind of DSL best supports both CAD generation and editing, whether performed by a human or an AI? In this work, we introduce AIDL, a hierarchical, solver\u2010aided DSL designed to align with the strengths of LLMs while remaining interpretable and editable by humans. AIDL enables high\u2010level reasoning by breaking problems into abstract components and structural relationships, while offloading low\u2010level geometric reasoning to a constraint solver. We evaluate AIDL in a 2D text\u2010to\u2010CAD setting using a zero\u2010shot prompt\u2010based interface and compare it to OpenSCAD, a widely used CAD DSL that appears in LLM training data. AIDL produces results that are visually competitive and significantly easier to edit. Our findings suggest that language design is a powerful complement to model training and prompt engineering for building collaborative AI\u2013human tools in CAD. 
Code is available at <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/deGravity\/aidl\">https:\/\/github.com\/deGravity\/aidl<\/jats:ext-link>.<\/jats:p>","DOI":"10.1111\/cgf.70250","type":"journal-article","created":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T12:17:44Z","timestamp":1760185064000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["A Solver\u2010Aided Hierarchical Language for LLM\u2010Driven CAD Design"],"prefix":"10.1111","volume":"44","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8524-4730","authenticated-orcid":false,"given":"B. T.","family":"Jones","sequence":"first","affiliation":[{"name":"MIT CSAIL  USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5084-3319","authenticated-orcid":false,"given":"Z.","family":"Zhang","sequence":"additional","affiliation":[{"name":"Department of Computer Science University of Washington  Seattle WA USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3484-4004","authenticated-orcid":false,"given":"F.","family":"H\u00e4hnlein","sequence":"additional","affiliation":[{"name":"Department of Computer Science University of Washington  Seattle WA USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0212-5643","authenticated-orcid":false,"given":"W.","family":"Matusik","sequence":"additional","affiliation":[{"name":"MIT CSAIL  USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8113-7580","authenticated-orcid":false,"given":"M.","family":"Ahmad","sequence":"additional","affiliation":[{"name":"Adobe  Seattle USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3996-6588","authenticated-orcid":false,"given":"V.","family":"Kim","sequence":"additional","affiliation":[{"name":"Adobe  Seattle USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2464-0876","authenticated-orcid":false,"given":"A.","family":"Schulz","sequence":"additional","affiliation":[{"name":"Department of Computer Science University of 
Washington  Seattle WA USA"}]}],"member":"311","published-online":{"date-parts":[[2025,10,11]]},"reference":[{"key":"e_1_2_6_2_2","unstructured":"BubeckS. ChandrasekaranV. EldanR. GehrkeJ. HorvitzE. KamarE. LeeP. LeeY. T. LiY. LundbergS. et al.: Sparks of artificial general intelligence: Early experiments with gpt-4.arXiv preprint arXiv:2303.12712(2023). 3 4"},{"key":"e_1_2_6_3_2","unstructured":"BairiR. SonwaneA. KanadeA. VageeshDC. IyerA. S. ParthasarathyS. RajamaniS. AshokB. ShetS. P.:CodePlan: Repository-level Coding using LLMs and Planning. URL:https:\/\/www.semanticscholar.org\/paper\/f81a1b4510631d14b5b565c4701ee056f8d5c72f. 3"},{"key":"e_1_2_6_4_2","unstructured":"CadQuery:Cadquery.https:\/\/github.com\/CadQuery\/cadquery 2024. 3"},{"key":"e_1_2_6_5_2","doi-asserted-by":"publisher","DOI":"10.1145\/3591223"},{"key":"e_1_2_6_6_2","doi-asserted-by":"publisher","DOI":"10.1145\/3272127.3275006"},{"key":"e_1_2_6_6_3","doi-asserted-by":"crossref","unstructured":"doi:10.1145\/3272127.3275006. 3","DOI":"10.1145\/3272127.3275006"},{"key":"e_1_2_6_7_2","unstructured":"DongY. JiangX. JinZ. LiG.:Self-collaboration Code Generation via ChatGPT. Publisher: arXiv Version Number: 2. URL:https:\/\/arxiv.org\/abs\/2304.07590"},{"key":"e_1_2_6_7_3","unstructured":"doi:10.48550\/ARXIV.2304.07590. 3"},{"key":"e_1_2_6_8_2","unstructured":"EllisK. RitchieD. Solar-LezamaA. TenenbaumJ. B.: Learning to Infer Graphics Programs from Hand-Drawn Images.arXiv:1707.09627 [cs](July2017). arXiv: 1707.09627. URL:http:\/\/arxiv.org\/abs\/1707.09627. 2"},{"key":"e_1_2_6_9_2","volume-title":"Advances in Neural Information Processing Systems","author":"Ellis K.","year":"2018"},{"key":"e_1_2_6_10_2","first-page":"5885","volume-title":"Advances in Neural Information Processing Systems","author":"Ganin Y.","year":"2021"},{"key":"e_1_2_6_11_2","unstructured":"GrattafioriW. X. D\u00e9fossezA. CopetJ. AzharF. TouvronH. MartinL. UsunierN. ScialomT. 
SynnaeveG.: Code llama: Open foundation models for code.arXiv preprint arXiv:2308.12950(2023). 3"},{"key":"e_1_2_6_12_2","doi-asserted-by":"publisher","DOI":"10.1145\/3528223.3530078"},{"key":"e_1_2_6_12_3","doi-asserted-by":"crossref","unstructured":"doi:10.1145\/3528223.3530078. 2","DOI":"10.1145\/3528223.3530078"},{"key":"e_1_2_6_13_2","unstructured":"HongR. ZhangH. PanX. YuD. ZhangC.:Abstraction-of-thought makes language models better reasoners 2024. URL:https:\/\/arxiv.org\/abs\/2406.12442 arXiv:2406.12442. 2"},{"key":"e_1_2_6_14_2","doi-asserted-by":"publisher","DOI":"10.1145\/3414685.3417812"},{"key":"e_1_2_6_15_2","doi-asserted-by":"publisher","DOI":"10.1145\/3478513.3480562"},{"key":"e_1_2_6_15_3","doi-asserted-by":"crossref","unstructured":"doi:10.1145\/3478513.3480562. 2","DOI":"10.1145\/3478513.3480562"},{"key":"e_1_2_6_16_2","unstructured":"JayaramanP. K. LambourneJ. G. DesaiN. WillisK. SanghiA. MorrisN. J. W.: SolidGen: An Autoregressive Model for Direct B-rep Synthesis.Transactions on Machine Learning Research(Feb.2023). 2"},{"key":"e_1_2_6_17_2","doi-asserted-by":"crossref","unstructured":"KhanM. S. DupontE. AliS. A. CherenkovaK. KacemA. AouadaD.: Cad-signet: Cad language inference from point clouds using layer-wise sketch instance guided attention. InProceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition(2024) pp.4713\u20134722. 2","DOI":"10.1109\/CVPR52733.2024.00451"},{"key":"e_1_2_6_18_2","doi-asserted-by":"crossref","first-page":"9593","DOI":"10.1109\/CVPR.2019.00983","volume-title":"2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Koch S.","year":"2019"},{"key":"e_1_2_6_18_3","doi-asserted-by":"crossref","unstructured":"doi:10.1109\/CVPR.2019.00983. 2","DOI":"10.1109\/CVPR.2019.00983"},{"key":"e_1_2_6_19_2","first-page":"7552","article-title":"Text2cad: Generating sequential cad designs from beginner-to-expert level text prompts","volume":"37","author":"Khan M. 
S.","year":"2024","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_6_20_2","unstructured":"LiR. AllalL. B. ZiY. MuennighoffN. KocetkovD. MouC. MaroneM. AkikiC. LiJ. ChimJ. LiuQ. ZheltonozhskiiE. ZhuoT. Y. WangT. DehaeneO. DavaadorjM. Lamy-PoirierJ. MonteiroJ. ShliazhkoO. GontierN. MeadeN. ZebazeA. YeeM.-H. UmapathiL. K. ZhuJ. LipkinB. OblokulovM. WangZ. MurthyR. StillermanJ. PatelS. S. AbulkhanovD. ZoccaM. DeyM. ZhangZ. FahmyN. BhattacharyyaU. YuW. SinghS. LuccioniS. VillegasP. KunakovM. ZhdanovF. RomeroM. LeeT. TimorN. DingJ. SchlesingerC. SchoelkopfH. EbertJ. DaoT. MishraM. GuA. RobinsonJ. AndersonC. J. Dolan-GavittB. ContractorD. ReddyS. FriedD. BahdanauD. JerniteY. FerrandisC. M. HughesS. WolfT. GuhaA. vonWerraL. deVriesH.:StarCoder: may the source be with you! Dec.2023. arXiv:2305.06161 [cs]. URL:http:\/\/arxiv.org\/abs\/2305.06161"},{"key":"e_1_2_6_20_3","unstructured":"doi:10.48550\/arXiv.2305.06161. 3"},{"key":"e_1_2_6_21_2","doi-asserted-by":"crossref","unstructured":"LiP. GuoJ. LiH. BenesB. YanD.-M.: Sfmcad: Unsupervised cad reconstruction by learning sketch-based feature modeling operations. InProceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition(2024) pp.4671\u20134680. 2","DOI":"10.1109\/CVPR52733.2024.00447"},{"key":"e_1_2_6_22_2","unstructured":"LiP. GuoJ. ZhangX. YanD.-m.:SECAD-Net: Self-Supervised CAD Reconstruction by Learning Sketch-Extrude Operations. Publisher: arXiv Version Number: 1. URL:https:\/\/arxiv.org\/abs\/2303.10613"},{"key":"e_1_2_6_22_3","unstructured":"doi:10.48550\/ARXIV.2303.10613. 2"},{"key":"e_1_2_6_23_2","unstructured":"LozhkovA. LiR. AllalL. B. CassanoF. Lamy-PoirierJ. TaziN. TangA. PykhtarD. LiuJ. WeiY. LiuT. TianM. KocetkovD. ZuckerA. BelkadaY. WangZ. LiuQ. AbulkhanovD. PaulI. LiZ. LiW.-D. RisdalM. LiJ. ZhuJ. ZhuoT. Y. ZheltonozhskiiE. DadeN. O. O. YuW. KraussL. JainN. SuY. HeX. DeyM. AbatiE. ChaiY. MuennighoffN. TangX. OblokulovM. AkikiC. MaroneM. MouC. 
MishraM. GuA. HuiB. DaoT. ZebazeA. DehaeneO. PatryN. XuC. McAuleyJ. HuH. ScholakT. PaquetS. RobinsonJ. AndersonC. J. ChapadosN. PatwaryM. TajbakhshN. JerniteY. FerrandisC. M. ZhangL. HughesS. WolfT. GuhaA. vonWerraL. deVriesH.:StarCoder 2 and The Stack v2: The Next Generation Feb.2024. arXiv:2402.19173 [cs]. URL:http:\/\/arxiv.org\/abs\/2402.19173"},{"key":"e_1_2_6_23_3","unstructured":"doi:10.48550\/arXiv.2402.19173. 3"},{"key":"e_1_2_6_24_2","doi-asserted-by":"crossref","unstructured":"LiuY. ObukhovA. WegnerJ. D. SchindlerK.: Point2cad: Reverse engineering cad models from 3d point clouds. InProceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition(2024) pp.3763\u20133772. 2","DOI":"10.1109\/CVPR52733.2024.00361"},{"issue":"4","key":"e_1_2_6_25_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3528223.3530133","article-title":"Free2CAD: parsing freehand drawings into CAD commands","volume":"41","author":"Li C.","year":"2022","journal-title":"ACM Transactions on Graphics"},{"key":"e_1_2_6_25_3","doi-asserted-by":"crossref","unstructured":"doi:10.1145\/3528223.3530133. 2","DOI":"10.1145\/3528223.3530133"},{"key":"e_1_2_6_26_2","doi-asserted-by":"crossref","unstructured":"LiX. SongY. LouY. ZhouX.: Cad translator: An effective drive for text to 3d parametric computer-aided design generative modeling. InProceedings of the 32nd ACM International Conference on Multimedia(2024) pp.8461\u20138470. 2","DOI":"10.1145\/3664647.3681549"},{"key":"e_1_2_6_27_2","first-page":"1","volume-title":"SIGGRAPH Asia 2022 Conference Papers","author":"Lambourne J. G.","year":"2022"},{"key":"e_1_2_6_27_3","unstructured":"doi:10.1145\/3550469.3555424.2 3"},{"key":"e_1_2_6_28_2","unstructured":"LuoZ. XuC. ZhaoP. SunQ. GengX. HuW. TaoC. MaJ. LinQ. JiangD.:WizardCoder: Empowering Code Large Language Models with Evol-Instruct June2023. arXiv:2306.08568 [cs]. 
URL:http:\/\/arxiv.org\/abs\/2306.08568"},{"key":"e_1_2_6_28_3","unstructured":"doi:10.48550\/arXiv.2306.08568. 3"},{"key":"e_1_2_6_29_2","doi-asserted-by":"publisher","DOI":"10.1145\/3622758.3622895"},{"key":"e_1_2_6_29_3","doi-asserted-by":"crossref","unstructured":"doi:10.1145\/3622758.3622895. 3","DOI":"10.1145\/3622758.3622895"},{"key":"e_1_2_6_30_2","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459823"},{"key":"e_1_2_6_31_2","doi-asserted-by":"crossref","unstructured":"MaW. ChenS. LouY. LiX. ZhouX.: Draw step by step: Reconstructing cad construction sequences from point clouds via multimodal diffusion. InProceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition(2024) pp.27154\u201327163. 2","DOI":"10.1109\/CVPR52733.2024.02564"},{"key":"e_1_2_6_32_2","unstructured":"MakaturaL. FosheyM. WangB. H\u00e4hnleinF. MaP. DengB. TjandrasuwitaM. SpielbergA. OwensC. E. ChenP. Y. et al.: How can large language models help humans in design and manufacturing?arXiv preprint arXiv:2307.14377(2023). 3 4"},{"key":"e_1_2_6_33_2","first-page":"7220","volume-title":"Proceedings of the 37th International Conference on Machine Learning","author":"Nash C.","year":"2020"},{"key":"e_1_2_6_34_2","doi-asserted-by":"crossref","unstructured":"NandiC. WillseyM. AndersonA. WilcoxJ. R. DarulovaE. GrossmanD. TatlockZ.: Synthesizing structured cad models with equality saturation and inverse transformations. InProceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation(2020) pp.31\u201344. 3","DOI":"10.1145\/3385412.3386012"},{"key":"e_1_2_6_35_2","unstructured":"OCCT3D:OpenCascade Feb.2021. URL:https:\/\/occt3d.com\/. 6"},{"key":"e_1_2_6_36_2","unstructured":"Onshape: Featurescript.https:\/\/cad.onshape.com\/FsDoc\/ 2024. 3"},{"key":"e_1_2_6_37_2","unstructured":"ParaW. BhatS. GuerreroP. KellyT. MitraN. GuibasL. WonkaP.:SketchGen: Generating Constrained CAD Sketches. 
2"},{"key":"e_1_2_6_38_2","first-page":"5077","article-title":"Sketchgen: Generating constrained cad sketches","volume":"34","author":"Para W.","year":"2021","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_6_39_2","unstructured":"PearlO. LangI. HuY. YehR. A. HanockaR.: Geocode: Interpretable shape programs.arXiv preprint arXiv:2212.11715(2022). 3"},{"key":"e_1_2_6_40_2","doi-asserted-by":"crossref","unstructured":"RenD. ZhengJ. CaiJ. LiJ. ZhangJ.:ExtrudeNet: Unsupervised Inverse Sketch-and-Extrude for Shape Parsing Sept.2022. arXiv:2209.15632 [cs]. URL:http:\/\/arxiv.org\/abs\/2209.15632","DOI":"10.1007\/978-3-031-20086-1_28"},{"key":"e_1_2_6_40_3","unstructured":"doi:10.48550\/arXiv.2209.15632. 2"},{"key":"e_1_2_6_41_2","unstructured":"SilverT. DanS. SrinivasK. TenenbaumJ. B. KaelblingL. P. KatzM.:Generalized Planning in PDDL Domains with Pretrained Large Language Models. Publisher: arXiv Version Number: 1. URL:https:\/\/arxiv.org\/abs\/2305.11014"},{"key":"e_1_2_6_41_3","unstructured":"doi:10.48550\/ARXIV.2305.11014. 3"},{"key":"e_1_2_6_42_2","unstructured":"SeffA. ZhouW. RichardsonN. AdamsR. P.:Vitruvion: A Generative Model of Parametric CAD Sketches. Tech. Rep. arXiv:2109.14124 arXiv Apr.2022. arXiv:2109.14124 [cs] type: article. URL:http:\/\/arxiv.org\/abs\/2109.14124. 2"},{"key":"e_1_2_6_43_2","unstructured":"WesthuesJ.:SolveSpace Nov.2022. URL:https:\/\/solvespace.com\/. 6"},{"key":"e_1_2_6_44_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01539"},{"key":"e_1_2_6_44_3","doi-asserted-by":"crossref","unstructured":"doi:10.1109\/CVPR52688.2022.01539. 2","DOI":"10.1109\/CVPR52688.2022.01539"},{"key":"e_1_2_6_45_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPRW53098.2021.00239"},{"key":"e_1_2_6_45_3","doi-asserted-by":"crossref","unstructured":"doi:10.1109\/CVPRW53098.2021.00239. 2","DOI":"10.1109\/CVPRW53098.2021.00239"},{"key":"e_1_2_6_46_2","unstructured":"WillisK. KatzM. JayaramanP. K. 
CaseyE.:Using AI to Power AutoConstrain in Fusion Automated Sketching Jan.2025. URL:https:\/\/www.research.autodesk.com\/blog\/using-ai-to-power-autoconstrain-in-fusion-automated-sketching\/. 2"},{"key":"e_1_2_6_47_2","doi-asserted-by":"publisher","DOI":"10.1145\/3450626.3459818"},{"key":"e_1_2_6_47_3","doi-asserted-by":"crossref","unstructured":"doi:10.1145\/3450626.3459818. 2 3","DOI":"10.1145\/3450626.3459818"},{"key":"e_1_2_6_48_2","doi-asserted-by":"crossref","unstructured":"WuR. SuW. LiaoJ.: Chat2svg: Vector graphics generation with large language models and image diffusion models.2025 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)(2025). 2","DOI":"10.1109\/CVPR52734.2025.02206"},{"key":"e_1_2_6_49_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00670"},{"key":"e_1_2_6_49_3","doi-asserted-by":"crossref","unstructured":"doi:10.1109\/ICCV48922.2021.00670. 2 3","DOI":"10.1109\/ICCV48922.2021.00670"},{"key":"e_1_2_6_50_2","unstructured":"WangR. YuanY. SunS. BianJ.: Text-to-cad generation through infusing visual feedback in large language models.arXiv preprint arXiv:2501.19054(2025). 2"},{"issue":"4","key":"e_1_2_6_51_2","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3658129","article-title":"Brepgen: A b-rep generative diffusion model with structured latent geometry","volume":"43","author":"Xu X.","year":"2024","journal-title":"ACM Transactions on Graphics (TOG)"},{"key":"e_1_2_6_52_2","unstructured":"XuC. SunQ. ZhengK. GengX. ZhaoP. FengJ. TaoC. JiangD.:WizardLM: Empowering Large Language Models to Follow Complex Instructions June2023. arXiv:2304.12244 [cs]. URL:http:\/\/arxiv.org\/abs\/2304.12244"},{"key":"e_1_2_6_52_3","unstructured":"doi:10.48550\/arXiv.2304.12244. 3"},{"key":"e_1_2_6_53_2","first-page":"24698","volume-title":"Proceedings of the 39th International Conference on Machine Learning","author":"Xu X.","year":"2022"},{"key":"e_1_2_6_54_2","doi-asserted-by":"crossref","unstructured":"YuF. ChenZ. 
LiM. SanghiA. ShayaniH. Mahdavi-AmiriA. ZhangH.: Capri-net: Learning compact cad shapes with adaptive primitive assembly. InProceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition(2022) pp.11768\u201311778. 3","DOI":"10.1109\/CVPR52688.2022.01147"},{"key":"e_1_2_6_55_2","unstructured":"ZhangZ. ChenC. LiuB. LiaoC. GongZ. YuH. LiJ. WangR.: Unifying the perspectives of NLP and software engineering: A survey on language models for code.Transactions on Machine Learning Research(2024). URL:https:\/\/openreview.net\/forum?id=hkNnGqZnpa. 3"},{"key":"e_1_2_6_56_2","unstructured":"ZhangZ. SunS. WangW. CaiD. BianJ.:FlexCAD: Unified and Versatile Controllable CAD Generation with Fine-tuned Large Language Models. URL:https:\/\/openreview.net\/forum?id=Z0eiiV3Yyh. 2"}],"container-title":["Computer Graphics Forum"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1111\/cgf.70250","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,16]],"date-time":"2025-10-16T21:36:27Z","timestamp":1760650587000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1111\/cgf.70250"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10]]},"references-count":74,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2025,10]]}},"alternative-id":["10.1111\/cgf.70250"],"URL":"https:\/\/doi.org\/10.1111\/cgf.70250","archive":["Portico"],"relation":{},"ISSN":["0167-7055","1467-8659"],"issn-type":[{"value":"0167-7055","type":"print"},{"value":"1467-8659","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10]]},"assertion":[{"value":"2025-10-11","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"e70250"}}