{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,16]],"date-time":"2025-10-16T00:22:43Z","timestamp":1760574163075,"version":"build-2065373602"},"reference-count":77,"publisher":"Association for Computing Machinery (ACM)","issue":"10","funder":[{"DOI":"10.13039\/501100004731","name":"Natural Science Foundation of Zhejiang Province","doi-asserted-by":"crossref","award":["LDT23F02023F02"],"award-info":[{"award-number":["LDT23F02023F02"]}],"id":[{"id":"10.13039\/501100004731","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100012226","name":"Fundamental Research Funds for the Central Universities","doi-asserted-by":"crossref","award":["226-2025-00055"],"award-info":[{"award-number":["226-2025-00055"]}],"id":[{"id":"10.13039\/501100012226","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Earth System Big Data Platform of the School of Earth Sciences, Zhejiang University"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Multimedia Comput. Commun. Appl."],"published-print":{"date-parts":[[2025,10,31]]},"abstract":"<jats:p>In this article, we focus on the one-shot novel view synthesis task, which targets synthesizing photo-realistic novel views given only one reference image per scene. Previous One-shot Generalizable Neural Radiance Field (OG-NeRF) methods solve this task in a finetuning-free manner, yet suffer from blurry results because their encoder-only architecture relies heavily on the limited reference image. On the other hand, recent diffusion-based image-to-3D methods produce vivid, plausible results by distilling pre-trained 2D diffusion models, yet require tedious per-scene optimization. Targeting these issues, we propose GD-NeRF, a generative detail compensation framework that both produces vivid, plausible details and remains finetuning-free. 
Following a coarse-to-fine strategy, it is mainly composed of a One-stage Parallel Pipeline (OPP) and a Diffusion-based 3D-consistent Enhancer (Diff3DE). At the coarse stage, OPP efficiently integrates a GAN model into the existing OG-NeRF pipeline to inject primary in-distribution details. Then, at the fine stage, Diff3DE further leverages pre-trained diffusion models to complement rich out-of-distribution details while maintaining decent 3D consistency. Extensive experiments on both synthetic and real-world datasets show that GD-NeRF noticeably improves vivid details while eliminating the need for per-scene finetuning.<\/jats:p>","DOI":"10.1145\/3748331","type":"journal-article","created":{"date-parts":[[2025,7,23]],"date-time":"2025-07-23T16:19:08Z","timestamp":1753287548000},"page":"1-24","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["GD-NeRF: Generative Detail Compensation for One-shot Generalizable Neural Radiance Fields"],"prefix":"10.1145","volume":"21","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8571-3055","authenticated-orcid":false,"given":"Xiao","family":"Pan","sequence":"first","affiliation":[{"name":"The ReLER Lab, CCAI, Zhejiang University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8783-8313","authenticated-orcid":false,"given":"Zongxin","family":"Yang","sequence":"additional","affiliation":[{"name":"DBMI, HMS, Harvard University, Cambridge, Massachusetts, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6896-8590","authenticated-orcid":false,"given":"Shuai","family":"Bai","sequence":"additional","affiliation":[{"name":"Alibaba Group, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0512-880X","authenticated-orcid":false,"given":"Yi","family":"Yang","sequence":"additional","affiliation":[{"name":"The ReLER Lab, CCAI, Zhejiang University, Hangzhou, 
China"}]}],"member":"320","published-online":{"date-parts":[[2025,10,14]]},"reference":[{"key":"e_1_3_3_2_2","unstructured":"Andrew Brock Jeff Donahue and Karen Simonyan. 2018. Large scale GAN training for high fidelity natural image synthesis. arXiv:1809.11096. Retrieved from https:\/\/arxiv.org\/abs\/1809.11096"},{"key":"e_1_3_3_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00395"},{"key":"e_1_3_3_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00951"},{"key":"e_1_3_3_5_2","doi-asserted-by":"crossref","unstructured":"Duygu Ceylan Chun-Hao Paul Huang and Niloy J. Mitra. 2023. Pix2Video: Video editing using image diffusion. arXiv:2303.12688. Retrieved from https:\/\/arxiv.org\/abs\/2303.12688","DOI":"10.1109\/ICCV51070.2023.02121"},{"key":"e_1_3_3_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01565"},{"key":"e_1_3_3_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00574"},{"key":"e_1_3_3_8_2","unstructured":"Angel X. Chang Thomas Funkhouser Leonidas Guibas Pat Hanrahan Qixing Huang Zimo Li Silvio Savarese Manolis Savva Shuran Song Hao Su et al. 2015. Shapenet: An information-rich 3D model repository. arXiv:1512.03012. Retrieved from https:\/\/arxiv.org\/abs\/1512.03012"},{"key":"e_1_3_3_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.01840"},{"key":"e_1_3_3_10_2","first-page":"370","volume-title":"Proceedings of the European Conference on Computer Vision","author":"Chen Yuedong","year":"2024","unstructured":"Yuedong Chen, Haofei Xu, Chuanxia Zheng, Bohan Zhuang, Marc Pollefeys, Andreas Geiger, Tat-Jen Cham, and Jianfei Cai. 2024. Mvsplat: Efficient 3D Gaussian splatting from sparse multi-view images. In Proceedings of the European Conference on Computer Vision. 
Springer, 370\u2013386."},{"key":"e_1_3_3_11_2","doi-asserted-by":"publisher","DOI":"10.1145\/3635717"},{"key":"e_1_3_3_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00916"},{"volume-title":"Proceedings of the IEEE Transactions on Pattern Analysis and Machine Intelligence","author":"Croitoru Florinel-Alin","key":"e_1_3_3_13_2","unstructured":"Florinel-Alin Croitoru, Vlad Hondru, Radu Tudor Ionescu, and Mubarak Shah. 2023. Diffusion models in vision: A survey. In Proceedings of the IEEE Transactions on Pattern Analysis and Machine Intelligence."},{"key":"e_1_3_3_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.01977"},{"key":"e_1_3_3_15_2","first-page":"8780","article-title":"Diffusion models beat GANs on image synthesis","volume":"34","author":"Dhariwal Prafulla","year":"2021","unstructured":"Prafulla Dhariwal and Alexander Nichol. 2021. Diffusion models beat GANs on image synthesis. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 34, 8780\u20138794.","journal-title":"Proceedings of the Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_16_2","unstructured":"Michal Geyer Omer Bar-Tal Shai Bagon and Tali Dekel. 2023. Tokenflow: Consistent diffusion features for consistent video editing. arXiv:2307.10373. Retrieved from https:\/\/arxiv.org\/abs\/2307.10373"},{"key":"e_1_3_3_17_2","article-title":"Generative adversarial nets","author":"Goodfellow Ian","year":"2014","unstructured":"Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. 2014. Generative adversarial nets. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 
27.","journal-title":"Proceedings of the Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_18_2","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Gu Jiatao","year":"2023","unstructured":"Jiatao Gu, Alex Trevithick, Kai-En Lin, Josh Susskind, Christian Theobalt, Lingjie Liu, and Ravi Ramamoorthi. 2023. NeRFDiff: Single-image view synthesis with NeRF-guided distillation from 3D-aware diffusion. In Proceedings of the International Conference on Machine Learning."},{"key":"e_1_3_3_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV51458.2022.00009"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/3640015"},{"key":"e_1_3_3_21_2","article-title":"GANs trained by a two time-scale update rule converge to a local NASH equilibrium","author":"Heusel Martin","year":"2017","unstructured":"Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. 2017. GANs trained by a two time-scale update rule converge to a local NASH equilibrium. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 30.","journal-title":"Proceedings of the Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_22_2","unstructured":"Yicong Hong Kai Zhang Jiuxiang Gu Sai Bi Yang Zhou Difan Liu Feng Liu Kalyan Sunkavalli Trung Bui and Hao Tan. 2023. LRM: Large reconstruction model for single image to 3D. arXiv:2311.04400. 
Retrieved from https:\/\/arxiv.org\/abs\/2311.04400"},{"key":"e_1_3_3_23_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.00583"},{"key":"e_1_3_3_24_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV48922.2021.01271"},{"key":"e_1_3_3_25_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.59"},{"key":"e_1_3_3_26_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00453"},{"key":"e_1_3_3_27_2","doi-asserted-by":"crossref","unstructured":"Levon Khachatryan Andranik Movsisyan Vahram Tadevosyan Roberto Henschel Zhangyang Wang Shant Navasardyan and Humphrey Shi. 2023. Text2Video-Zero: Text-to-image diffusion models are zero-shot video generators. arXiv:2303.13439. Retrieved from https:\/\/arxiv.org\/abs\/2303.13439","DOI":"10.1109\/ICCV51070.2023.01462"},{"key":"e_1_3_3_28_2","unstructured":"Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv:1412.6980. Retrieved from https:\/\/arxiv.org\/abs\/1412.6980"},{"key":"e_1_3_3_29_2","unstructured":"Junnan Li Dongxu Li Silvio Savarese and Steven Hoi. 2023. Blip-2: Bootstrapping language-image pre-training with frozen image encoders and large language models. arXiv:2301.12597. 
Retrieved from https:\/\/arxiv.org\/abs\/2301.12597"},{"key":"e_1_3_3_30_2","doi-asserted-by":"publisher","DOI":"10.1145\/3581783.3611724"},{"key":"e_1_3_3_31_2","doi-asserted-by":"publisher","DOI":"10.1145\/3568678"},{"key":"e_1_3_3_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSVT.2025.3557012"},{"key":"e_1_3_3_33_2","doi-asserted-by":"publisher","DOI":"10.1109\/WACV56688.2023.00087"},{"key":"e_1_3_3_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.00853"},{"key":"e_1_3_3_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.00951"},{"key":"e_1_3_3_36_2","first-page":"3481","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Mescheder Lars","year":"2018","unstructured":"Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. 2018. Which training methods for GANs do actually converge? In Proceedings of the International Conference on Machine Learning. PMLR, 3481\u20133490."},{"key":"e_1_3_3_37_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58452-8_24"},{"issue":"3","key":"e_1_3_3_38_2","doi-asserted-by":"crossref","first-page":"209","DOI":"10.1109\/LSP.2012.2227726","article-title":"Making a \u201ccompletely blind\u201d image quality analyzer","volume":"20","author":"Mittal Anish","year":"2012","unstructured":"Anish Mittal, Rajiv Soundararajan, and Alan C. Bovik. 2012. Making a \u201ccompletely blind\u201d image quality analyzer. IEEE Signal Processing Letters 20, 3 (2012), 209\u2013212.","journal-title":"IEEE Signal Processing Letters"},{"key":"e_1_3_3_39_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.01129"},{"key":"e_1_3_3_40_2","doi-asserted-by":"publisher","DOI":"10.1145\/3503161.3547909"},{"key":"e_1_3_3_41_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.00328"},{"key":"e_1_3_3_42_2","unstructured":"Chenyang Qi Xiaodong Cun Yong Zhang Chenyang Lei Xintao Wang Ying Shan and Qifeng Chen. 2023. 
FateZero: Fusing attentions for zero-shot text-based video editing. arXiv:2303.09535. Retrieved from https:\/\/arxiv.org\/abs\/2303.09535"},{"key":"e_1_3_3_43_2","first-page":"8748","volume-title":"Proceedings of the International Conference on Machine Learning","author":"Radford Alec","year":"2021","unstructured":"Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the International Conference on Machine Learning. PMLR, 8748\u20138763."},{"key":"e_1_3_3_44_2","unstructured":"Konstantinos Rematas Ricardo Martin-Brualla and Vittorio Ferrari. 2021. ShaRF: Shape-conditioned radiance fields from a single view. arXiv:2102.08860. Retrieved from https:\/\/arxiv.org\/abs\/2102.08860"},{"key":"e_1_3_3_45_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.01042"},{"key":"e_1_3_3_46_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"e_1_3_3_47_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52688.2022.00613"},{"key":"e_1_3_3_48_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.00900"},{"key":"e_1_3_3_49_2","first-page":"20154","article-title":"GRAF: Generative radiance fields for 3D-aware image synthesis","volume":"33","author":"Schwarz Katja","year":"2020","unstructured":"Katja Schwarz, Yiyi Liao, Michael Niemeyer, and Andreas Geiger. 2020. GRAF: Generative radiance fields for 3D-aware image synthesis. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 33, 20154\u201320166.","journal-title":"Proceedings of the Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_50_2","unstructured":"Yichun Shi Peng Wang Jianglong Ye Mai Long Kejie Li and Xiao Yang. 2023. MVDream: Multi-view diffusion for 3D generation. arXiv:2308.16512. 
Retrieved from https:\/\/arxiv.org\/abs\/2308.16512"},{"key":"e_1_3_3_51_2","first-page":"19313","article-title":"Light field networks: Neural scene representations with single-evaluation rendering","volume":"34","author":"Sitzmann Vincent","year":"2021","unstructured":"Vincent Sitzmann, Semon Rezchikov, Bill Freeman, Josh Tenenbaum, and Fredo Durand. 2021. Light field networks: Neural scene representations with single-evaluation rendering. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 34, 19313\u201319325.","journal-title":"Proceedings of the Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_52_2","article-title":"Scene representation networks: Continuous 3D-structure-aware neural scene representations","author":"Sitzmann Vincent","year":"2019","unstructured":"Vincent Sitzmann, Michael Zollh\u00f6fer, and Gordon Wetzstein. 2019. Scene representation networks: Continuous 3D-structure-aware neural scene representations. In Proceedings of the Advances in Neural Information Processing Systems, Vol. 32.","journal-title":"Proceedings of the Advances in Neural Information Processing Systems"},{"key":"e_1_3_3_53_2","unstructured":"Jiaming Song Chenlin Meng and Stefano Ermon. 2020. Denoising diffusion implicit models. arXiv:2010.02502. Retrieved from https:\/\/arxiv.org\/abs\/2010.02502"},{"key":"e_1_3_3_54_2","doi-asserted-by":"crossref","unstructured":"Jiaxiang Tang Zhaoxi Chen Xiaokang Chen Tengfei Wang Gang Zeng and Ziwei Liu. 2024. LGM: Large multi-view Gaussian model for high-resolution 3D content creation. arXiv:2402.05054. Retrieved from https:\/\/arxiv.org\/abs\/2402.05054","DOI":"10.1007\/978-3-031-73235-5_1"},{"key":"e_1_3_3_55_2","unstructured":"Jiaxiang Tang Jiawei Ren Hang Zhou Ziwei Liu and Gang Zeng. 2023. DreamGaussian: Generative Gaussian splatting for efficient 3D content creation. arXiv:2309.16653. 
Retrieved from https:\/\/arxiv.org\/abs\/2309.16653"},{"key":"e_1_3_3_56_2","doi-asserted-by":"crossref","unstructured":"Junshu Tang Tengfei Wang Bo Zhang Ting Zhang Ran Yi Lizhuang Ma and Dong Chen. 2023. Make-it-3D: High-fidelity 3D creation from a single image with diffusion prior. arXiv:2303.14184. Retrieved from https:\/\/arxiv.org\/abs\/2303.14184","DOI":"10.1109\/ICCV51070.2023.02086"},{"key":"e_1_3_3_57_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58536-5_24"},{"key":"e_1_3_3_58_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52729.2023.00191"},{"key":"e_1_3_3_59_2","unstructured":"Anwaar Ulhaq Naveed Akhtar and Ganna Pogrebna. 2022. Efficient diffusion models for vision: A survey. arXiv:2210.09292. Retrieved from https:\/\/arxiv.org\/abs\/2210.09292"},{"key":"e_1_3_3_60_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2003.819861"},{"key":"e_1_3_3_61_2","volume-title":"Proceedings of the International Conference on Learning Representations","author":"Watson Daniel","year":"2023","unstructured":"Daniel Watson, William Chan, Ricardo Martin Brualla, Jonathan Ho, Andrea Tagliasacchi, and Mohammad Norouzi. 2023. Novel view synthesis with diffusion models. In Proceedings of the International Conference on Learning Representations. Retrieved from https:\/\/openreview.net\/forum?id=HtoA0oT30jC"},{"key":"e_1_3_3_62_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.00701"},{"key":"e_1_3_3_63_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR42600.2020.00008"},{"key":"e_1_3_3_64_2","doi-asserted-by":"publisher","DOI":"10.1145\/3614425"},{"key":"e_1_3_3_65_2","unstructured":"Dejia Xu Yifan Jiang Peihao Wang Zhiwen Fan Humphrey Shi and Zhangyang Wang. 2022. SinNeRF: Training neural radiance fields on complex scenes from a single image arXiv:2204.00928. Retrieved from https:\/\/arxiv.org\/abs\/2204.00928"},{"key":"e_1_3_3_66_2","unstructured":"Dejia Xu Yifan Jiang Peihao Wang Zhiwen Fan Yi Wang and Zhangyang Wang. 2022. 
NeuralLift-360: Lifting an in-the-wild 2D photo to a 3D object with 360 views. arXiv:2211.16431. Retrieved from https:\/\/arxiv.org\/abs\/2211.16431"},{"key":"e_1_3_3_67_2","doi-asserted-by":"crossref","unstructured":"Shuai Yang Yifan Zhou Ziwei Liu and Chen Change Loy. 2023. Rerender a video: Zero-shot text-guided video-to-video translation. arXiv:2306.07954. Retrieved from https:\/\/arxiv.org\/abs\/2306.07954","DOI":"10.1145\/3610548.3618160"},{"key":"e_1_3_3_68_2","doi-asserted-by":"publisher","DOI":"10.1631\/FITEE.2100463"},{"key":"e_1_3_3_69_2","volume-title":"Proceedings of the 41st International Conference on Machine Learning","author":"Yang Zongxin","year":"2024","unstructured":"Zongxin Yang, Guikun Chen, Xiaodi Li, Wenguan Wang, and Yi Yang. 2024. DoraemonGPT: Toward understanding dynamic scenes with large language models (exemplified as a video agent). In Proceedings of the 41st International Conference on Machine Learning."},{"key":"e_1_3_3_70_2","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2024.3383592"},{"key":"e_1_3_3_71_2","unstructured":"Taoran Yi Jiemin Fang Guanjun Wu Lingxi Xie Xiaopeng Zhang Wenyu Liu Qi Tian and Xinggang Wang. 2023. Gaussiandreamer: Fast generation from text to 3D Gaussian splatting with point cloud priors. arXiv:2310.08529. Retrieved from https:\/\/arxiv.org\/abs\/2310.08529"},{"key":"e_1_3_3_72_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR46437.2021.00455"},{"key":"e_1_3_3_73_2","doi-asserted-by":"publisher","DOI":"10.1145\/3582694"},{"key":"e_1_3_3_74_2","first-page":"1","volume-title":"Proceedings of the European Conference on Computer Vision","author":"Zhang Kai","year":"2024","unstructured":"Kai Zhang, Sai Bi, Hao Tan, Yuanbo Xiangli, Nanxuan Zhao, Kalyan Sunkavalli, and Zexiang Xu. 2024. GS-LRM: Large reconstruction model for 3D Gaussian splatting. In Proceedings of the European Conference on Computer Vision. 
Springer, 1\u201319."},{"key":"e_1_3_3_75_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV51070.2023.00355"},{"key":"e_1_3_3_76_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00068"},{"key":"e_1_3_3_77_2","doi-asserted-by":"publisher","DOI":"10.1145\/3664199"},{"key":"e_1_3_3_78_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR52733.2024.00983"}],"container-title":["ACM Transactions on Multimedia Computing, Communications, and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3748331","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,15]],"date-time":"2025-10-15T06:23:51Z","timestamp":1760509431000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3748331"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,14]]},"references-count":77,"journal-issue":{"issue":"10","published-print":{"date-parts":[[2025,10,31]]}},"alternative-id":["10.1145\/3748331"],"URL":"https:\/\/doi.org\/10.1145\/3748331","relation":{},"ISSN":["1551-6857","1551-6865"],"issn-type":[{"type":"print","value":"1551-6857"},{"type":"electronic","value":"1551-6865"}],"subject":[],"published":{"date-parts":[[2025,10,14]]},"assertion":[{"value":"2024-10-28","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-06-08","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-10-14","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}