{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T21:16:49Z","timestamp":1764969409212,"version":"3.46.0"},"reference-count":40,"publisher":"Association for Computing Machinery (ACM)","issue":"6","funder":[{"name":"Key R&D Program of Zhejiang","award":["No. 2024C01069"],"award-info":[{"award-number":["No. 2024C01069"]}]},{"DOI":"10.13039\/501100001809","name":"the National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["Grant No. 62036010"],"award-info":[{"award-number":["Grant No. 62036010"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2025,12]]},"abstract":"<jats:p>Supersampling has proven highly effective in enhancing visual fidelity by reducing aliasing, increasing resolution, and generating interpolated frames. It has become a standard component of modern real-time rendering pipelines. However, on mobile platforms, deep learning-based supersampling methods remain impractical due to stringent hardware constraints, while non-neural supersampling techniques often fall short in delivering perceptually high-quality results. In particular, producing visually pleasing reconstructions and temporally coherent interpolations is still a significant challenge in mobile settings. In this work, we present a novel, lightweight supersampling framework tailored for mobile devices. Our approach substantially improves both image reconstruction quality and temporal consistency while maintaining real-time performance. For super-resolution, we propose an intra-pixel object coverage estimation method for reconstructing high-quality anti-aliased pixels in edge regions, a gradient-guided strategy for non-edge areas, and a temporal sample accumulation approach to improve overall image quality. For frame interpolation, we develop an efficient motion estimation module coupled with a lightweight fusion scheme that integrates both estimated optical flow and rendered motion vectors, enabling temporally coherent interpolation of object dynamics and lighting variations. Extensive experiments demonstrate that our method consistently outperforms existing baselines in both perceptual image quality and temporal smoothness, while maintaining real-time performance on mobile GPUs. 
A demo application and supplementary materials are available on the project page.<\/jats:p>","DOI":"10.1145\/3763348","type":"journal-article","created":{"date-parts":[[2025,12,4]],"date-time":"2025-12-04T17:15:39Z","timestamp":1764868539000},"page":"1-12","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Lightweight, Edge-Aware, and Temporally Consistent Supersampling for Mobile Real-Time Rendering"],"prefix":"10.1145","volume":"44","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8141-2335","authenticated-orcid":false,"given":"Sipeng","family":"Yang","sequence":"first","affiliation":[{"name":"State Key Laboratory of CAD &amp; CG, Zhejiang University, Hangzhou, China"},{"name":"Hangzhou Research Institute of AI and Holographic Technology, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-8342-2107","authenticated-orcid":false,"given":"Jiayu","family":"Ji","sequence":"additional","affiliation":[{"name":"State Key Laboratory of CAD &amp; CG, Zhejiang University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-6592-7463","authenticated-orcid":false,"given":"Junhao","family":"Zhuge","sequence":"additional","affiliation":[{"name":"State Key Laboratory of CAD &amp; CG, Zhejiang University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-9740-2688","authenticated-orcid":false,"given":"Jinzhe","family":"Zhao","sequence":"additional","affiliation":[{"name":"State Key Laboratory of CAD &amp; CG, Zhejiang University, Hangzhou, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0008-6423-9922","authenticated-orcid":false,"given":"Qiang","family":"Qiu","sequence":"additional","affiliation":[{"name":"OPPO Computing &amp; Graphics Research Institute, Bellevue, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0002-6140-9216","authenticated-orcid":false,"given":"Chen","family":"Li","sequence":"additional","affiliation":[{"name":"OPPO Computing &amp; Graphics Research Institute, Bellevue, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0006-4029-4615","authenticated-orcid":false,"given":"Yuzhong","family":"Yan","sequence":"additional","affiliation":[{"name":"OPPO Computing &amp; Graphics Research Institute, Bellevue, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3633-2261","authenticated-orcid":false,"given":"Kerong","family":"Wang","sequence":"additional","affiliation":[{"name":"OPPO Computing &amp; Graphics Research Institute, Bellevue, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9379-094X","authenticated-orcid":false,"given":"Lingqi","family":"Yan","sequence":"additional","affiliation":[{"name":"Mohamed bin Zayed University of Artificial Intelligence, Abu Dhabi, United Arab Emirates"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7339-2920","authenticated-orcid":false,"given":"Xiaogang","family":"Jin","sequence":"additional","affiliation":[{"name":"State Key Laboratory of CAD &amp; CG, Zhejiang University, Hangzhou, China"}]}],"member":"320","published-online":{"date-parts":[[2025,12,4]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1145\/166117.166131"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCE.2012.6227481"},{"key":"e_1_2_2_3_1","unstructured":"AMD. 2011. EQAA modes for AMD 6900 series graphics cards. https:\/\/www.geeks3d.com\/forums\/index.php?topic=2302.0."},{"key":"e_1_2_2_4_1","unstructured":"AMD. 2021. FidelityFX Super Resolution. https:\/\/gpuopen.com\/fidelityfx-superresolution\/."},{"key":"e_1_2_2_5_1","unstructured":"AMD. 2022. 
FidelityFX Super Resolution 2.0. https:\/\/gpuopen.com\/fidelityfx-superresolution-2\/."},{"key":"e_1_2_2_6_1","unstructured":"AMD. 2023. AMD FSR 3 Now Available. https:\/\/community.amd.com\/t5\/gaming\/amd-fsr-3-now-available\/ba-p\/634265."},{"key":"e_1_2_2_7_1","unstructured":"AMD. 2025a. AMD FidelityFX Optical Flow. https:\/\/gpuopen.com\/manuals\/fidelityfx_sdk\/fidelityfx_sdk-page_techniques_optical-flow\/."},{"key":"e_1_2_2_8_1","unstructured":"AMD. 2025b. Game-Changing Updates: FSR 4 AFMF 2.1 AI-Powered Features and More! https:\/\/community.amd.com\/t5\/gaming\/game-changing-updates-fsr-4-afmf-2-1-ai-powered-features-amp\/ba-p\/748504."},{"key":"e_1_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1023\/B:VISI.0000011205.11775.fd"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/7529.8927"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1016\/0923-5965(94)90027-2"},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/76.246088"},{"key":"e_1_2_2_13_1","unstructured":"Epic. 2022. Temporal Super Resolution. https:\/\/docs.unrealengine.com\/5.2\/en-US\/temporal-super-resolution-in-unreal-engine\/."},{"key":"e_1_2_2_14_1","unstructured":"Epic Games. 2025. The most powerful real-time 3D creation tool - Unreal Engine. www.unrealengine.com\/en-US\/."},{"key":"e_1_2_2_15_1","first-page":"1","article-title":"ExtraNet: Real-time extrapolated rendering for low-latency temporal supersampling","volume":"40","author":"Guo Jie","year":"2021","unstructured":"Jie Guo, Xihao Fu, Liqiang Lin, Hengjun Ma, Yanwen Guo, Shiqiu Liu, and Ling-Qi Yan. 2021. ExtraNet: Real-time extrapolated rendering for low-latency temporal supersampling. ACM Transactions on Graphics 40, 6 (2021), 1\u201316.","journal-title":"ACM Transactions on Graphics"},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1109\/JSTSP.2010.2063014"},{"key":"e_1_2_2_17_1","volume-title":"Multilayer feedforward networks are universal approximators. Neural networks 2, 5","author":"Hornik Kurt","year":"1989","unstructured":"Kurt Hornik, Maxwell Stinchcombe, and Halbert White. 1989. Multilayer feedforward networks are universal approximators. Neural networks 2, 5 (1989), 359\u2013366."},{"key":"e_1_2_2_18_1","unstructured":"Intel. 2025. Intel Xe Super Sampling 2 for Developers. https:\/\/www.intel.com\/content\/www\/us\/en\/developer\/topic-technology\/gamedev\/xess2.html."},{"key":"e_1_2_2_19_1","unstructured":"IQOO. 2025. Supercomputing Chip Q2. https:\/\/www.iqoo.com\/ae\/en\/support\/questionByTitle?title=Supercomputing%20Chip%20Q2."},{"key":"e_1_2_2_20_1","volume-title":"SMAA: Enhanced subpixel morphological antialiasing. Computer Graphics Forum 31, 2pt1","author":"Jimenez Jorge","year":"2012","unstructured":"Jorge Jimenez, Jose I Echevarria, Tiago Sousa, and Diego Gutierrez. 2012. SMAA: Enhanced subpixel morphological antialiasing. Computer Graphics Forum 31, 2pt1 (2012), 355\u2013364."},{"key":"e_1_2_2_21_1","volume-title":"Proceedings of National Telecommunications Conference. G5\u20133.","author":"Koga Toshio","year":"1981","unstructured":"Toshio Koga. 1981. Motion-compensated interframe coding for video conferencing. In Proceedings of National Telecommunications Conference. G5\u20133."},{"key":"e_1_2_2_22_1","volume-title":"Nvidia, February 2","author":"Lottes Timothy","year":"2009","unstructured":"Timothy Lottes. 2009. Fxaa. White paper, Nvidia, February 2 (2009)."},{"key":"e_1_2_2_23_1","unstructured":"Nvidia. 2006. Coverage sampled antialiasing. 
https:\/\/developer.download.nvidia.com\/assets\/gamedev\/docs\/CSAA_Tutorial.pdf."},{"key":"e_1_2_2_24_1","unstructured":"Nvidia. 2023. DLSS 3: AI-Powered Neural Graphics Innovations. https:\/\/www.nvidia.com\/en-sg\/geforce\/news\/dlss3-ai-powered-neural-graphics-innovations\/."},{"key":"e_1_2_2_25_1","unstructured":"Nvidia. 2025a. NVIDIA DLSS 4: Supreme Speed Superior Visuals Powered by AI. https:\/\/www.nvidia.com\/en-us\/geforce\/technologies\/dlss\/."},{"key":"e_1_2_2_26_1","unstructured":"Nvidia. 2025b. NVIDIA Optical Flow SDK. https:\/\/developer.nvidia.com\/optical-flow-sdk\/."},{"key":"e_1_2_2_27_1","unstructured":"Pixelworks. 2025. Pixelworks X7 Gen 2 Processor. https:\/\/www.pixelworks.com\/media\/products."},{"key":"e_1_2_2_28_1","unstructured":"Qualcomm. 2023. Snapdragon 8 Gen 3 Mobile Platform. https:\/\/www.qualcomm.com\/products\/mobile\/snapdragon\/smartphones\/snapdragon-8-series-mobile-platforms\/snapdragon-8-gen-3-mobile-platform."},{"key":"e_1_2_2_29_1","unstructured":"Qualcomm. 2024. Snapdragon 8 Elite Mobile Platform. https:\/\/www.qualcomm.com\/products\/mobile\/snapdragon\/smartphones\/snapdragon-8-series-mobile-platforms\/snapdragon-8-elite-mobile-platform. (2024)."},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/1572769.1572787"},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1117\/12.138613"},{"key":"e_1_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.1998.710815"},{"key":"e_1_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2003.819861"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3610548.3618224"},{"key":"e_1_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/3687923"},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/3386569.3392376"},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.14018"},{"key":"e_1_2_2_38_1","volume-title":"Mob-FGSR: Frame Generation and Super Resolution for Mobile Real-Time Rendering. In ACM SIGGRAPH 2024 Conference Papers. 1\u201311","author":"Yang Sipeng","year":"2024","unstructured":"Sipeng Yang, Qingchuan Zhu, Junhao Zhuge, Qiang Qiu, Chen Li, Yuzhong Yan, Huihui Xu, Ling-Qi Yan, and Xiaogang Jin. 2024. Mob-FGSR: Frame Generation and Super Resolution for Mobile Real-Time Rendering. In ACM SIGGRAPH 2024 Conference Papers. 
1\u201311."},{"key":"e_1_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3641519.3657439"},{"key":"e_1_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3610548.3618209"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3763348","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T21:13:25Z","timestamp":1764969205000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3763348"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12]]},"references-count":40,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,12]]}},"alternative-id":["10.1145\/3763348"],"URL":"https:\/\/doi.org\/10.1145\/3763348","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"type":"print","value":"0730-0301"},{"type":"electronic","value":"1557-7368"}],"subject":[],"published":{"date-parts":[[2025,12]]},"assertion":[{"value":"2025-05-23","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-08-09","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-12-04","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}