{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,14]],"date-time":"2025-12-14T03:18:33Z","timestamp":1765682313523,"version":"3.41.0"},"reference-count":62,"publisher":"Association for Computing Machinery (ACM)","issue":"1",
"license":[{"start":{"date-parts":[[2024,12,20]],"date-time":"2024-12-20T00:00:00Z","timestamp":1734652800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],
"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["62472443"],"award-info":[{"award-number":["62472443"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100012166","name":"National Key R&D Program of China","doi-asserted-by":"crossref","award":["2023ZD0508201"],"award-info":[{"award-number":["2023ZD0508201"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Frontier Cross Project of Central South University","award":["2023QYJC008"],"award-info":[{"award-number":["2023QYJC008"]}]},{"name":"General Project of Xiangjiang Lab","award":["23XJ03008"],"award-info":[{"award-number":["23XJ03008"]}]},{"name":"Key Research and Development Plan of Hunan Province","award":["2023SK2027"],"award-info":[{"award-number":["2023SK2027"]}]},{"DOI":"10.13039\/501100004735","name":"Hunan Provincial Natural Science Foundation","doi-asserted-by":"crossref","award":["2022JJ30851, 2023JJ70061, 2024JJ3032"],"award-info":[{"award-number":["2022JJ30851, 2023JJ70061, 2024JJ3032"]}],"id":[{"id":"10.13039\/501100004735","id-type":"DOI","asserted-by":"crossref"}]}],
"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Multimedia Comput. Commun. Appl."],"published-print":{"date-parts":[[2025,1,31]]},
"abstract":"<jats:p>\n            Guided depth super-resolution (GDSR) aims to enhance the level of detail in low-resolution depth images by utilizing the information present in the corresponding high-resolution RGB images. While existing methods utilize different approaches to guide the RGB image to the source image, they often ignore the texture similarity between these two images and usually suffer from unsatisfactory outline reconstruction of the depth map. In this article, we introduce an adaptive texture migration network (ATMNet) designed to mine rich feature information from RGB images and migrate them to the depth image. Specifically, we propose a multi-modal feature extractor (MMFE) to extract private and shared features between the depth map and RGB image. In addition, we present a texture migration module (TMM) to remap and fuse the features extracted from the raw image pairs. Last but not least, we develop a weighted adaptive loss to enhance the reconstruction of the edge areas in the depth map. Extensive experiments on public datasets such as Middlebury, NYUv2, and DIML demonstrate that our method outperforms the existing state-of-the-art GDSR methods and strikes a remarkable balance between performance and efficiency. The source code is available at\n            <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"url\" xlink:href=\"https:\/\/github.com\/MuggleTan\/ATMNet\">https:\/\/github.com\/MuggleTan\/ATMNet<\/jats:ext-link>\n            .\n          <\/jats:p>",
"DOI":"10.1145\/3702642","type":"journal-article","created":{"date-parts":[[2024,11,1]],"date-time":"2024-11-01T09:08:25Z","timestamp":1730452105000},"page":"1-21","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["ATMNet: Adaptive Texture Migration Network for Guided Depth Super-Resolution"],"prefix":"10.1145","volume":"21",
"author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4143-6399","authenticated-orcid":false,"given":"Kehua","family":"Guo","sequence":"first","affiliation":[{"name":"School of Computer Science and Engineering, Central South University, Changsha, China and The Xiangjiang Laboratory, Changsha, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4595-1905","authenticated-orcid":false,"given":"Xuyang","family":"Tan","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering, Central South University, Changsha, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1349-3399","authenticated-orcid":false,"given":"Xiangyuan","family":"Zhu","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering, Central South University, Changsha, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7360-7928","authenticated-orcid":false,"given":"Shaojun","family":"Guo","sequence":"additional","affiliation":[{"name":"National Institution of Defense Technology Innovation, Academy of Military Sciences of PLA China, Beijing, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8046-2816","authenticated-orcid":false,"given":"Zhipeng","family":"Xi","sequence":"additional","affiliation":[{"name":"National Institution of Defense Technology Innovation, Academy of Military Sciences of PLA China, Beijing, China"}]}],
"member":"320","published-online":{"date-parts":[[2024,12,20]]},
"reference":[{"key":"e_1_3_1_2_2","first-page":"391","volume-title":"Proc. IEEE Conf. Comput. Vis. Pattern Recognit.","author":"Bristow Hilton","year":"2013","unstructured":"Hilton Bristow, Anders Eriksson, and Simon Lucey. 2013. Fast convolutional sparse coding. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 391\u2013398."},{"doi-asserted-by":"publisher","key":"e_1_3_1_3_2","DOI":"10.1109\/CVPR.2005.38"},{"doi-asserted-by":"publisher","key":"e_1_3_1_4_2","DOI":"10.1145\/3232678"},{"key":"e_1_3_1_5_2","first-page":"1","article-title":"MSDformer: Multiscale deformable transformer for hyperspectral image super-resolution","volume":"61","author":"Chen Shi","year":"2023","unstructured":"Shi Chen, Lefei Zhang, and Liangpei Zhang. 2023. MSDformer: Multiscale deformable transformer for hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 61 (2023), 1\u201314.","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"e_1_3_1_6_2","first-page":"4193","volume-title":"Proc. IEEE Conf. Comput. Vis. Pattern Recognit.","author":"Chen Xiaokang","year":"2020","unstructured":"Xiaokang Chen, Kwan-Yee Lin, Chen Qian, Gang Zeng, and Hongsheng Li. 2020. 3D sketch-aware semantic scene completion via semi-supervised structure prior. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 4193\u20134202."},{"key":"e_1_3_1_7_2","doi-asserted-by":"crossref","first-page":"114877","DOI":"10.1016\/j.eswa.2021.114877","article-title":"Deep monocular depth estimation leveraging a large-scale outdoor stereo dataset","volume":"178","author":"Cho Jaehoon","year":"2021","unstructured":"Jaehoon Cho, Dongbo Min, Youngjung Kim, and Kwanghoon Sohn. 2021. Deep monocular depth estimation leveraging a large-scale outdoor stereo dataset. Expert Syst. Appl. 178 (2021), 114877.","journal-title":"Expert Syst. Appl."},{"doi-asserted-by":"publisher","key":"e_1_3_1_8_2","DOI":"10.1109\/TIM.2024.3381168"},{"doi-asserted-by":"publisher","key":"e_1_3_1_9_2","DOI":"10.1109\/CVPR52688.2022.00202"},{"key":"e_1_3_1_10_2","first-page":"248","volume-title":"Proc. IEEE Conf. Comput. Vis. Pattern Recognit","author":"Deng Jia","year":"2009","unstructured":"Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. ImageNet: A large-scale hierarchical image database. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. IEEE, 248\u2013255."},{"doi-asserted-by":"publisher","key":"e_1_3_1_11_2","DOI":"10.1109\/TPAMI.2021.3104172"},{"issue":"5","key":"e_1_3_1_12_2","first-page":"2545","article-title":"Hierarchical features driven residual learning for depth map super-resolution","volume":"28","author":"Guo Chunle","year":"2018","unstructured":"Chunle Guo, Chongyi Li, Jichang Guo, Runmin Cong, Huazhu Fu, and Ping Han. 2018. Hierarchical features driven residual learning for depth map super-resolution. IEEE Trans. Image Process. 28, 5 (2018), 2545\u20132557.","journal-title":"IEEE Trans. Image Process."},{"issue":"1","key":"e_1_3_1_13_2","first-page":"192","article-title":"Robust guided image filtering using nonconvex potentials","volume":"40","author":"Ham Bumsub","year":"2017","unstructured":"Bumsub Ham, Minsu Cho, and Jean Ponce. 2017. Robust guided image filtering using nonconvex potentials. IEEE Trans. Pattern Anal. Mach. Intell. 40, 1 (2017), 192\u2013207.","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"doi-asserted-by":"publisher","key":"e_1_3_1_14_2","DOI":"10.1109\/TPAMI.2019.2954885"},{"doi-asserted-by":"publisher","key":"e_1_3_1_15_2","DOI":"10.1109\/TPAMI.2012.213"},{"doi-asserted-by":"publisher","key":"e_1_3_1_16_2","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_1_17_2","first-page":"9229","volume-title":"Proc. IEEE Conf. Comput. Vis. Pattern Recognit.","author":"He Lingzhi","year":"2021","unstructured":"Lingzhi He, Hongguang Zhu, Feng Li, Huihui Bai, Runmin Cong, Chunjie Zhang, Chunyu Lin, Meiqin Liu, and Yao Zhao. 2021. Towards fast and accurate real-world depth super-resolution: Benchmark dataset and baseline. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 9229\u20139238."},{"key":"e_1_3_1_18_2","first-page":"1","volume-title":"Proc. IEEE Conf. Comput. Vis. Pattern Recognit","author":"Hirschmuller Heiko","year":"2007","unstructured":"Heiko Hirschmuller and Daniel Scharstein. 2007. Evaluation of cost functions for stereo matching. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. IEEE, 1\u20138."},{"doi-asserted-by":"publisher","key":"e_1_3_1_19_2","DOI":"10.1016\/j.cviu.2009.11.004"},{"key":"e_1_3_1_20_2","first-page":"353","volume-title":"Proc. Eur. Conf. Comput. Vis.","author":"Hui Tak-Wai","year":"2016","unstructured":"Tak-Wai Hui, Chen Change Loy, and Xiaoou Tang. 2016. Depth map super-resolution by deep multi-scale guidance. In Proc. Eur. Conf. Comput. Vis. Springer, 353\u2013369."},{"key":"e_1_3_1_21_2","first-page":"21983","volume-title":"Proc. IEEE Conf. Comput. Vis. Pattern Recognit.","author":"Jia Xiaosong","year":"2023","unstructured":"Xiaosong Jia, Penghao Wu, Li Chen, Jiangwei Xie, Conghui He, Junchi Yan, and Hongyang Li. 2023. Think twice before driving: Towards scalable decoders for end-to-end autonomous driving. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 21983\u201321994."},{"doi-asserted-by":"publisher","key":"e_1_3_1_22_2","DOI":"10.1016\/j.image.2020.116040"},{"key":"e_1_3_1_23_2","volume-title":"Proc. Adv. Neural Inf. Process. Syst.","volume":"30","author":"Kendall Alex","year":"2017","unstructured":"Alex Kendall and Yarin Gal. 2017. What uncertainties do we need in Bayesian deep learning for computer vision? Proc. Adv. Neural Inf. Process. Syst. 30 (2017)."},{"doi-asserted-by":"publisher","key":"e_1_3_1_24_2","DOI":"10.1007\/s11263-020-01386-z"},{"key":"e_1_3_1_25_2","first-page":"992","volume-title":"Proc. IEEE Int. Conf. Image Process","author":"Kim Sunok","year":"2017","unstructured":"Sunok Kim, Dongbo Min, Bumsub Ham, Seungryong Kim, and Kwanghoon Sohn. 2017. Deep stereo confidence prediction for depth estimation. In Proc. IEEE Int. Conf. Image Process. IEEE, 992\u2013996."},{"doi-asserted-by":"publisher","key":"e_1_3_1_26_2","DOI":"10.1109\/TIP.2016.2601262"},{"doi-asserted-by":"publisher","key":"e_1_3_1_27_2","DOI":"10.1109\/TIP.2018.2836318"},{"key":"e_1_3_1_28_2","volume-title":"Proc. Int. Conf. Learn. Represent","author":"Kingma Diederik P.","year":"2014","unstructured":"Diederik P. Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. In Proc. Int. Conf. Learn. Represent."},{"issue":"3","key":"e_1_3_1_29_2","doi-asserted-by":"crossref","first-page":"96","DOI":"10.1145\/1276377.1276497","article-title":"Joint bilateral upsampling","volume":"26","author":"Kopf Johannes","year":"2007","unstructured":"Johannes Kopf, Michael F. Cohen, Dani Lischinski, and Matt Uyttendaele. 2007. Joint bilateral upsampling. ACM Trans. Graph. 26, 3 (2007), 96\u2013es.","journal-title":"ACM Trans. Graph."},{"doi-asserted-by":"publisher","key":"e_1_3_1_30_2","DOI":"10.1145\/3568678"},{"doi-asserted-by":"publisher","key":"e_1_3_1_31_2","DOI":"10.1016\/j.patcog.2020.107513"},{"key":"e_1_3_1_32_2","first-page":"154","volume-title":"Proc. Eur. Conf. Comput. Vis.","author":"Li Yijun","year":"2016","unstructured":"Yijun Li, Jia-Bin Huang, Narendra Ahuja, and Ming-Hsuan Yang. 2016. Deep joint image filtering. In Proc. Eur. Conf. Comput. Vis. Springer, 154\u2013169."},{"doi-asserted-by":"publisher","key":"e_1_3_1_33_2","DOI":"10.1109\/TPAMI.2018.2890623"},{"key":"e_1_3_1_34_2","first-page":"169","volume-title":"Proc. IEEE Conf. Comput. Vis. Pattern Recognit.","author":"Liu Ming-Yu","year":"2013","unstructured":"Ming-Yu Liu, Oncel Tuzel, and Yuichi Taguchi. 2013. Joint geodesic upsampling of depth images. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 169\u2013176."},{"key":"e_1_3_1_35_2","first-page":"6368","volume-title":"Proc. IEEE Conf. Comput. Vis. Pattern Recognit.","author":"Lu Liying","year":"2021","unstructured":"Liying Lu, Wenbo Li, Xin Tao, Jiangbo Lu, and Jiaya Jia. 2021. MASA-SR: Matching acceleration and spatial adaptation for reference-based image super-resolution. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 6368\u20136377."},{"key":"e_1_3_1_36_2","first-page":"8829","volume-title":"Proc. IEEE Int. Conf. Comput. Vis.","author":"Lutio Riccardo de","year":"2019","unstructured":"Riccardo de Lutio, Stefano D\u2019aronco, Jan Dirk Wegner, and Konrad Schindler. 2019. Guided super-resolution as pixel-to-pixel transformation. In Proc. IEEE Int. Conf. Comput. Vis., 8829\u20138837."},{"key":"e_1_3_1_37_2","first-page":"18237","volume-title":"Proc. IEEE Conf. Comput. Vis. Pattern Recognit.","author":"Metzger Nando","year":"2023","unstructured":"Nando Metzger, Rodrigo Caye Daudt, and Konrad Schindler. 2023. Guided depth super-resolution by deep anisotropic diffusion. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 18237\u201318246."},{"key":"e_1_3_1_38_2","first-page":"16398","volume-title":"Proc. Adv. Neural Inf. Process. Syst.","volume":"34","author":"Ning Qian","year":"2021","unstructured":"Qian Ning, Weisheng Dong, Xin Li, Jinjian Wu, and Guangming Shi. 2021. Uncertainty-driven loss for single image super-resolution. Proc. Adv. Neural Inf. Process. Syst. 34 (2021), 16398\u201316409."},{"key":"e_1_3_1_39_2","volume-title":"Proc. Adv. Neural Inf. Process. Syst.","volume":"32","author":"Paszke Adam","year":"2019","unstructured":"Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, et al. 2019. PyTorch: An imperative style, high-performance deep learning library. Proc. Adv. Neural Inf. Process. Syst. 32 (2019)."},{"key":"e_1_3_1_40_2","first-page":"13015","volume-title":"Proc. IEEE Conf. Comput. Vis. Pattern Recognit","author":"Peng Wanli","year":"2020","unstructured":"Wanli Peng, Hao Pan, He Liu, and Yi Sun. 2020. IDA-3D: Instance-depth-aware 3D object detection from stereo vision for autonomous driving. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit, 13015\u201313024."},{"key":"e_1_3_1_41_2","first-page":"234","volume-title":"Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent","author":"Ronneberger Olaf","year":"2015","unstructured":"Olaf Ronneberger, Philipp Fischer, and Thomas Brox. 2015. U-Net: Convolutional networks for biomedical image segmentation. In Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent. Springer, 234\u2013241."},{"key":"e_1_3_1_42_2","doi-asserted-by":"crossref","first-page":"31","DOI":"10.1007\/978-3-319-11752-2_3","volume-title":"Proc. German Conf. Pattern Recognit.","author":"Scharstein Daniel","year":"2014","unstructured":"Daniel Scharstein, Heiko Hirschm\u00fcller, York Kitajima, Greg Krathwohl, Nera Ne\u0161i\u0107, Xi Wang, and Porter Westling. 2014. High-resolution stereo datasets with subpixel-accurate ground truth. In Proc. German Conf. Pattern Recognit. Springer, 31\u201342."},{"key":"e_1_3_1_43_2","first-page":"1","volume-title":"Proc. IEEE Conf. Comput. Vis. Pattern Recognit","author":"Scharstein Daniel","year":"2007","unstructured":"Daniel Scharstein and Chris Pal. 2007. Learning conditional random fields for stereo. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. IEEE, 1\u20138."},{"key":"e_1_3_1_44_2","first-page":"746","volume-title":"Proc. Eur. Conf. Comput. Vis.","volume":"7576","author":"Silberman Nathan","year":"2012","unstructured":"Nathan Silberman, Derek Hoiem, Pushmeet Kohli, and Rob Fergus. 2012. Indoor segmentation and support inference from RGBD images. Proc. Eur. Conf. Comput. Vis. 7576 (2012), 746\u2013760."},{"doi-asserted-by":"publisher","key":"e_1_3_1_45_2","DOI":"10.1145\/3429285"},{"issue":"4","key":"e_1_3_1_46_2","first-page":"1","article-title":"Characters link shots: Character attention network for movie scene segmentation","volume":"20","author":"Tan Jiawei","year":"2023","unstructured":"Jiawei Tan, Hongxing Wang, and Junsong Yuan. 2023. Characters link shots: Character attention network for movie scene segmentation. ACM Trans. Multimedia Comput. Commun. Appl. 20, 4 (2023), 1\u201323.","journal-title":"ACM Trans. Multimedia Comput. Commun. Appl."},{"key":"e_1_3_1_47_2","first-page":"4390","volume-title":"Proc. ACM Int. Conf. Multimed.","author":"Tang Jiaxiang","year":"2021","unstructured":"Jiaxiang Tang, Xiaokang Chen, and Gang Zeng. 2021. Joint implicit image function for guided depth super-resolution. In Proc. ACM Int. Conf. Multimed., 4390\u20134399."},{"key":"e_1_3_1_48_2","first-page":"2148","volume-title":"Proc. ACM Int. Conf. Multimed.","author":"Tang Qi","year":"2021","unstructured":"Qi Tang, Runmin Cong, Ronghui Sheng, Lingzhi He, Dan Zhang, Yao Zhao, and Sam Kwong. 2021. BridgeNet: A joint learning network of depth map super-resolution and monocular depth estimation. In Proc. ACM Int. Conf. Multimed., 2148\u20132157."},{"doi-asserted-by":"publisher","key":"e_1_3_1_49_2","DOI":"10.1109\/TIP.2022.3140606"},{"issue":"6","key":"e_1_3_1_50_2","doi-asserted-by":"crossref","first-page":"3304","DOI":"10.1109\/TCSVT.2021.3104151","article-title":"Depth map super-resolution based on dual normal-depth regularization and graph Laplacian prior","volume":"32","author":"Wang Jin","year":"2021","unstructured":"Jin Wang, Longhua Sun, Ruiqin Xiong, Yunhui Shi, Qing Zhu, and Baocai Yin. 2021. Depth map super-resolution based on dual normal-depth regularization and graph Laplacian prior. IEEE Trans. Circuits Syst. Video Technol. 32, 6 (2021), 3304\u20133318.","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"e_1_3_1_51_2","first-page":"5823","article-title":"SGNet: Structure guided network via gradient-frequency awareness for depth map super-resolution","author":"Wang Zhengxue","year":"2024","unstructured":"Zhengxue Wang, Zhiqiang Yan, and Jian Yang. 2024. SGNet: Structure guided network via gradient-frequency awareness for depth map super-resolution. In Proc. AAAI Conf. Artif. Intell., 5823\u20135831.","journal-title":"Proc. AAAI Conf. Artif. Intell."},{"doi-asserted-by":"publisher","key":"e_1_3_1_52_2","DOI":"10.1016\/j.patcog.2020.107274"},{"issue":"2","key":"e_1_3_1_53_2","first-page":"994","article-title":"Deep color guided coarse-to-fine convolutional network cascade for depth image super-resolution","volume":"28","author":"Wen Yang","year":"2018","unstructured":"Yang Wen, Bin Sheng, Ping Li, Weiyao Lin, and David Dagan Feng. 2018. Deep color guided coarse-to-fine convolutional network cascade for depth image super-resolution. IEEE Trans. on Image Process. 28, 2 (2018), 994\u20131006.","journal-title":"IEEE Trans. on Image Process."},{"key":"e_1_3_1_54_2","first-page":"1","volume-title":"Proc. IEEE Conf. Comput. Vis. Pattern Recognit","author":"Yang Qingxiong","year":"2007","unstructured":"Qingxiong Yang, Ruigang Yang, James Davis, and David Nist\u00e9r. 2007. Spatial-depth super resolution for range images. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit. IEEE, 1\u20138."},{"key":"e_1_3_1_55_2","doi-asserted-by":"crossref","first-page":"7427","DOI":"10.1109\/TIP.2020.3002664","article-title":"PMBANet: Progressive multi-branch aggregation network for scene depth super-resolution","volume":"29","author":"Ye Xinchen","year":"2020","unstructured":"Xinchen Ye, Baoli Sun, Zhihui Wang, Jingyu Yang, Rui Xu, Haojie Li, and Baopu Li. 2020. PMBANet: Progressive multi-branch aggregation network for scene depth super-resolution. IEEE Trans. Image Process. 29 (2020), 7427\u20137442.","journal-title":"IEEE Trans. Image Process"},{"doi-asserted-by":"publisher","key":"e_1_3_1_56_2","DOI":"10.1109\/ICASSP.2018.8461357"},{"doi-asserted-by":"publisher","key":"e_1_3_1_57_2","DOI":"10.1145\/3279952"},{"key":"e_1_3_1_58_2","first-page":"5697","volume-title":"Proc. IEEE Conf. Comput. Vis. Pattern Recognit.","author":"Zhao Zixiang","year":"2022","unstructured":"Zixiang Zhao, Jiangshe Zhang, Shuang Xu, Zudi Lin, and Hanspeter Pfister. 2022. Discrete cosine transform network for guided depth map super-resolution. In Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 5697\u20135707."},{"key":"e_1_3_1_59_2","first-page":"3","volume-title":"Proc. Int. Workshop Deep Learn. Med. Image Anal.","author":"Zhou Zongwei","year":"2018","unstructured":"Zongwei Zhou, Md Mahfuzur Rahman Siddiquee, Nima Tajbakhsh, and Jianming Liang. 2018. Unet++: A nested U-Net architecture for medical image segmentation. In Proc. Int. Workshop Deep Learn. Med. Image Anal. Springer, 3\u201311."},{"doi-asserted-by":"publisher","key":"e_1_3_1_60_2","DOI":"10.1109\/TMM.2020.2987706"},{"doi-asserted-by":"publisher","key":"e_1_3_1_61_2","DOI":"10.1109\/TCSVT.2019.2962867"},{"doi-asserted-by":"publisher","key":"e_1_3_1_62_2","DOI":"10.1109\/TMM.2021.3100766"},{"doi-asserted-by":"publisher","key":"e_1_3_1_63_2","DOI":"10.1109\/TIP.2018.2828335"}],
"container-title":["ACM Transactions on Multimedia Computing, Communications, and Applications"],"original-title":[],"language":"en",
"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3702642","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3702642","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],
"deposited":{"date-parts":[[2025,6,19]],"date-time":"2025-06-19T01:18:03Z","timestamp":1750295883000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3702642"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,12,20]]},"references-count":62,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2025,1,31]]}},"alternative-id":["10.1145\/3702642"],"URL":"https:\/\/doi.org\/10.1145\/3702642","relation":{},"ISSN":["1551-6857","1551-6865"],"issn-type":[{"type":"print","value":"1551-6857"},{"type":"electronic","value":"1551-6865"}],"subject":[],"published":{"date-parts":[[2024,12,20]]},
"assertion":[{"value":"2023-11-03","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-10-27","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-12-20","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}