{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,27]],"date-time":"2026-01-27T21:22:41Z","timestamp":1769548961636,"version":"3.49.0"},"reference-count":64,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2024,7,4]],"date-time":"2024-07-04T00:00:00Z","timestamp":1720051200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,7,4]],"date-time":"2024-07-04T00:00:00Z","timestamp":1720051200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001871","name":"Funda\u00e7\u00e3o para a Ci\u00eancia e a Tecnologia","doi-asserted-by":"publisher","award":["UIDB\/50021\/2020"],"award-info":[{"award-number":["UIDB\/50021\/2020"]}],"id":[{"id":"10.13039\/501100001871","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100005243","name":"United Nations Educational, Scientific and Cultural 
Organization","doi-asserted-by":"publisher","award":["Chair on AI & XR"],"award-info":[{"award-number":["Chair on AI & XR"]}],"id":[{"id":"10.13039\/100005243","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100005765","name":"Universidade de Lisboa","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100005765","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Real-Time Image Proc"],"published-print":{"date-parts":[[2024,8]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Depth maps produced by consumer-grade sensors suffer from inaccurate measurements and missing data from either system or scene-specific sources. Data-driven denoising algorithms can mitigate such problems; however, they require vast amounts of ground truth depth data. Recent research has tackled this limitation using self-supervised learning techniques, but these techniques require multiple RGB-D sensors. Moreover, most existing approaches focus on denoising single isolated depth maps or specific subjects of interest, highlighting a need for methods that can effectively denoise depth maps in real-time dynamic environments. 
This paper extends state-of-the-art approaches for denoising depth data from commodity depth devices, proposing SelfReDepth, a self-supervised deep learning technique for depth restoration via denoising and inpainting-based hole filling of full depth maps captured with RGB-D sensors. The algorithm targets depth data in video streams, utilizing multiple sequential depth frames coupled with color data to achieve high-quality depth videos with temporal coherence. Finally, SelfReDepth is designed to be compatible with various RGB-D sensors and usable in real-time scenarios as a pre-processing step before applying other depth-dependent algorithms. Our results on real-world datasets demonstrate our approach\u2019s real-time performance, showing that it outperforms state-of-the-art methods in denoising and restoration quality at over 30 fps on commercial depth cameras, with potential benefits for augmented and mixed-reality applications.<\/jats:p>","DOI":"10.1007\/s11554-024-01491-z","type":"journal-article","created":{"date-parts":[[2024,7,4]],"date-time":"2024-07-04T07:01:54Z","timestamp":1720076514000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["SelfReDepth"],"prefix":"10.1007","volume":"21","author":[{"given":"Alexandre","family":"Duarte","sequence":"first","affiliation":[]},{"given":"Francisco","family":"Fernandes","sequence":"additional","affiliation":[]},{"given":"Jo\u00e3o M.","family":"Pereira","sequence":"additional","affiliation":[]},{"given":"Catarina","family":"Moreira","sequence":"additional","affiliation":[]},{"given":"Jacinto C.","family":"Nascimento","sequence":"additional","affiliation":[]},{"given":"Joaquim","family":"Jorge","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,7,4]]},"reference":[{"issue":"5","key":"1491_CR1","doi-asserted-by":"publisher","first-page":"1315","DOI":"10.1109\/TRO.2018.2853742","volume":"34","author":"F 
Basso","year":"2018","unstructured":"Basso, F., Menegatti, E., Pretto, A.: Robust intrinsic and extrinsic calibration of rgb-d cameras. IEEE Trans. Rob. 34(5), 1315\u20131332 (2018)","journal-title":"IEEE Trans. Rob."},{"key":"1491_CR2","unstructured":"Batson, J., Royer, L.: Noise2self: blind denoising by self-supervision. In: Proceedings of the 36th International Conference on Machine Learning, pp. 524\u2013533 (2019)"},{"key":"1491_CR3","doi-asserted-by":"crossref","unstructured":"Bertalmio, M., Bertozzi, A.L., Sapiro, G.: Navier\u2013stokes, fluid dynamics, and image and video inpainting. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, vol.\u00a01, p. I (2001)","DOI":"10.1109\/CVPR.2001.990497"},{"key":"1491_CR4","doi-asserted-by":"crossref","unstructured":"Calvarons, A.F.: Improved noise2noise denoising with limited data. In: IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 796\u2013805 (2021)","DOI":"10.1109\/CVPRW53098.2021.00089"},{"key":"1491_CR5","doi-asserted-by":"crossref","unstructured":"Capecci, M., Ceravolo, M.G., Ferracuti, F., Iarlori, S., Kyrki, V., Longhi, S., Romeo, L., Verdini, F.: Physical rehabilitation exercises assessment based on hidden semi-Markov model by kinect v2. In: IEEE-EMBS International Conference on Biomedical and Health Informatics, pp. 256\u2013259 (2016)","DOI":"10.1109\/BHI.2016.7455883"},{"key":"1491_CR6","unstructured":"Cha, S., Park, T., Kim, B., Baek, J., Moon, T.: Gan2gan: generative noise learning for blind denoising with single noisy images. arXiv preprint arXiv:1905.10488 (2019)"},{"key":"1491_CR7","doi-asserted-by":"publisher","first-page":"89","DOI":"10.1023\/B:JMIV.0000011321.19549.88","volume":"20","author":"A Chambolle","year":"2004","unstructured":"Chambolle, A.: An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 20, 89\u201397 (2004)","journal-title":"J. Math. 
Imaging Vis."},{"key":"1491_CR8","doi-asserted-by":"crossref","unstructured":"Chang, A., Dai, A., Funkhouser, T., Halber, M., Niessner, M., Savva, M., Song, S., Zeng, A., Zhang, Y.: Matterport3d: learning from rgb-d data in indoor environments. In: 2017 International Conference on 3D Vision (2017)","DOI":"10.1109\/3DV.2017.00081"},{"key":"1491_CR9","unstructured":"Chaudhary, R., Dasgupta, H.: An approach for noise removal on depth images. arXiv preprint arXiv:1602.05168 (2016)"},{"key":"1491_CR10","unstructured":"Chen, L., Lin, H., Li, S.: Depth image enhancement for kinect using region growing and bilateral filter. In: ICPR2012, pp. 3070\u20133073 (2012)"},{"key":"1491_CR11","doi-asserted-by":"crossref","unstructured":"Choi, J., Jung, D., Lee, Y., Kim, D., Manocha, D., Lee, D.: Selfdeco: self-supervised monocular depth completion in challenging indoor environments. In: IEEE International Conference on Robotics and Automation, pp. 467\u2013474 (2021)","DOI":"10.1109\/ICRA48506.2021.9560831"},{"issue":"16","key":"1491_CR12","doi-asserted-by":"publisher","first-page":"3460","DOI":"10.3390\/s19163460","volume":"19","author":"Y Dai","year":"2019","unstructured":"Dai, Y., Fu, Y., Li, B., Zhang, X., Yu, T., Wang, W.: A new filtering system for using a consumer depth camera at close range. Sensors 19(16), 3460 (2019)","journal-title":"Sensors"},{"key":"1491_CR13","doi-asserted-by":"crossref","unstructured":"Dewil, V., Anger, J., Davy, A., Ehret, T., Facciolo, G., Arias, P.: Self-supervised training for blind multi-frame video denoising. In: IEEE Winter Conference on Applications of Computer Vision, pp. 2724\u20132734 (2021)","DOI":"10.1109\/WACV48630.2021.00277"},{"key":"1491_CR14","doi-asserted-by":"crossref","unstructured":"Essmaeel, K., Gallo, L., Damiani, E., De\u00a0Pietro, G., Dipanda, A.: Temporal denoising of kinect depth data. In: Eighth International Conference on Signal Image Technology and Internet Based Systems, pp. 47\u201352. 
IEEE (2012)","DOI":"10.1109\/SITIS.2012.18"},{"key":"1491_CR15","doi-asserted-by":"crossref","unstructured":"Feng, D., Rosenbaum, L., Dietmayer, K.: Towards safe autonomous driving: capture uncertainty in the deep neural network for lidar 3d vehicle detection. In: 2018 21st International Conference on Intelligent Transportation Systems, pp. 3266\u20133273 (2018)","DOI":"10.1109\/ITSC.2018.8569814"},{"key":"1491_CR16","unstructured":"Feng, Z., Jing, L., Yin, P., Tian, Y., Li, B.: Advancing self-supervised monocular depth learning with sparse lidar. arXiv preprint arXiv:2109.09628 (2021)"},{"key":"1491_CR17","doi-asserted-by":"crossref","unstructured":"Gabel, M., Gilad-Bachrach, R., Renshaw, E., Schuster, A.: Full body gait analysis with kinect. In: Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pp. 1964\u20131967 (2012)","DOI":"10.1109\/EMBC.2012.6346340"},{"issue":"9","key":"1491_CR18","doi-asserted-by":"publisher","first-page":"24297","DOI":"10.3390\/s150924297","volume":"15","author":"Z Gao","year":"2015","unstructured":"Gao, Z., Yu, Y., Zhou, Y., Du, S.: Leveraging two kinect sensors for accurate full-body motion capture. Sensors 15(9), 24297\u201324317 (2015)","journal-title":"Sensors"},{"key":"1491_CR19","doi-asserted-by":"crossref","unstructured":"Handa, A., Whelan, T., McDonald, J., Davison, A.J.: A benchmark for rgb-d visual odometry, 3d reconstruction and slam. In: IEEE international conference on Robotics and automation, pp. 1524\u20131531. IEEE (2014)","DOI":"10.1109\/ICRA.2014.6907054"},{"key":"1491_CR20","doi-asserted-by":"crossref","unstructured":"Jiang, L., Xiao, S., He, C.: Kinect depth map inpainting using a multi-scale deep convolutional neural network. In: Proceedings of the 2018 International Conference on Image and Graphics Processing, pp. 
91\u201495 (2018)","DOI":"10.1145\/3191442.3191464"},{"key":"1491_CR21","doi-asserted-by":"crossref","unstructured":"Jorge, J., Anjos, R.K.D., Silva, R.: Dynamic occlusion handling for real-time ar applications. In: Proceedings of the 17th International Conference on Virtual-Reality Continuum and Its Applications in Industry (2019)","DOI":"10.1145\/3359997.3365700"},{"key":"1491_CR22","doi-asserted-by":"crossref","unstructured":"Kong, X., Li, K., Yang, Q., Wenyin, L., Yang, M.H.: A new image quality metric for image auto-denoising. In: IEEE International Conference on Computer Vision, pp. 2888\u20132895 (2013)","DOI":"10.1109\/ICCV.2013.359"},{"key":"1491_CR23","doi-asserted-by":"crossref","unstructured":"Krull, A., Buchholz, T.O., Jug, F.: Noise2void\u2014learning denoising from single noisy images. In: IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 2124\u20132132 (2019)","DOI":"10.1109\/CVPR.2019.00223"},{"key":"1491_CR24","doi-asserted-by":"publisher","first-page":"5","DOI":"10.3389\/fcomp.2020.00005","volume":"2","author":"A Krull","year":"2020","unstructured":"Krull, A., Vi\u010dar, T., Prakash, M., Lalit, M., Jug, F.: Probabilistic noise2void: unsupervised content-aware denoising. Front. Comput. Sci. 2, 5 (2020)","journal-title":"Front. Comput. Sci."},{"key":"1491_CR25","unstructured":"Kweon, I.S., Jung, J., Lee, J.Y.: Noise aware depth denoising for a time-of-flight camera. In: 20th Korea\u2013Japan Joint Workshop on Frontiers of Computer Vision (2014)"},{"issue":"10","key":"1491_CR26","doi-asserted-by":"publisher","first-page":"13070","DOI":"10.3390\/rs71013070","volume":"7","author":"E Lachat","year":"2015","unstructured":"Lachat, E., Macher, H., Landes, T., Grussenmeyer, P.: Assessment and calibration of a rgb-d camera (kinect v2 sensor) towards a potential use for close-range 3d modeling. Remote Sens. 
7(10), 13070\u201313097 (2015)","journal-title":"Remote Sens."},{"key":"1491_CR27","unstructured":"Lehtinen, J., Munkberg, J., Hasselgren, J., Laine, S., Karras, T., Aittala, M., Aila, T.: Noise2noise: learning image restoration without clean data. In: International Conference on Machine Learning, pp. 2965\u20132974. PMLR (2018)"},{"key":"1491_CR28","doi-asserted-by":"crossref","unstructured":"Lemarchand, F., Findeli, T., Nogues, E., Pelcat, M.: Noisebreaker: gradual image denoising guided by noise analysis. In: IEEE 22nd International Workshop on Multimedia Signal Processing, pp. 1\u20136 (2020)","DOI":"10.1109\/MMSP48831.2020.9287095"},{"key":"1491_CR29","doi-asserted-by":"crossref","unstructured":"Li, A., Yuan, Z., Ling, Y., Chit, W., Zhang, S., Zhang, C.: Fastcompletion: a cascade network with multiscale group-fused inputs for real-time depth completion. In: 25th International Conference on Pattern Recognition, pp. 866\u2013872 (2021)","DOI":"10.1109\/ICPR48806.2021.9412753"},{"key":"1491_CR30","doi-asserted-by":"crossref","unstructured":"Li, L., Wu, H., Chen, Z.: Depth image restoration method based on improved fmm algorithm. In: 2021 13th International Conference on Machine Learning and Computing, ICMLC 2021, pp. 349\u2013355 (2021)","DOI":"10.1145\/3457682.3457732"},{"key":"1491_CR31","unstructured":"Li, W., Saeedi, S., McCormac, J., Clark, R., Tzoumanikas, D., Ye, Q., Huang, Y., Tang, R., Leutenegger, S.: Interiornet: mega-scale multi-sensor photo-realistic indoor scenes dataset. arXiv preprint arXiv:1809.00716 (2018)"},{"key":"1491_CR32","doi-asserted-by":"crossref","unstructured":"Liu, G., Reda, F.A., Shih, K.J., Wang, T.C., Tao, A., Catanzaro, B.: Image inpainting for irregular holes using partial convolutions. In: Computer Vision\u2014ECCV 2018, pp. 89\u2013105 (2018)","DOI":"10.1007\/978-3-030-01252-6_6"},{"key":"1491_CR33","unstructured":"Liu, J., Gong, X., Liu, J.: Guided inpainting and filtering for kinect depth maps. 
In: Proceedings of the 21st International Conference on Pattern Recognition, pp. 2055\u20132058 (2012)"},{"key":"1491_CR34","doi-asserted-by":"publisher","first-page":"16","DOI":"10.1016\/j.patrec.2014.09.013","volume":"53","author":"J Liu","year":"2015","unstructured":"Liu, J., Liu, Y., Zhang, G., Zhu, P., Chen, Y.Q.: Detecting and tracking people in real time with rgb-d camera. Pattern Recogn. Lett. 53, 16\u201323 (2015)","journal-title":"Pattern Recogn. Lett."},{"key":"1491_CR35","doi-asserted-by":"crossref","unstructured":"Liu, S., Chen, C., Kehtarnavaz, N.: A computationally efficient denoising and hole-filling method for depth image enhancement. In: Real-time image and video processing 2016, vol. 9897, pp. 235\u2013243. SPIE (2016)","DOI":"10.1117\/12.2230495"},{"key":"1491_CR36","doi-asserted-by":"crossref","unstructured":"Ma, F., Cavalheiro, G.V., Karaman, S.: Self-supervised sparse-to-dense: self-supervised depth completion from lidar and monocular camera. In: International Conference on Robotics and Automation, pp. 3288\u20133295 (2019)","DOI":"10.1109\/ICRA.2019.8793637"},{"issue":"7","key":"1491_CR37","doi-asserted-by":"publisher","first-page":"791","DOI":"10.1016\/j.cag.2012.04.011","volume":"36","author":"A Maimone","year":"2012","unstructured":"Maimone, A., Bidwell, J., Peng, K., Fuchs, H.: Enhanced personal autostereoscopic telepresence system using commodity depth cameras. Comput. Graph. 36(7), 791\u2013807 (2012)","journal-title":"Comput. Graph."},{"issue":"6","key":"1491_CR38","doi-asserted-by":"publisher","first-page":"1731","DOI":"10.1109\/JSEN.2014.2309987","volume":"14","author":"T Mallick","year":"2014","unstructured":"Mallick, T., Das, P.P., Majumdar, A.K.: Characterizations of noise in kinect depth images: a review. IEEE Sens. J. 14(6), 1731\u20131740 (2014)","journal-title":"IEEE Sens. 
J."},{"key":"1491_CR39","unstructured":"Metzler, C.A., Mousavi, A., Heckel, R., Baraniuk, R.G.: Unsupervised learning with stein\u2019s unbiased risk estimator. arXiv preprint arXiv:1805.10531 (2018)"},{"key":"1491_CR40","unstructured":"Mohan, S., Vincent, J.L., Manzorro, R., Crozier, P., Fernandez-Granda, C., Simoncelli, E.P.: Adaptive denoising via gaintuning. In: Thirty-Fifth Conference on Neural Information Processing Systems (2021)"},{"key":"1491_CR41","doi-asserted-by":"crossref","unstructured":"Moran, N., Schmidt, D., Zhong, Y., Coady, P.: Noisier2noise: learning to denoise from unpaired noisy data. In: IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 12061\u201312069 (2020)","DOI":"10.1109\/CVPR42600.2020.01208"},{"key":"1491_CR42","doi-asserted-by":"crossref","unstructured":"Newcombe, R.A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A.J., Kohi, P., Shotton, J., Hodges, S., Fitzgibbon, A.: Kinectfusion: real-time dense surface mapping and tracking. In: 10th international symposium on mixed and augmented reality, pp. 127\u2013136 (2011)","DOI":"10.1109\/ISMAR.2011.6092378"},{"key":"1491_CR43","doi-asserted-by":"crossref","unstructured":"Oyedotun, O.K., Demisse, G., El\u00a0Rahman\u00a0Shabayek, A., Aouada, D., Ottersten, B.: Facial expression recognition via joint deep learning of rgb-depth map latent representations. In: IEEE International Conference on Computer Vision Workshops, pp. 3161\u20133168 (2017)","DOI":"10.1109\/ICCVW.2017.374"},{"key":"1491_CR44","doi-asserted-by":"crossref","unstructured":"Papkov, M., Roberts, K., Madissoon, L.A., Shilts, J., Bayraktar, O., Fishman, D., Palo, K., Parts, L.: Noise2stack: improving image restoration by learning from volumetric data. In: International Workshop Machine Learning for Medical Image Reconstruction, pp. 
99\u2013108 (2021)","DOI":"10.1007\/978-3-030-88552-6_10"},{"key":"1491_CR45","doi-asserted-by":"crossref","unstructured":"Quan, Y., Chen, M., Pang, T., Ji, H.: Self2self with dropout: learning self-supervised denoising from single image. In: IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 1887\u20131895 (2020)","DOI":"10.1109\/CVPR42600.2020.00196"},{"key":"1491_CR46","doi-asserted-by":"crossref","unstructured":"Ren, Z., Yuan, J., Zhang, Z.: Robust hand gesture recognition based on finger-earth mover\u2019s distance with a commodity depth camera. In: Proceedings of the 19th international conference on Multimedia, pp. 1093\u20131096 (2011)","DOI":"10.1145\/2072298.2071946"},{"key":"1491_CR47","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., Brox, T.: U-net: convolutional networks for biomedical image segmentation. In: International Conference on Medical image computing and computer-assisted intervention, pp. 234\u2013241 (2015)","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"1491_CR48","doi-asserted-by":"crossref","unstructured":"Sheth, D.Y., Mohan, S., Vincent, J.L., Manzorro, R., Crozier, P.A., Khapra, M.M., Simoncelli, E.P., Fernandez-Granda, C.: Unsupervised deep video denoising. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 1759\u20131768 (2021)","DOI":"10.1109\/ICCV48922.2021.00178"},{"key":"1491_CR49","unstructured":"Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)"},{"issue":"3","key":"1491_CR50","doi-asserted-by":"publisher","first-page":"4357","DOI":"10.1007\/s11042-016-3523-y","volume":"76","author":"W Song","year":"2017","unstructured":"Song, W., Le, A.V., Yun, S., Jung, S.W., Won, C.S.: Depth completion for kinect v2 sensor. Multimed. Tools Appl. 76(3), 4357\u20134380 (2017)","journal-title":"Multimed. 
Tools Appl."},{"key":"1491_CR51","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1\u20139 (2015)","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"1491_CR52","doi-asserted-by":"crossref","unstructured":"Tassano, M., Delon, J., Veit, T.: Fastdvdnet: Towards real-time deep video denoising without flow estimation. In: IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 1354\u20131363 (2020)","DOI":"10.1109\/CVPR42600.2020.00143"},{"issue":"1","key":"1491_CR53","doi-asserted-by":"publisher","first-page":"23","DOI":"10.1080\/10867651.2004.10487596","volume":"9","author":"A Telea","year":"2004","unstructured":"Telea, A.: An image inpainting technique based on the fast marching method. J. Graph. Tools 9(1), 23\u201334 (2004)","journal-title":"J. Graph. Tools"},{"issue":"2","key":"1491_CR54","doi-asserted-by":"publisher","first-page":"413","DOI":"10.3390\/s21020413","volume":"21","author":"M T\u00f6lgyessy","year":"2021","unstructured":"T\u00f6lgyessy, M., Dekan, M., Chovanec, L., Hubinsk\u00fd, P.: Evaluation of the azure kinect and its comparison to kinect v1 and kinect v2. Sensors 21(2), 413 (2021)","journal-title":"Sensors"},{"key":"1491_CR55","doi-asserted-by":"crossref","unstructured":"Tomasi, C., Manduchi, R.: Bilateral filtering for gray and color images. In: 6th International Conference on Computer Vision (IEEE Cat. No. 98CH36271), pp. 839\u2013846 (1998)","DOI":"10.1109\/ICCV.1998.710815"},{"key":"1491_CR56","doi-asserted-by":"publisher","first-page":"e453","DOI":"10.7717\/peerj.453","volume":"2","author":"S Van der Walt","year":"2014","unstructured":"Van der Walt, S., Sch\u00f6nberger, J.L., Nunez-Iglesias, J., Boulogne, F., Warner, J.D., Yager, N., Gouillart, E., Yu, T.: scikit-image: image processing in python. 
PeerJ 2, e453 (2014)","journal-title":"PeerJ"},{"key":"1491_CR57","doi-asserted-by":"crossref","unstructured":"Wan, Y., Li, Y., Jiang, J., Xu, B.: Edge voxel erosion for noise removal in 3d point clouds collected by kinect. In: Proceedings of the 2020 2nd International Conference on Image, Video and Signal Processing, pp. 59\u201363 (2020)","DOI":"10.1145\/3388818.3388821"},{"key":"1491_CR58","doi-asserted-by":"crossref","unstructured":"Wasenm\u00fcller, O., Meyer, M., Stricker, D.: Corbs: comprehensive rgb-d benchmark for slam using kinect v2. In: IEEE Winter Conference on Applications of Computer Vision, pp. 1\u20137 (2016)","DOI":"10.1109\/WACV.2016.7477636"},{"key":"1491_CR59","doi-asserted-by":"crossref","unstructured":"Xiong, W., Yu, J., Lin, Z., Yang, J., Lu, X., Barnes, C., Luo, J.: Foreground-aware image inpainting. In: IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 5833\u20135841 (2019)","DOI":"10.1109\/CVPR.2019.00599"},{"key":"1491_CR60","doi-asserted-by":"crossref","unstructured":"Yu, J., Lin, Z., Yang, J., Shen, X., Lu, X., Huang, T.S.: Generative image inpainting with contextual attention. In: IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 5505\u20135514 (2018)","DOI":"10.1109\/CVPR.2018.00577"},{"key":"1491_CR61","doi-asserted-by":"crossref","unstructured":"Zennaro, S., Munaro, M., Milani, S., Zanuttigh, P., Bernardi, A., Ghidoni, S., Menegatti, E.: Performance evaluation of the 1st and 2nd generation kinect for multimedia applications. In: IEEE International Conference on Multimedia and Expo, pp. 1\u20136 (2015)","DOI":"10.1109\/ICME.2015.7177380"},{"key":"1491_CR62","first-page":"417","volume":"4","author":"B Zhang","year":"2007","unstructured":"Zhang, B., Allebach, J.P.: Adaptive bilateral filter for sharpness enhancement and noise removal. IEEE Int. Conf. Image Process. 4, 417\u2013420 (2007)","journal-title":"IEEE Int. Conf. 
Image Process."},{"key":"1491_CR63","doi-asserted-by":"crossref","unstructured":"Zhang, X., Yan, J., Feng, S., Lei, Z., Yi, D., Li, S.Z.: Water filling: Unsupervised people counting via vertical kinect sensor. In: IEEE 9th International Conference on Advanced Video and Signal-based Surveillance, pp. 215\u2013220 (2012)","DOI":"10.1109\/AVSS.2012.82"},{"key":"1491_CR64","unstructured":"Zhou, X.: A study of microsoft kinect calibration. Department of Comp. Science, George Mason University, Fairfax (2012)"}],"container-title":["Journal of Real-Time Image Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11554-024-01491-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11554-024-01491-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11554-024-01491-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,11,23]],"date-time":"2024-11-23T12:03:44Z","timestamp":1732363424000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11554-024-01491-z"}},"subtitle":["Self-supervised real-time depth restoration for consumer-grade sensors"],"short-title":[],"issued":{"date-parts":[[2024,7,4]]},"references-count":64,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2024,8]]}},"alternative-id":["1491"],"URL":"https:\/\/doi.org\/10.1007\/s11554-024-01491-z","relation":{},"ISSN":["1861-8200","1861-8219"],"issn-type":[{"value":"1861-8200","type":"print"},{"value":"1861-8219","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,7,4]]},"assertion":[{"value":"14 September 
2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 June 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 July 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"124"}}