{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,21]],"date-time":"2025-11-21T18:08:07Z","timestamp":1763748487334,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":63,"publisher":"ACM","license":[{"start":{"date-parts":[[2021,11,15]],"date-time":"2021-11-15T00:00:00Z","timestamp":1636934400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Hong Kong Research Grants Council through General Research Fund","award":["14203420"],"award-info":[{"award-number":["14203420"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2021,11,15]]},"DOI":"10.1145\/3485730.3485927","type":"proceedings-article","created":{"date-parts":[[2021,11,11]],"date-time":"2021-11-11T11:41:35Z","timestamp":1636630895000},"page":"302-315","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["UltraDepth"],"prefix":"10.1145","author":[{"given":"Zhiyuan","family":"Xie","sequence":"first","affiliation":[{"name":"The Chinese University of Hong Kong, Hong Kong SAR, China"}]},{"given":"Xiaomin","family":"Ouyang","sequence":"additional","affiliation":[{"name":"The Chinese University of Hong Kong, Hong Kong SAR, China"}]},{"given":"Xiaoming","family":"Liu","sequence":"additional","affiliation":[{"name":"Michigan State University, East Lansing, MI, USA"}]},{"given":"Guoliang","family":"Xing","sequence":"additional","affiliation":[{"name":"The Chinese University of Hong Kong, Hong Kong SAR, China"}]}],"member":"320","published-online":{"date-parts":[[2021,11,15]]},"reference":[{"key":"e_1_3_2_1_1_1","unstructured":"2021. 3D IMAGING WITH ADI TIME OF FLIGHT TECHNOLOGY. https:\/\/www.analog.com\/en\/applications\/technology\/3d-time-of-flight.html.  2021. 
"},{"key":"e_1_3_2_1_2_1","unstructured":"2021. 3D Sensing TOF (Time of Flight) Product Solution. https:\/\/www.gigabyte.com\/Solutions\/3D-Depth-Sensing\/3d-sensing-product-solution."},{"key":"e_1_3_2_1_3_1","unstructured":"2021. Depth Sensors: Precision & Personal Privacy. https:\/\/www.terabee.com\/depth-sensors-precision-personal-privacy\/."},{"key":"e_1_3_2_1_4_1","unstructured":"2021. Helios2: The next generation of time-of-flight. https:\/\/thinklucid.com\/helios-time-of-flight-tof-camera\/."},{"key":"e_1_3_2_1_5_1","unstructured":"2021. Vzense DCAM500 ToF Camera User Manual. https:\/\/991ef858-2cfe-44ad-9f3b-5cd69ed0861f.filesusr.com\/ugd\/9c9dda_d442dc06c23e45c9944689b29932f7f6.pdf."},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073686"},{"key":"e_1_3_2_1_7_1","volume-title":"Fall detection based on body part tracking using a depth camera","author":"Bian Zhen-Peng","year":"2014","unstructured":"Zhen-Peng Bian, Junhui Hou, Lap-Pui Chau, and Nadia Magnenat-Thalmann. 2014. Fall detection based on body part tracking using a depth camera. 
IEEE journal of biomedical and health informatics 19, 2 (2014), 430--439."},{"key":"e_1_3_2_1_8_1","volume-title":"Vijay Bhaskar Semwal, and TK Mandal","author":"Bijalwan Vishwanath","year":"2021","unstructured":"Vishwanath Bijalwan, Vijay Bhaskar Semwal, and TK Mandal. 2021. Fusion of Multi-sensor based Biomechanical Gait Analysis using Vision and Wearable Sensor. IEEE Sensors Journal (2021)."},{"key":"e_1_3_2_1_9_1","volume-title":"Status of the CMOS Image Sensor Industry","author":"Richard LIU","year":"2020","unstructured":"Richard LIU Chenmeijing LIANG, Pierre CAMBOU. 2020. Status of the CMOS Image Sensor Industry 2020. https:\/\/s3.i-micronews.com\/uploads\/2020\/11\/YDR20106-Status-of-the-CMOS-Image-Sensor-Industry-2020_sample.pdf."},{"key":"e_1_3_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/LSENS.2018.2878572"},{"key":"e_1_3_2_1_11_1","unstructured":"DayDayNews. 2020. The civil war for ToF technology is far from over. https:\/\/daydaynews.cc\/en\/technology\/683608.html."},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00482"},{"key":"e_1_3_2_1_13_1","volume-title":"Retinaface: Single-stage dense face localisation in the wild. arXiv preprint arXiv.1905.00641","author":"Deng Jiankang","year":"2019","unstructured":"Jiankang Deng, Jia Guo, Yuxiang Zhou, Jinke Yu, Irene Kotsia, and Stefanos Zafeiriou. 2019. Retinaface: Single-stage dense face localisation in the wild. 
arXiv preprint arXiv.1905.00641 (2019)."},{"key":"e_1_3_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2012.08.007"},{"key":"e_1_3_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/IROS.2010.5649488"},{"key":"e_1_3_2_1_16_1","volume-title":"3-D mapping with an RGB-D camera","author":"Endres Felix","year":"2013","unstructured":"Felix Endres, J\u00fcrgen Hess, J\u00fcrgen Sturm, Daniel Cremers, and Wolfram Burgard. 2013. 3-D mapping with an RGB-D camera. IEEE transactions on robotics 30, 1 (2013), 177--187."},{"key":"e_1_3_2_1_17_1","volume-title":"Rethinking RGB-D salient object detection: Models, data sets, and large-scale benchmarks","author":"Fan Deng-Ping","year":"2020","unstructured":"Deng-Ping Fan, Zheng Lin, Zhao Zhang, Menglong Zhu, and Ming-Ming Cheng. 2020. Rethinking RGB-D salient object detection: Models, data sets, and large-scale benchmarks. 
IEEE Transactions on neural networks and learning systems (2020)."},{"key":"e_1_3_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1117\/1.3070634"},{"key":"e_1_3_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/TCI.2015.2510506"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.3390\/e20010019"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICPR.2016.7899653"},{"key":"e_1_3_2_1_22_1","volume-title":"iToF2dToF: A Robust and Flexible Representation for Data-Driven Time-of-Flight Imaging. arXiv preprint arXiv:2103.07087","author":"Gutierrez-Barragan Felipe","year":"2021","unstructured":"Felipe Gutierrez-Barragan, Huaijin Chen, Mohit Gupta, Andreas Velten, and Jinwei Gu. 2021. iToF2dToF: A Robust and Flexible Representation for Data-Driven Time-of-Flight Imaging. arXiv preprint arXiv:2103.07087 (2021)."},{"volume-title":"Time-of-flight cameras:principles, methods and applications","author":"Hansard Miles","key":"e_1_3_2_1_23_1","unstructured":"Miles Hansard, Seungkyu Lee, Ouk Choi, and Radu Patrice Horaud. 2012. Time-of-flight cameras:principles, methods and applications. Springer Science & Business Media."},{"key":"e_1_3_2_1_24_1","volume-title":"An overview of depth cameras and range scanners based on time-of-flight technologies. Machine vision and applications 27, 7","author":"Horaud Radu","year":"2016","unstructured":"Radu Horaud, Miles Hansard, Georgios Evangelidis, and Cl\u00e9ment M\u00e9nier. 2016. An overview of depth cameras and range scanners based on time-of-flight technologies. 
Machine vision and applications 27, 7 (2016), 1005--1020."},{"key":"e_1_3_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2019.00455"},{"key":"e_1_3_2_1_26_1","volume-title":"Edmond Shu-lim Ho, and Adrian Munteanu","author":"Hu Pengpeng","year":"2021","unstructured":"Pengpeng Hu, Edmond Shu-lim Ho, and Adrian Munteanu. 2021. 3DBodyNet: Fast Reconstruction of 3D Animatable Human Body Shape from a Single Commodity Depth Camera. IEEE Transactions on Multimedia (2021)."},{"key":"e_1_3_2_1_27_1","first-page":"299","article-title":"Modelling scattering distortion in 3D range camera. International Archives of Photogrammetry","volume":"38","author":"Jamtsho Sonam","year":"2010","unstructured":"Sonam Jamtsho and Derek D Lichti. 2010. Modelling scattering distortion in 3D range camera. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences 38, 5 (2010), 299--304.","journal-title":"Remote Sensing and Spatial Information Sciences"},{"key":"e_1_3_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/TII.2013.2251892"},{"key":"e_1_3_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIM.2010.2089190"},{"key":"e_1_3_2_1_30_1","volume-title":"Modelling and compensating internal light scattering in time of flight range cameras. 
The photogrammetric record 27, 138","author":"Karel Wilfried","year":"2012","unstructured":"Wilfried Karel, Sajid Ghuffar, and Norbert Pfeifer. 2012. Modelling and compensating internal light scattering in time of flight range cameras. The photogrammetric record 27, 138 (2012), 155--174."},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1117\/12.791019"},{"key":"e_1_3_2_1_32_1","volume-title":"2014 International conference on computer vision theory and applications (VISAPP)","volume":"2","author":"Kepski Michal","year":"2014","unstructured":"Michal Kepski and Bogdan Kwolek. 2014. Fall detection using ceiling-mounted 3d depth camera. In 2014 International conference on computer vision theory and applications (VISAPP), Vol. 2. IEEE, 640--647."},{"key":"e_1_3_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICAMechS.2018.8506987"},{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-01237-3_20"},{"key":"e_1_3_2_1_35_1","volume-title":"Time-of-flight camera-an introduction. Technical white paper SLOA190B","author":"Larry Li.","year":"2014","unstructured":"Larry Li. 2014. Time-of-flight camera-an introduction. Technical white paper SLOA190B (2014)."},{"key":"e_1_3_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2016.11.019"},{"key":"e_1_3_2_1_37_1","unstructured":"MARKETSANDMARKETS. 2020. Global Time-of-flight (ToF) Sensor Market 2021--2025. 
https:\/\/www.marketsandmarkets.com\/Market-Reports\/time-of-flight-sensor-market-264466295.html."},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1109\/WACV.2017.135"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1364\/JOSAA.23.000800"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/IPSN.2018.00051"},{"key":"e_1_3_2_1_41_1","volume-title":"International Conference on Computer Vision Systems: Proceedings","author":"Mure-Dubois James","year":"2007","unstructured":"James Mure-Dubois and Heinz H\u00fcgli. 2007. Real-time scattering compensation for time-of-flight camera. In International Conference on Computer Vision Systems: Proceedings (2007)."},{"volume-title":"Modeling kinect sensor noise for improved 3d reconstruction and tracking. In 2012 second international conference on 3D imaging, modeling, processing, visualization & transmission","author":"Nguyen Chuong V","key":"e_1_3_2_1_42_1","unstructured":"Chuong V Nguyen, Shahram Izadi, and David Lovell. 2012. Modeling kinect sensor noise for improved 3d reconstruction and tracking. In 2012 second international conference on 3D imaging, modeling, processing, visualization & transmission. 
IEEE, 524--530."},{"volume-title":"Introduction to optics","author":"Pedrotti Frank L","key":"e_1_3_2_1_43_1","unstructured":"Frank L Pedrotti, Leno M Pedrotti, and Leno S Pedrotti. 2017. Introduction to optics. Cambridge University Press."},{"key":"e_1_3_2_1_44_1","unstructured":"PointcloudAI. 2021. DepthEye Pro-VGA Depth camera. http:\/\/pointcloud.ai\/products."},{"key":"e_1_3_2_1_45_1","unstructured":"PointcloudAI. 2021. DepthEye Pro-VGA Depth camera. http:\/\/pointcloud.ai\/products."},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1504\/IJISTA.2008.021303"},{"key":"e_1_3_2_1_47_1","volume-title":"Visual Communications and Image Processing","author":"Razlighi QR","year":"2009","unstructured":"QR Razlighi and N Kehtarnavaz. 2009. A comparison study of image spatial entropy. In Visual Communications and Image Processing 2009, Vol. 7257. International Society for Optics and Photonics, 72571X."},{"key":"e_1_3_2_1_48_1","volume-title":"Yolov3: An incremental improvement. arXiv preprint arXiv","author":"Redmon Joseph","year":"1804","unstructured":"Joseph Redmon and Ali Farhadi. 2018. Yolov3: An incremental improvement. arXiv preprint arXiv 1804.02767 (2018)."},{"key":"e_1_3_2_1_49_1","volume-title":"2012 IEEE Conference on Computer Vision and Pattern Recognition. 
IEEE, 2759--2766","author":"Ren Xiaofeng","year":"2012","unstructured":"Xiaofeng Ren, Liefeng Bo, and Dieter Fox. 2012. Rgb-(d) scene labeling: Features and algorithms. In 2012 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, 2759--2766."},{"key":"e_1_3_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/3384419.3430781"},{"key":"e_1_3_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.3390\/s18061679"},{"key":"e_1_3_2_1_52_1","doi-asserted-by":"publisher","DOI":"10.1177\/0278364917713117"},{"key":"e_1_3_2_1_53_1","volume-title":"Gray level co-occurrence matrices: generalisation and some new features. arXiv preprint arXiv:1205.4831","author":"Unnikrishnan Bino Sebastian V, A","year":"2012","unstructured":"Bino Sebastian V, A Unnikrishnan, and Kannan Balakrishnan. 2012. Gray level co-occurrence matrices: generalisation and some new features. arXiv preprint arXiv:1205.4831 (2012)."},{"key":"e_1_3_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICDAR.2007.4376991"},{"key":"e_1_3_2_1_55_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.28"},{"key":"e_1_3_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2018.2872629"},{"key":"e_1_3_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1109\/IROS.2014.6943136"},{"key":"e_1_3_2_1_58_1","unstructured":"Vzense Technology. 2021. VZense Product Dcam710. 
https:\/\/www.vzense.com\/products."},{"volume-title":"Computer vision and machine learning with RGB-D sensors","author":"Zhang Cha","key":"e_1_3_2_1_59_1","unstructured":"Cha Zhang and Zhengyou Zhang. 2014. Calibration between depth and color sensors for commodity depth cameras. In Computer vision and machine learning with RGB-D sensors. Springer, 47--64."},{"key":"e_1_3_2_1_60_1","volume-title":"Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012)","author":"Zhang Zhong","year":"2012","unstructured":"Zhong Zhang, Weihua Liu, Vangelis Metsis, and Vassilis Athitsos. 2012. A viewpoint-independent statistical method for fall detection. In Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012). 
IEEE, 3626--3630."},{"key":"e_1_3_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.3390\/s18093099"},{"key":"e_1_3_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1109\/TASE.2018.2861382"},{"key":"e_1_3_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1109\/DCOSS.2019.00028"}],"event":{"name":"SenSys '21: The 19th ACM Conference on Embedded Networked Sensor Systems","sponsor":["SIGMETRICS ACM Special Interest Group on Measurement and Evaluation","SIGCOMM ACM Special Interest Group on Data Communication","SIGMOBILE ACM Special Interest Group on Mobility of Systems, Users, Data and Computing","SIGOPS ACM Special Interest Group on Operating Systems","SIGBED ACM Special Interest Group on Embedded Systems","SIGARCH ACM Special Interest Group on Computer Architecture"],"location":"Coimbra Portugal","acronym":"SenSys '21"},"container-title":["Proceedings of the 19th ACM Conference on Embedded Networked Sensor Systems"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3485730.3485927","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3485730.3485927","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T20:12:10Z","timestamp":1750191130000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3485730.3485927"}},"subtitle":["Exposing High-Resolution Texture from Depth Cameras"],"short-title":[],"issued":{"date-parts":[[2021,11,15]]},"references-count":63,"alternative-id":["10.1145\/3485730.3485927","10.1145\/3485730"],"URL":"https:\/\/doi.org\/10.1145\/3485730.3485927","relation":{},"subject":[],"published":{"date-parts":[[2021,11,15]]},"assertion":[{"value":"2021-11-15","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}