{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,30]],"date-time":"2026-04-30T13:05:41Z","timestamp":1777554341323,"version":"3.51.4"},"reference-count":46,"publisher":"MDPI AG","issue":"1","license":[{"start":{"date-parts":[[2021,1,5]],"date-time":"2021-01-05T00:00:00Z","timestamp":1609804800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2019R1I1A3A01059082"],"award-info":[{"award-number":["NRF-2019R1I1A3A01059082"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003710","name":"Korea Health Industry Development Institute","doi-asserted-by":"publisher","award":["HI19C0642"],"award-info":[{"award-number":["HI19C0642"]}],"id":[{"id":"10.13039\/501100003710","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Typical AR methods have generic problems such as visual mismatching, incorrect occlusions, and limited augmentation due to the inability to estimate depth from AR images and attaching the AR markers onto physical objects, which prevents the industrial worker from conducting manufacturing tasks effectively. This paper proposes a hybrid approach to industrial AR for complementing existing AR methods using deep learning-based facility segmentation and depth prediction without AR markers and a depth camera. First, the outlines of physical objects are extracted by applying a deep learning-based instance segmentation method to the RGB image acquired from the AR camera. Simultaneously, a depth prediction method is applied to the AR image to estimate the depth map as a 3D point cloud for the detected object. 
Based on the segmented 3D point cloud data, 3D spatial relationships among the physical objects are calculated, which can assist in solving the visual mismatch and occlusion problems properly. In addition, it can deal with a dynamically operating or a moving facility, such as a robot\u2014the conventional AR cannot do so. For these reasons, the proposed approach can be utilized as a hybrid or complementing function to existing AR methods, since it can be activated whenever the industrial worker requires handling of visual mismatches or occlusions. Quantitative and qualitative analyses verify the advantage of the proposed approach compared with existing AR methods. Some case studies also prove that the proposed method can be applied not only to manufacturing but also to other fields. These studies confirm the scalability, effectiveness, and originality of this proposed approach.<\/jats:p>","DOI":"10.3390\/s21010307","type":"journal-article","created":{"date-parts":[[2021,1,5]],"date-time":"2021-01-05T10:35:12Z","timestamp":1609842912000},"page":"307","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":18,"title":["A Hybrid Approach to Industrial Augmented Reality Using Deep Learning-Based Facility Segmentation and Depth Prediction"],"prefix":"10.3390","volume":"21","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-1675-8003","authenticated-orcid":false,"given":"Minseok","family":"Kim","sequence":"first","affiliation":[{"name":"Korea Institute of Science and Technology Information (KISTI), Daejeon 34141, Korea"}]},{"given":"Sung Ho","family":"Choi","sequence":"additional","affiliation":[{"name":"Department of Industrial Engineering, Chonnam National University, Gwangju 61186, Korea"}]},{"given":"Kyeong-Beom","family":"Park","sequence":"additional","affiliation":[{"name":"Department of Industrial Engineering, Chonnam National University, Gwangju 61186, Korea"}]},{"given":"Jae 
Yeol","family":"Lee","sequence":"additional","affiliation":[{"name":"Department of Industrial Engineering, Chonnam National University, Gwangju 61186, Korea"}]}],"member":"1968","published-online":{"date-parts":[[2021,1,5]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Wang, S., Wan, J., Li, D., and Liu, C. (2018). Knowledge reasoning with semantic data for real-time data processing in smart factory. Sensors, 18.","DOI":"10.3390\/s18020471"},{"key":"ref_2","unstructured":"Lorenz, M., Ruessmann, M., Strack, R., Lueth, K.L., and Bolle, M. (2015). Man and machine in industry 4.0: How will technology transform the industrial workforce through 2025. Boston Consult. Group, 2, Available online: http:\/\/hdl.voced.edu.au\/10707\/405644."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Jo, D., and Kim, G.H. (2019). AR enabled IoT for a smart and interactive environment: A survey and future directions. Sensors, 19.","DOI":"10.3390\/s19194330"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Blanco-Novoa, \u00d3., Fraga-Lamas, P., Vilar-Montesinos, M.A., and Fern\u00e1ndez-Caram\u00e9s, T.M. (2020). Creating the internet of augmented things: An open-source framework to make IoT devices and augmented and mixed reality systems talk to each other. Sensors, 20.","DOI":"10.3390\/s20113328"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Kim, M., Choi, S.H., Park, K.-B., and Lee, J.Y. (2019). User interactions for augmented smart glasses: A comparative evaluation of visual contexts and interaction gestures. Appl. Sci., 9.","DOI":"10.3390\/app9153171"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"106585","DOI":"10.1016\/j.cie.2020.106585","article-title":"Deep learning-based mobile augmented reality for task assistance using 3D spatial mapping and snapshot-based RGB-D data","volume":"146","author":"Park","year":"2020","journal-title":"Comput. Ind. 
Eng."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"51","DOI":"10.1016\/j.compind.2018.06.006","article-title":"Situation-dependent remote AR collaborations: Image-based collaboration using a 3D perspective map and live video-based collaboration with a synchronized VR mode","volume":"101","author":"Choi","year":"2018","journal-title":"Comput. Ind."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"2885","DOI":"10.3390\/s100402885","article-title":"Real-time occlusion handling in augmented reality based on an object tracking approach","volume":"10","author":"Tian","year":"2010","journal-title":"Sensors"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Du, C., Chen, Y.L., Ye, M., and Ren, L. (2016, January 19\u201323). Edge snapping-based depth enhancement for dynamic occlusion handling in augmented reality. Proceedings of the ISMAR\u201916, Merida, Mexico.","DOI":"10.1109\/ISMAR.2016.17"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Walton, D.R., and Steed, A. (2017, January 8\u201310). Accurate real-time occlusion for mixed reality. Proceedings of the VRST\u201917, Gothenburg, Sweden.","DOI":"10.1145\/3139131.3139153"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Kasperi, J., Edwardsson, M.P., and Romero, M. (2017, January 8\u201310). Occlusion in outdoor Augmented Reality using geospatial building data. Proceedings of the VRST\u201917, Gothenburg, Sweden.","DOI":"10.1145\/3139131.3139159"},{"key":"ref_12","unstructured":"(2019, December 01). Project Tango Data Handling. Available online: https:\/\/support.google.com\/faqs\/answer\/6122425?hl=en."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Loch, F., Quint, F., and Brishtel, I. (2016, January 14\u201316). Comparing video and augmented reality assistance in manual assembly. 
Proceedings of the 12th International Conference on Intelligent Environments, London, UK.","DOI":"10.1109\/IE.2016.31"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Tang, A., Owen, C., Biocca, F., and Mou, W. (2003, January 5\u201310). Comparative effectiveness of augmented reality in object assembly. In Proceeding of the CHI\u201903, Fort Lauderdale, FL, USA.","DOI":"10.1145\/642625.642626"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"439","DOI":"10.1061\/(ASCE)CP.1943-5487.0000184","article-title":"Using animated augmented reality to cognitively guide assembly","volume":"27","author":"Hou","year":"2013","journal-title":"J. Comput. Civ. Eng."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"649","DOI":"10.1007\/s00170-009-2012-0","article-title":"AR\/RP-based tangible interactions for collaborative design evaluation of digital products","volume":"45","author":"Lee","year":"2009","journal-title":"Int. J. Adv. Manuf. Technol."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"406","DOI":"10.1016\/j.aei.2016.05.004","article-title":"Multi-modal augmented-reality assembly guidance based on bare-hand interface","volume":"30","author":"Wang","year":"2016","journal-title":"Adv. Eng. Inform."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"43","DOI":"10.1016\/j.rcim.2015.12.002","article-title":"Towards a griddable distributed manufacturing system with augmented reality interfaces","volume":"39","author":"Yew","year":"2016","journal-title":"Robot. Comput. Integr. 
Manuf."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"61","DOI":"10.1016\/j.cirp.2016.04.038","article-title":"Augmented reality system for operator support in human\u2013robot collaborative assembly","volume":"65","author":"Makris","year":"2016","journal-title":"Cirp Ann."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"276","DOI":"10.1016\/j.rcim.2018.10.001","article-title":"Towards augmented reality manuals for industry 4.0: A methodology","volume":"56","author":"Gattullo","year":"2019","journal-title":"Robot. Comput. Integr. Manuf."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"370","DOI":"10.1016\/j.procir.2015.12.005","article-title":"Augmented reality (AR) applications for supporting human-robot interactive cooperation","volume":"41","author":"Michalos","year":"2016","journal-title":"Procedia CIRP"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"121","DOI":"10.1007\/s12008-018-0470-z","article-title":"Machine tool setup instructions in the smart factory using augmented reality: A system construction perspective","volume":"13","author":"Tzimas","year":"2018","journal-title":"Int. J. Interact. Des. Manuf."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"100","DOI":"10.1016\/j.eswa.2017.03.060","article-title":"Markerless tracking system for augmented reality in the automotive industry","volume":"82","author":"Lima","year":"2017","journal-title":"Expert Syst. Appl."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"77","DOI":"10.1108\/AA-11-2016-152","article-title":"Mechanical assembly assistance using marker-less augmented reality system","volume":"38","author":"Wang","year":"2018","journal-title":"Assem. Autom."},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"252","DOI":"10.1016\/j.aei.2015.03.005","article-title":"Augmented reality visualization: A review of civil infrastructure system applications","volume":"29","author":"Behzadan","year":"2015","journal-title":"Adv. 
Eng. Inform."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"96","DOI":"10.1016\/j.neucom.2014.12.081","article-title":"Handling occlusions in augmented reality based on 3D reconstruction method","volume":"156","author":"Tian","year":"2015","journal-title":"Neurocomputing"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Zollmann, S., Poglitsch, C., and Ventura, J. (2016, January 21\u201322). VISGIS: Dynamic situated visualization for geographic information systems. Proceedings of the IVCNZ\u201916, Palmerston North, New Zealand.","DOI":"10.1109\/IVCNZ.2016.7804440"},{"key":"ref_28","unstructured":"Kim, M. (2019). A New Approach to Supporting User-Centric Manufacturing Information Recommendation, Visualization and Interaction Using Augmented Reality and Deep Learning. [Ph.D. Thesis, Chonnam National University]."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"52","DOI":"10.1109\/MCOM.2018.1700629","article-title":"Toward dynamic resources management for IoT based manufacturing","volume":"56","author":"Wan","year":"2018","journal-title":"IEEE Commun. Mag."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"1435","DOI":"10.1109\/TII.2014.2306383","article-title":"CCIoT-CMfg: Cloud computing and Internet of Things-based cloud manufacturing service system","volume":"10","author":"Tao","year":"2014","journal-title":"IEEE Trans. Ind. Inform."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"616","DOI":"10.1016\/J.ENG.2017.05.015","article-title":"Intelligent manufacturing in the context of industry 4.0: A review","volume":"3","author":"Zhong","year":"2017","journal-title":"Engineering"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1016\/j.compind.2015.11.003","article-title":"Hybrid reality-based user experience and evaluation of a context-aware smart home","volume":"76","author":"Seo","year":"2016","journal-title":"Comput. 
Ind."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Huo, K., Cao, Y., Yoon, S.H., Xu, Z., Chen, G., and Ramani, K. (2018, January 21\u201326). Scenariot: Spatially mapping smart things within augmented reality scenes. Proceedings of the CHI\u201918, Montreal, QC, Canada.","DOI":"10.1145\/3173574.3173793"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Phupattanasilp, P., and Tong, S.-R. (2019). Augmented reality in the integrative Internet of Things (AR-IoT): Application for precision farming. Sustainability, 11.","DOI":"10.3390\/su11092658"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Cao, Y., Xu, Z., Li, F., Zhong, W., Huo, K., and Ramani, K.V. (2019, January 23\u201328). Ra: An in-situ visual authoring system for robot-IoT task planning with augmented reality. Proceedings of the DISI\u201919, San Diego, CA, USA.","DOI":"10.1145\/3322276.3322278"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"334","DOI":"10.1109\/TCE.2016.7613201","article-title":"ARIoT: Scalable augmented reality framework for interacting with Internet of Things appliances everywhere","volume":"62","author":"Jo","year":"2016","journal-title":"IEEE Trans. Consum. Electron."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"He, K., Gkioxari, G., Doll\u00e1r, P., and Girshick, R. (2017, January 22\u201329). Mask R-CNN. Proceedings of the ICCV\u201917, Venice, Italy.","DOI":"10.1109\/ICCV.2017.322"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Li, Z., and Snavely, N. (2018, January 19\u201321). MegaDepth: Learning single-view depth prediction from internet photos. Proceedings of the CVPR\u201918, Salt Lake City, Utah, USA.","DOI":"10.1109\/CVPR.2018.00218"},{"key":"ref_39","unstructured":"Chen, W., Fu, Z., Yang, D., and Deng, J. (2016, January 5\u201310). Single-image depth perception in the wild. 
Proceedings of the NIPS\u201916, Barcelona, Spain."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","article-title":"Faster R-CNN: Towards real-time object detection with region proposal networks","volume":"39","author":"Ren","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"219","DOI":"10.1007\/BF00977785","article-title":"Two algorithms for constructing a Delaunay triangulation","volume":"9","author":"Lee","year":"1980","journal-title":"Int. J. Comput. Inf. Sci."},{"key":"ref_42","unstructured":"Moreira, A., and Santos, M.Y. (2007, January 8\u201311). Concave hull: A k-nearest neighbours approach for the computation of the region occupied by a set of points. Proceedings of the GRAPP\u201907, Barcelona, Spain."},{"key":"ref_43","unstructured":"(2019, May 01). TensorFlow. Available online: https:\/\/www.tensorflow.org."},{"key":"ref_44","unstructured":"(2019, May 01). Pytorch. Available online: https:\/\/pytorch.org\/."},{"key":"ref_45","unstructured":"(2019, January 01). Unity3D. Available online: https:\/\/unity3d.com."},{"key":"ref_46","unstructured":"(2019, April 01). COCO dataset. 
Available online: http:\/\/cocodataset.org\/#home."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/1\/307\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T05:07:09Z","timestamp":1760159229000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/1\/307"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,1,5]]},"references-count":46,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2021,1]]}},"alternative-id":["s21010307"],"URL":"https:\/\/doi.org\/10.3390\/s21010307","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,1,5]]}}}