{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,14]],"date-time":"2026-02-14T10:03:42Z","timestamp":1771063422681,"version":"3.50.1"},"reference-count":28,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2021,5,22]],"date-time":"2021-05-22T00:00:00Z","timestamp":1621641600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Applied Sciences"],"abstract":"<jats:p>Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used for recognizing some of the constituent parts of an automobile. A dataset of car engine images was created and eight car parts were marked in the images. Then, the neural network was trained to detect each part. The results show that YOLOv5s is able to successfully detect the parts in real time video streams, with high accuracy, thus being useful as an aid to train professionals learning to deal with new equipment using augmented reality. 
The architecture of an object recognition system using augmented reality glasses is also designed.<\/jats:p>","DOI":"10.3390\/app11114758","type":"journal-article","created":{"date-parts":[[2021,5,23]],"date-time":"2021-05-23T23:58:05Z","timestamp":1621814285000},"page":"4758","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":97,"title":["Augmented Reality Maintenance Assistant Using YOLOv5"],"prefix":"10.3390","volume":"11","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9313-9332","authenticated-orcid":false,"given":"Ana","family":"Malta","sequence":"first","affiliation":[{"name":"Polytechnic of Coimbra, ISEC, 3045-093 Coimbra, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4313-7966","authenticated-orcid":false,"given":"Mateus","family":"Mendes","sequence":"additional","affiliation":[{"name":"Polytechnic of Coimbra, ISEC, 3045-093 Coimbra, Portugal"},{"name":"ISR, University of Coimbra, 3004-531 Coimbra, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9694-8079","authenticated-orcid":false,"given":"Torres","family":"Farinha","sequence":"additional","affiliation":[{"name":"Polytechnic of Coimbra, ISEC, 3045-093 Coimbra, Portugal"},{"name":"Centre for Mechanical Engineering, Materials and Processes\u2014CEMMPRE, 3030-788 Coimbra, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2021,5,22]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"3212","DOI":"10.1109\/TNNLS.2018.2876865","article-title":"Object detection with deep learning: A review","volume":"30","author":"Zhao","year":"2019","journal-title":"IEEE Trans. Neural Netw. Learn. Syst."},{"key":"ref_2","first-page":"1097","article-title":"ImageNet classification with deep convolutional neural networks","volume":"25","author":"Krizhevsky","year":"2012","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_3","unstructured":"Sharma, V. (2020). 
Face Mask Detection Using YOLOv5 for COVID-19. [Master\u2019s Thesis, Master of Science in Computer Science, California State University San Marcos]."},{"key":"ref_4","unstructured":"Tolbert, S.W. (2021, March 15). Artificial Intelligence for Real Time Threat Detection and Monitoring. Available online: https:\/\/stevenwtolbert.com\/pdf\/threat_detection.pdf."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Liu, L., Li, H., and Gruteser, M. (2019, January 21\u201325). Edge assisted real-time object detection for mobile augmented reality. Proceedings of the 25th Annual International Conference on Mobile Computing and Networking, Los Cabos, Mexico.","DOI":"10.1145\/3300061.3300116"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"101887","DOI":"10.1016\/j.rcim.2019.101887","article-title":"Deep learning-based smart task assistance in wearable augmented reality","volume":"63","author":"Park","year":"2020","journal-title":"Robot. Comput. Integr. Manuf."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"15","DOI":"10.3182\/20130619-3-RU-3018.00637","article-title":"Virtual and augmented reality applications in manufacturing","volume":"46","author":"Nee","year":"2013","journal-title":"IFAC Proc. Vol."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Charoenseang, S., and Tonggoed, T. (2011). Human\u2013robot collaboration with augmented reality. International Conference on Human-Computer Interaction, Springer.","DOI":"10.1007\/978-3-642-22095-1_19"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Kim, M., Choi, S.H., Park, K.B., and Lee, J.Y. (2021). A Hybrid Approach to Industrial Augmented Reality Using Deep Learning-Based Facility Segmentation and Depth Prediction. Sensors, 21.","DOI":"10.3390\/s21010307"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Kim, M., Choi, S.H., Park, K.B., and Lee, J.Y. (2019). 
User Interactions for Augmented Reality Smart Glasses: A comparative evaluation of visual contexts and interaction gestures. Appl. Sci., 9.","DOI":"10.3390\/app9153171"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"51","DOI":"10.1016\/j.compind.2018.06.006","article-title":"Situation-dependent remote AR collaborations: Image-based collaboration using a 3D perspective map and live video-based collaboration with a synchronized VR mode","volume":"101","author":"Choi","year":"2018","journal-title":"Comput. Ind."},{"key":"ref_12","unstructured":"Bosch (2021, April 15). Bosch Connected Devices and Solutions GmbH\u2014Interplay of Augmented Reality and IoT. Available online: https:\/\/www.bosch-connectivity.com\/newsroom\/blog\/interplay-of-augmented-reality-and-iot\/."},{"key":"ref_13","unstructured":"Microsoft (2021, April 15). Mixed Reality in der Automobilindustrie oder wie Bosch Mobilit\u00e4t Digitaler Gestaltet. Available online: https:\/\/news.microsoft.com\/de-de\/mixed-reality-in-der-automobilindustrie-oder-wie-bosch-mobilitaet-digitaler-gestaltet\/."},{"key":"ref_14","unstructured":"IBM (2021, April 23). What Is a CMMS? Available online: https:\/\/www.ibm.com\/topics\/what-is-a-cmms."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Farinha, J.M.T. (2018). Asset Maintenance Engineering Methodologies, CRC Press. [1st ed.].","DOI":"10.1201\/9781315232867"},{"key":"ref_16","unstructured":"Johns, R. (2021, February 10). Real Python\u2014PyTorch vs. Tensorflow For Your Python Deep Learning Project. Available online: https:\/\/realpython.com\/pytorch-vs-tensorflow."},{"key":"ref_17","unstructured":"Microsoft (2021, April 21). Microsoft HoloLens 2. Available online: https:\/\/www.microsoft.com\/en-us\/hololens\/hardware."},{"key":"ref_18","unstructured":"Vuzix (2021, April 21). Vuzix\u2014Manufacturing Solutions. Available online: https:\/\/www.vuzix.com\/solutions\/manufacturing."},{"key":"ref_19","unstructured":"(2021, April 10). 
Andreas Geiger. Welcome to the KITTI Vision Benchmark Suite! Available online: http:\/\/www.cvlibs.net\/datasets\/kitti\/."},{"key":"ref_20","unstructured":"(2021, April 10). The Comprehensive Cars (CompCars) Dataset. Available online: http:\/\/mmlab.ie.cuhk.edu.hk\/datasets\/comp_cars\/index.html."},{"key":"ref_21","unstructured":"Analytics India Magazine (2021, February 10). Guide to YOLOv5 for Real-Time Object Detection. Available online: https:\/\/analyticsindiamag.com\/yolov5\/."},{"key":"ref_22","unstructured":"Jocher, G. (2021, April 21). YOLOv5. Available online: https:\/\/github.com\/ultralytics\/yolov5."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Buric, M., Pobar, M., and Ivasic-Kos, M. (2018, January 12\u201314). Ball detection using YOLO and Mask R-CNN. Proceedings of the 2018 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA.","DOI":"10.1109\/CSCI46756.2018.00068"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Simon, M., Amende, K., Kraus, A., Honer, J., Samann, T., Kaulbersch, H., Milz, S., and Michael Gross, H. (2019, January 16\u201317). Complexer-YOLO: Real-time 3D object detection and tracking on semantic point clouds. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA.","DOI":"10.1109\/CVPRW.2019.00158"},{"key":"ref_25","unstructured":"Solawetz, J., and Nelson, J. (2021, April 21). How to Train YOLOv5 on a Custom Dataset. Available online: https:\/\/blog.roboflow.com\/how-to-train-yolov5-on-a-custom-dataset\/."},{"key":"ref_26","unstructured":"Bochkovskiy, A., Wang, C.Y., and Liao, H.Y.M. (2020). YOLOv4: Optimal speed and accuracy of object detection. arXiv."},{"key":"ref_27","unstructured":"Redmon, J., and Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Redmon, J., and Farhadi, A. 
(2017, July 21\u201326). YOLO9000: Better, faster, stronger. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.690"}],"container-title":["Applied Sciences"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2076-3417\/11\/11\/4758\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T06:05:55Z","timestamp":1760162755000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2076-3417\/11\/11\/4758"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,5,22]]},"references-count":28,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2021,6]]}},"alternative-id":["app11114758"],"URL":"https:\/\/doi.org\/10.3390\/app11114758","relation":{},"ISSN":["2076-3417"],"issn-type":[{"value":"2076-3417","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,5,22]]}}}