{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T13:35:48Z","timestamp":1773840948837,"version":"3.50.1"},"reference-count":60,"publisher":"Wiley","license":[{"start":{"date-parts":[[2023,4,11]],"date-time":"2023-04-11T00:00:00Z","timestamp":1681171200000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Elm Company"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Complexity"],"published-print":{"date-parts":[[2023,4,11]]},"abstract":"<jats:p>Automated assessment of car damage is a major challenge in the auto repair and damage assessment industries. The domain has several application areas, ranging from car assessment companies, such as car rentals and body shops, to accidental damage assessment for car insurance companies. In vehicle assessment, the damage can take many forms, from scratches, minor dents, and major dents to missing parts. Often, the assessment area has a significant level of noise, such as dirt, grease, oil, or rust, which makes accurate identification challenging. Moreover, in the repair industry, identifying a particular part is the first step in obtaining an accurate labor and part assessment, where the presence of different car models, shapes, and sizes makes the task even more challenging for a machine-learning model to perform well. To address these challenges, this study explores and applies various instance segmentation methodologies to determine the best-performing models. This study focuses on two genres of real-time instance segmentation models, namely, SipMask and YOLACT, owing to their industrial significance. These methodologies were evaluated against a previously reported car parts dataset (DSMLR) as well as an internally curated dataset extracted from local car repair workshops. 
The YOLACT-based part localization and segmentation method outperformed other real-time instance mechanisms with an mAP of 66.5. For the workshop repair dataset, SipMask++ reported better accuracy for object detection with a mAP of 57.0, with outcomes for <jats:inline-formula>\n                     <a:math xmlns:a=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" id=\"M1\">\n                        <a:mi mathvariant=\"normal\">A<\/a:mi>\n                        <a:msup>\n                           <a:mrow>\n                              <a:mi mathvariant=\"normal\">P<\/a:mi>\n                           <\/a:mrow>\n                           <a:mrow>\n                              <a:mi mathvariant=\"normal\">I<\/a:mi>\n                              <a:mi mathvariant=\"normal\">o<\/a:mi>\n                              <a:mi mathvariant=\"normal\">U<\/a:mi>\n                              <a:mo>=<\/a:mo>\n                              <a:mo>.<\/a:mo>\n                              <a:mn>50<\/a:mn>\n                           <\/a:mrow>\n                        <\/a:msup>\n                     <\/a:math>\n                  <\/jats:inline-formula> and <jats:inline-formula>\n                     <h:math xmlns:h=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" id=\"M2\">\n                        <h:mi mathvariant=\"normal\">A<\/h:mi>\n                        <h:msup>\n                           <h:mrow>\n                              <h:mi mathvariant=\"normal\">P<\/h:mi>\n                           <\/h:mrow>\n                           <h:mrow>\n                              <h:mi mathvariant=\"normal\">I<\/h:mi>\n                              <h:mi mathvariant=\"normal\">o<\/h:mi>\n                              <h:mi mathvariant=\"normal\">U<\/h:mi>\n                              <h:mo>=<\/h:mo>\n                              <h:mo>.<\/h:mo>\n                              <h:mn>75<\/h:mn>\n                           <\/h:mrow>\n                        <\/h:msup>\n                 
    <\/h:math>\n                  <\/jats:inline-formula> reporting 72.0 and 67.0, respectively, whereas YOLACT was observed to be a better performer for <jats:inline-formula>\n                     <o:math xmlns:o=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" id=\"M3\">\n                        <o:mi mathvariant=\"normal\">A<\/o:mi>\n                        <o:msup>\n                           <o:mrow>\n                              <o:mi mathvariant=\"normal\">P<\/o:mi>\n                           <\/o:mrow>\n                           <o:mrow>\n                              <o:mi>s<\/o:mi>\n                           <\/o:mrow>\n                        <\/o:msup>\n                     <\/o:math>\n                  <\/jats:inline-formula> with 44.0 and 2.6 for object detection and segmentation categories, respectively.<\/jats:p>","DOI":"10.1155\/2023\/6460639","type":"journal-article","created":{"date-parts":[[2023,4,11]],"date-time":"2023-04-11T23:05:08Z","timestamp":1681254308000},"page":"1-16","source":"Crossref","is-referenced-by-count":11,"title":["Real-Time Instance Segmentation Models for Identification of Vehicle Parts"],"prefix":"10.1155","volume":"2023","author":[{"given":"Abdulmalik","family":"Aldawsari","sequence":"first","affiliation":[{"name":"Research Department, R&I Division, Elm Company, Riyadh 12382-4182, Saudi Arabia"}]},{"given":"Syed Adnan","family":"Yusuf","sequence":"additional","affiliation":[{"name":"Research Department, R&I Division, Elm Company, Riyadh 12382-4182, Saudi Arabia"}]},{"given":"Riad","family":"Souissi","sequence":"additional","affiliation":[{"name":"Research Department, R&I Division, Elm Company, Riyadh 12382-4182, Saudi Arabia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7594-7325","authenticated-orcid":true,"given":"Muhammad","family":"AL-Qurishi","sequence":"additional","affiliation":[{"name":"Research Department, R&I Division, Elm Company, Riyadh 12382-4182, Saudi 
Arabia"}]}],"member":"311","reference":[{"key":"1","doi-asserted-by":"crossref","DOI":"10.22161\/ijaers.74.56","article-title":"Deep Learning Models for Visual Inspection on Automotive Assembling Line","author":"M. Mazzetto","year":"2020"},{"key":"2","first-page":"594","article-title":"Partsnet: a unified deep network for automotive engine precision parts defect detection","author":"Z. Qu"},{"key":"3","first-page":"624","article-title":"Automatic instrument segmentation in robot-assisted surgery using deep learning","author":"A. A. Shvets"},{"key":"4","doi-asserted-by":"publisher","DOI":"10.1016\/j.ejrad.2019.01.028"},{"key":"5","first-page":"484","article-title":"Neural body fitting: unifying deep learning and model based human pose and shape estimation","author":"O. Mohamed"},{"key":"6","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2019.02.036"},{"key":"7","doi-asserted-by":"publisher","DOI":"10.1016\/j.measurement.2019.05.027"},{"key":"8","doi-asserted-by":"publisher","DOI":"10.1002\/aps3.11373"},{"issue":"10","key":"9","doi-asserted-by":"crossref","DOI":"10.3390\/electronics9101602","article-title":"Ced-net: crops and weeds segmentation for smart farming using a small cascaded encoder-decoder architecture","volume":"9","author":"A. Khan","year":"2020","journal-title":"Electronics"},{"key":"10","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2018.02.003"},{"key":"11","doi-asserted-by":"publisher","DOI":"10.3390\/rs12101667"},{"key":"12","doi-asserted-by":"publisher","DOI":"10.1016\/j.isprsjprs.2020.01.013"},{"key":"13","first-page":"9204","article-title":"Pointgrid: a deep network for 3d shape understanding","author":"T. Le"},{"key":"14","first-page":"2569","article-title":"SGPN: Similarity group proposal network for 3D point cloud instance segmentation","author":"W. 
Wang"},{"key":"15","doi-asserted-by":"publisher","DOI":"10.1109\/access.2020.3032034"},{"key":"16","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2019.08.002"},{"key":"17","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2021.103545"},{"key":"18","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2020.103225"},{"key":"19","doi-asserted-by":"publisher","DOI":"10.1007\/s12599-009-0088-6"},{"key":"20","article-title":"Mobilenets: efficient convolutional neural networks for mobile vision applications","author":"A. G. Howard","year":"2017"},{"key":"21","first-page":"1421","article-title":"Deepdecision: a mobile deep learning framework for edge video analytics","author":"X. Ran"},{"key":"22","first-page":"580","article-title":"Rich feature hierarchies for accurate object detection and semantic segmentation","author":"R. Girshick"},{"key":"23","first-page":"1440","article-title":"Fast r-cnn","author":"R. Girshick"},{"key":"24","article-title":"Faster r-cnn: towards real-time object detection with region proposal networks","volume":"28","author":"S. Ren","year":"2015","journal-title":"Advances in Neural Information Processing Systems"},{"key":"25","first-page":"21","article-title":"Ssd: single shot multibox detector","author":"W. Liu"},{"key":"26","first-page":"779","article-title":"You only look once: unified, real-time object detection","author":"R. 
Joseph"},{"key":"27","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2021.103585"},{"key":"28","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2021.103559"},{"key":"29","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2021.103482"},{"key":"30","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2021.103583"},{"key":"31","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2021.103551"},{"key":"32","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2019.02.003"},{"key":"33","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2021.103450"},{"key":"34","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2018.11.003"},{"key":"35","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2020.103385"},{"key":"36","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2021.103528"},{"key":"37","first-page":"770","article-title":"Deep residual learning for image recognition","author":"K. He"},{"key":"38","first-page":"2961","article-title":"Mask r-cnn","author":"K. He"},{"key":"39","article-title":"Imagenet classification with deep convolutional neural networks","volume":"25","author":"A. Krizhevsky","year":"2012","journal-title":"Advances in Neural Information Processing Systems"},{"key":"40","article-title":"Very deep convolutional networks for large-scale image recognition","author":"K. Simonyan","year":"2014"},{"key":"41","first-page":"1","article-title":"Going deeper with convolutions","author":"C. Szegedy"},{"key":"42","article-title":"Squeezenet: alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size","author":"F. N. Iandola","year":"2016"},{"key":"43","first-page":"129","article-title":"What is the state of neural network pruning?","volume":"2","author":"B. Davis","year":"2020","journal-title":"Proceedings of machine learning and systems"},{"key":"44","first-page":"9157","article-title":"Yolact: real-time instance segmentation","author":"D. 
Bolya"},{"key":"45","first-page":"1","article-title":"Sipmask: spatial information preservation for fast image and video instance segmentation","volume-title":"European Conference on Computer Vision","author":"J. Cao","year":"2020"},{"key":"46","article-title":"Automotive parts assessment: applying real-time instance-segmentation models to identify vehicle parts","author":"Y. Syed Adnan","year":"2022"},{"key":"47","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2020.103232"},{"key":"48","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2020.103303"},{"key":"49","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2021.103459"},{"key":"50","doi-asserted-by":"publisher","DOI":"10.1109\/34.927467"},{"key":"51","first-page":"1","article-title":"A discriminatively trained, multiscale, deformable part model","author":"P. Felzenszwalb"},{"key":"52","first-page":"2171","article-title":"Scalable bayesian optimization using deep neural networks","author":"S. Jasper"},{"key":"53","article-title":"A survey of quantization methods for efficient neural network inference","author":"A. Gholami","year":"2021"},{"key":"54","article-title":"Learning both weights and connections for efficient neural network","volume":"28","author":"S. Han","year":"2015","journal-title":"Advances in Neural Information Processing Systems"},{"issue":"241","key":"55","first-page":"1","article-title":"Sparsity in deep learning: pruning and growth for efficient inference and training in neural networks","volume":"22","author":"T. Hoefler","year":"2021","journal-title":"Journal of Machine Learning Research"},{"key":"56","doi-asserted-by":"publisher","DOI":"10.1007\/s40747-021-00397-8"},{"key":"57","doi-asserted-by":"publisher","DOI":"10.1016\/j.compind.2018.03.002"},{"key":"58","doi-asserted-by":"publisher","DOI":"10.1109\/tpami.2020.3014297"},{"key":"59","first-page":"2918","article-title":"Simple copy-paste is a strong data augmentation method for instance segmentation","author":"G. 
Ghiasi"},{"key":"60","first-page":"1","article-title":"Evaluation of deep learning algorithms for semantic segmentation of car parts","author":"K. Pasupa","year":"2021","journal-title":"Complex & Intelligent Systems,"}],"container-title":["Complexity"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/complexity\/2023\/6460639.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/complexity\/2023\/6460639.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/complexity\/2023\/6460639.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,4,11]],"date-time":"2023-04-11T23:05:21Z","timestamp":1681254321000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.hindawi.com\/journals\/complexity\/2023\/6460639\/"}},"subtitle":[],"editor":[{"given":"Lingzhong","family":"Guo","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2023,4,11]]},"references-count":60,"alternative-id":["6460639","6460639"],"URL":"https:\/\/doi.org\/10.1155\/2023\/6460639","relation":{},"ISSN":["1099-0526","1076-2787"],"issn-type":[{"value":"1099-0526","type":"electronic"},{"value":"1076-2787","type":"print"}],"subject":[],"published":{"date-parts":[[2023,4,11]]}}}