{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:52:13Z","timestamp":1760147533158,"version":"build-2065373602"},"reference-count":34,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2023,2,10]],"date-time":"2023-02-10T00:00:00Z","timestamp":1675987200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Natural Science Foundation of China","award":["52275511"],"award-info":[{"award-number":["52275511"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>During the manual grinding of blades, workers estimate the material removal rate from experience by observing the characteristics of the grinding sparks; this leads to low grinding accuracy and low efficiency and affects the processing quality of the blades. As an alternative to recognising spark images with the human eye, we used the deep learning algorithm YOLOv5 to perform target detection on spark images and obtain the spark image regions. First, the spark images generated during the grinding of one turbine blade were collected; some were selected as training samples and the rest as test samples, and all were labelled with LabelImg. The selected images were then trained with YOLOv5 to obtain an optimised model, which was finally used to predict the images of the test set. The proposed method detected spark image regions quickly and accurately, with an average accuracy of 0.995. YOLOv4 was also used to train on and predict the spark images, and the two methods were compared. Our findings show that YOLOv5 is faster and more accurate than the YOLOv4 target detection algorithm and can replace manual observation, laying a foundation for the automatic segmentation of spark images and for a later study of the relationship between the material removal rate and spark images, which has practical value.<\/jats:p>","DOI":"10.3390\/s23042025","type":"journal-article","created":{"date-parts":[[2023,2,13]],"date-time":"2023-02-13T02:14:11Z","timestamp":1676254451000},"page":"2025","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["A Study of an Online Tracking System for Spark Images of Abrasive Belt-Polishing Workpieces"],"prefix":"10.3390","volume":"23","author":[{"given":"Jian","family":"Huang","sequence":"first","affiliation":[{"name":"School of Mechanical and Precision Instrument Engineering, Xi\u2019an University of Technology, Xi\u2019an 710048, China"},{"name":"School of Computer Science, Xijing University, Xi\u2019an 710123, China"}]},{"given":"Guangpeng","family":"Zhang","sequence":"additional","affiliation":[{"name":"School of Mechanical and Precision Instrument Engineering, Xi\u2019an University of Technology, Xi\u2019an 710048, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,2,10]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"302","DOI":"10.1016\/j.jmapro.2020.09.061","article-title":"Multi-information fusion-based belt condition monitoring in grinding process using the improved-Mahalanobis distance and convolutional neural networks","volume":"59","author":"Qi","year":"2020","journal-title":"J. Manuf.
Process."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"114","DOI":"10.1016\/j.jmapro.2020.06.013","article-title":"Modelling and monitoring of abrasive finishing processes using artificial intelligence techniques: A review","volume":"57","author":"Pandiyan","year":"2020","journal-title":"J. Manuf. Process."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"199","DOI":"10.1016\/j.jmapro.2017.11.014","article-title":"In-process tool condition monitoring in compliant abrasive belt grinding process using support vector machine and genetic algorithm","volume":"31","author":"Pandiyan","year":"2018","journal-title":"J. Manuf. Process."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"217","DOI":"10.1007\/s00170-019-04170-7","article-title":"A novel material removal prediction method based on acoustic sensing and ensemble XGBoost learning algorithm for robotic belt grinding of Inconel 718","volume":"105","author":"Gao","year":"2019","journal-title":"Int. J. Adv. Manuf. Technol."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). You only look once: Unified, real-time object detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.91"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Redmon, J., and Farhadi, A. (2017, January 21\u201326). YOLO9000: Better, faster, stronger. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.690"},{"key":"ref_7","unstructured":"Redmon, J., and Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv."},{"key":"ref_8","unstructured":"Bochkovskiy, A., Wang, C.-Y., and Liao, H.-Y.M. (2020). YOLOv4: Optimal Speed and Accuracy of Object Detection.
arXiv."},{"key":"ref_9","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., and Berg, A.C. (2016). Computer Vision-ECCV 2016, Springer."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Girshick, R., Donahue, J., Malik, T.D.J., and Berkeley, U. (2014). Rich feature hierarchies for accurate object detection and semantic segmentation Tech report (v5). arXiv.","DOI":"10.1109\/CVPR.2014.81"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Girshick, R. (2015). Fast R-CNN. arXiv.","DOI":"10.1109\/ICCV.2015.169"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Ren, S., He, K., Girshick, R., and Sun, J. (2016). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. arXiv.","DOI":"10.1109\/TPAMI.2016.2577031"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"He, K., Gkioxari, G., Dollar, P., and Girshick, R. (2018). Mask R-CNN. arXiv.","DOI":"10.1109\/ICCV.2017.322"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"103770","DOI":"10.1016\/j.infrared.2021.103770","article-title":"Adaptive spatial pixel-level feature fusion network for multispectral pedestrian detection","volume":"116","author":"Fu","year":"2021","journal-title":"Infrared Phys. Technol."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Lian, J., Yin, Y., Li, L., Wang, Z., and Zhou, Y. (2021). Small Object Detection in Traffic Scenes Based on Attention Feature Fusion. Sensors, 21.","DOI":"10.3390\/s21093031"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Wenkel, S., Alhazmi, K., Liiv, T., Alrshoud, S., and Simon, M. (2021). Confidence Score: The Forgotten Dimension of Object Detection Performance Evaluation.
Sensors, 21.","DOI":"10.3390\/s21134350"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"5471","DOI":"10.1007\/s00521-019-04645-4","article-title":"Real-time behavior detection and judgment of egg breeders based on YOLO v3","volume":"32","author":"Wang","year":"2020","journal-title":"Neural Comput. Appl."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"3895","DOI":"10.1007\/s00521-021-06651-x","article-title":"A fast accurate fine-grain object detection model based on YOLOv4 deep neural network","volume":"34","author":"Arunabha","year":"2022","journal-title":"Neural Comput. Appl."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"2715","DOI":"10.1007\/s00170-019-04124-z","article-title":"A new in-process material removal rate monitoring approach in abrasive belt grinding","volume":"104","author":"Ren","year":"2019","journal-title":"Int. J. Adv. Manuf. Technol."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"3241","DOI":"10.1007\/s00170-021-06988-6","article-title":"Novel monitoring method for material removal rate considering quantitative wear of abrasive belts based on LightGBM learning algorithm","volume":"114","author":"Wang","year":"2021","journal-title":"Int. J. Adv. Manuf. Technol."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"281","DOI":"10.1016\/j.jmapro.2021.04.014","article-title":"Vision and sound fusion-based material removal rate monitoring for abrasive belt grinding using improved LightGBM algorithm","volume":"66","author":"Wang","year":"2021","journal-title":"J. Manuf. Process."},{"key":"ref_22","first-page":"234","article-title":"Camellia Fruit Detection in Natural Scene Based on YOLO v5s","volume":"53","author":"Huaibo","year":"2022","journal-title":"Trans. Chin. Soc. Agric. Mach."},{"key":"ref_23","unstructured":"Wenliang, W., Yanxiang, L., Yifan, Z., Peng, H., and Shihao, L. (2021). MPANet-YOLOv5: Multi-Path Aggregation Network for Complex Sea Object Detection. J. Hunan Univ. Nat.
Sci."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Rezatofighi, H., Tsoi, N., Gwak, J., Sadeghian, A., Reid, I., and Savarese, S. (2019). Generalized Intersection over Union: A Metric and A Loss for Bounding Box Regression. arXiv.","DOI":"10.1109\/CVPR.2019.00075"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"1225","DOI":"10.1007\/s11517-022-02551-x","article-title":"An improved YOLO Nano model for dorsal hand vein detection system","volume":"60","author":"Tian","year":"2022","journal-title":"Med. Biol. Eng. Comput."},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"2389","DOI":"10.1007\/s11554-021-01131-w","article-title":"A lightweight Tiny-YOLOv3 vehicle detection approach","volume":"18","author":"Tajar","year":"2021","journal-title":"J. Real-Time Image Process."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"9","DOI":"10.1016\/j.jmatprotec.2018.05.013","article-title":"A novel sound-based belt condition monitoring method for robotic grinding using optimally pruned extreme learning machine","volume":"260","author":"Zhang","year":"2018","journal-title":"J. Mater. Process. Tech."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Gai, R., Chen, N., and Yuan, H. (2021). A detection algorithm for cherry fruits based on the improved YOLO-v4 mode. Neural Comput. Appl.","DOI":"10.1007\/s00521-021-06029-z"},{"key":"ref_29","unstructured":"Ting, Z.F. (2021). Research on Target Detection System of Basketball Robot Based on Improved YOLOv5 Algorithm, Chong Qing University."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2015). Deep Residual Learning for Image Recognition. arXiv.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2014). Spatial Pyramid Pooling in Deep Convolutional Networks for Visual Recognition.
arXiv.","DOI":"10.1007\/978-3-319-10578-9_23"},{"key":"ref_32","unstructured":"Wang, C.Y., Liao, H.Y.M., Wu, Y.H., Chen, P.Y., and Yeh, I.H. (2020). Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14\u201319 June 2020, IEEE."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Liu, S., Qi, L., Qin, H., Shi, J., and Jia, J. (2018, January 18\u201323). Path aggregation network for instance segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00913"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Huang, G., Liu, Z., Laurens, V., and Weinberger, K. (2018). Densely Connected Convolutional Networks. arXiv.","DOI":"10.1109\/CVPR.2017.243"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/4\/2025\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T18:30:45Z","timestamp":1760121045000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/4\/2025"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,10]]},"references-count":34,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2023,2]]}},"alternative-id":["s23042025"],"URL":"https:\/\/doi.org\/10.3390\/s23042025","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2023,2,10]]}}}