{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,6]],"date-time":"2026-03-06T23:08:23Z","timestamp":1772838503158,"version":"3.50.1"},"reference-count":35,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2022,8,25]],"date-time":"2022-08-25T00:00:00Z","timestamp":1661385600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Natural Science Foundation of China","award":["62033014"],"award-info":[{"award-number":["62033014"]}]},{"name":"National Natural Science Foundation of China","award":["61673166"],"award-info":[{"award-number":["61673166"]}]},{"name":"Natural Science Foundation of Hunan Province","award":["2021JJ50006"],"award-info":[{"award-number":["2021JJ50006"]}]},{"name":"Natural Science Foundation of Hunan Province","award":["2022JJ50074"],"award-info":[{"award-number":["2022JJ50074"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Entropy"],"abstract":"<jats:p>It is difficult to identify the working conditions of rotary kilns due to the harsh environment inside the kilns. The flame images of the firing zone in a kiln contain a great deal of working condition information, but the flame image sample size is too small to fully extract the key features. 
To address this problem, a method combining transfer learning and an attention mechanism is proposed to extract the key features of flame images: a deep residual network is used as the backbone, and a coordinate attention module is introduced on the feature-map branch to capture position and channel information, further screening the extracted flame image features to improve the extraction ability. At the same time, transfer learning is performed with a model pre-trained on the ImageNet data set, realizing feature transfer and parameter sharing to cope with the difficulty of training on a small sample size. Moreover, the Mish activation function is introduced to reduce the loss of effective information. The experimental results show that, compared with traditional methods, the proposed method improves the working condition recognition accuracy of rotary kilns by about 5%.<\/jats:p>","DOI":"10.3390\/e24091186","type":"journal-article","created":{"date-parts":[[2022,8,25]],"date-time":"2022-08-25T21:28:12Z","timestamp":1661462892000},"page":"1186","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":10,"title":["Working Condition Recognition Based on Transfer Learning and Attention Mechanism for a Rotary Kiln"],"prefix":"10.3390","volume":"24","author":[{"given":"Yuchao","family":"Hu","sequence":"first","affiliation":[{"name":"School of Electrical & Information Engineering, Hunan University of Technology, Zhuzhou 412007, China"},{"name":"College of Railway Transportation, Hunan University of Technology, Zhuzhou 412007, China"}]},{"given":"Weihua","family":"Zheng","sequence":"additional","affiliation":[{"name":"School of Electrical & Information Engineering, Hunan University of Technology, Zhuzhou 412007, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9075-2833","authenticated-orcid":false,"given":"Xin","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Electrical & Information Engineering, Hunan University of Technology, Zhuzhou 412007, China"}]},{"given":"Bin","family":"Qin","sequence":"additional","affiliation":[{"name":"School of Electrical & Information Engineering, Hunan University of Technology, Zhuzhou 412007, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,8,25]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"118656","DOI":"10.1016\/j.energy.2020.118656","article-title":"Burning condition recognition of rotary kiln based on spatiotemporal features of flame video","volume":"211","author":"Chen","year":"2020","journal-title":"Energy"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"271","DOI":"10.1016\/j.isatra.2020.07.010","article-title":"Sintering conditions recognition of rotary kiln based on kernel modification considering class imbalance","volume":"106","author":"Wang","year":"2020","journal-title":"ISA Trans."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Guo, S., Sheng, Y., and Chai, L. (2017, January 18). SVD-Based burning state recognition in rotary kiln using machine learning. 
Proceedings of the 2017 12th IEEE Conference on Industrial Electronics and Applications (ICIEA), Siem Reap, Cambodia.","DOI":"10.1109\/ICIEA.2017.8282832"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"167418","DOI":"10.1016\/j.ijleo.2021.167418","article-title":"Recognition method of cement rotary kiln burning state based on Otsu-Kmeans flame image segmentation and SVM","volume":"243","author":"Zhang","year":"2021","journal-title":"Optik"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"4458","DOI":"10.1109\/ACCESS.2017.2683480","article-title":"Simulated Feedback Mechanism-Based Rotary Kiln Burning State Cognition Intelligence Method","volume":"5","author":"Chen","year":"2017","journal-title":"IEEE Access"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"148","DOI":"10.1109\/TII.2015.2500891","article-title":"Recognition of the Temperature Condition of a Rotary Kiln Using Dynamic Features of a Series of Blurry Flame Images","volume":"12","author":"Chen","year":"2016","journal-title":"IEEE Trans. Ind. Inform."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Mohapatra, M., Parida, A.K., Mallick, P.K., Zymbler, M., and Kumar, S. (2022). Botanical Leaf Disease Detection and Classification Using Convolutional Neural Network: A Hybrid Metaheuristic Enabled Approach. Computers, 11.","DOI":"10.3390\/computers11050082"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Memon, M.S., Kumar, P., and Iqbal, R. (2022). Meta Deep Learn Leaf Disease Identification Model for Cotton Crop. Computers, 11.","DOI":"10.3390\/computers11070102"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"012030","DOI":"10.1088\/1742-6596\/1575\/1\/012030","article-title":"Rotary Kiln Combustion State Recognition Based on Convolutional Neural Network","volume":"1575","author":"Li","year":"2020","journal-title":"J. Phys. Conf. 
Ser."},{"key":"ref_10","first-page":"84","article-title":"Intelligent cognition of rotary kiln burning state based on deep transfer learning","volume":"42","author":"Luan","year":"2019","journal-title":"J. Chongqing Univ."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Firat, O., Cho, K., and Bengio, Y. (2016). Multi-Way, Multilingual Neural Machine Translation with a Shared Attention Mechanism. arXiv.","DOI":"10.18653\/v1\/N16-1101"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Alzahrani, S., Al-Bander, B., and Al-Nuaimy, W. (2022). Attention Mechanism Guided Deep Regression Model for Acne Severity Grading. Computers, 11.","DOI":"10.3390\/computers11030031"},{"key":"ref_13","first-page":"27","article-title":"Recurrent Models of Visual Attention","volume":"3","author":"Mnih","year":"2014","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_14","unstructured":"Ba, J., Mnih, V., and Kavukcuoglu, K. (2014). Multiple Object Recognition with Visual Attention. arXiv."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"2439","DOI":"10.1109\/TIP.2018.2886767","article-title":"Occlusion Aware Facial Expression Recognition Using CNN With Attention Mechanism","volume":"28","author":"Li","year":"2019","journal-title":"IEEE Trans. Image Process."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"42","DOI":"10.1016\/j.patcog.2018.02.026","article-title":"Breast mass classification via deeply integrating the contextual information from multi-view data","volume":"80","author":"Wang","year":"2018","journal-title":"Pattern Recognit."},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Sumbul, G., and Demir, B. (2019). A CNN-RNN Framework with a Novel Patch-Based Multi-Attention Mechanism for Multi-Label Image Classification in Remote Sensing. 
arXiv.","DOI":"10.1109\/IGARSS.2019.8898188"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"10922","DOI":"10.1109\/TIE.2019.2962437","article-title":"An Efficient Convolutional Neural Network Model Based on Object-Level Attention Mechanism for Casting Defect Detection on Radiography Images","volume":"67","author":"Hu","year":"2020","journal-title":"IEEE Trans. Ind. Electron."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"104547","DOI":"10.1016\/j.engappai.2021.104547","article-title":"An efficient unsupervised image quality metric with application for condition recognition in kiln","volume":"107","author":"Wu","year":"2022","journal-title":"Eng. Appl. Artif. Intell."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"3843","DOI":"10.1109\/TII.2021.3118135","article-title":"Combustion Condition Recognition of Coal-Fired Kiln Based on Chaotic Characteristics Analysis of Flame Video","volume":"18","author":"Jiang","year":"2022","journal-title":"IEEE Trans. Ind. Inform."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"7400","DOI":"10.1109\/TIE.2020.3003579","article-title":"A Sintering State Recognition Framework to Integrate Prior Knowledge and Hidden Information Considering Class Imbalance","volume":"68","author":"Wang","year":"2021","journal-title":"IEEE Trans. Ind. Electron."},{"key":"ref_22","unstructured":"Tomasi, C., and Manduchi, R. (1998, January 7). Bilateral filtering for gray and color images. Proceedings of the Sixth International Conference on Computer Vision (IEEE Cat. No.98CH36271), Bombay, India."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Ledig, C., Theis, L., Huszar, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., and Wang, Z. (2017, January 21\u201326). Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.19"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep Residual Learning for Image Recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_25","unstructured":"Misra, D. (2019). Mish: A Self Regularized Non-Monotonic Neural Activation Function. arXiv."},{"key":"ref_26","unstructured":"Wang, Z. (2018). Theoretical Guarantees of Transfer Learning. arXiv."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Hou, Q., Zhou, D., and Feng, J. (2021, January 20\u201325). Coordinate Attention for Efficient Mobile Network Design. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.01350"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201323). Squeeze-and-Excitation Networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Woo, S., Park, J., Lee, J.Y., and Kweon, I.S. (2018, January 8\u201314). CBAM: Convolutional Block Attention Module. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Wang, Q., Wu, B., Zhu, P., Li, P., Zuo, W., and Hu, Q. (2020, January 13\u201319). ECA-Net: Efficient Channel Attention for Deep Convolutional Neural Networks. 
Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01155"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Selvaraju, R.R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. (2017, January 22\u201329). Grad-CAM: Visual Explanations From Deep Networks via Gradient-Based Localization. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.74"},{"key":"ref_32","unstructured":"Krizhevsky, A., Sutskever, I., and Hinton, G.E. (2012). ImageNet Classification with Deep Convolutional Neural Networks. Advances in Neural Information Processing Systems, Curran Associates, Inc."},{"key":"ref_33","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.C. (2018, January 18\u201323). MobileNetV2: Inverted Residuals and Linear Bottlenecks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00474"},{"key":"ref_35","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2021). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. 
arXiv."}],"container-title":["Entropy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1099-4300\/24\/9\/1186\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:15:02Z","timestamp":1760141702000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1099-4300\/24\/9\/1186"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,8,25]]},"references-count":35,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2022,9]]}},"alternative-id":["e24091186"],"URL":"https:\/\/doi.org\/10.3390\/e24091186","relation":{},"ISSN":["1099-4300"],"issn-type":[{"value":"1099-4300","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,8,25]]}}}