{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,10]],"date-time":"2026-03-10T05:08:25Z","timestamp":1773119305514,"version":"3.50.1"},"reference-count":35,"publisher":"SAGE Publications","issue":"4","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["AIC"],"published-print":{"date-parts":[[2023,10,13]]},"abstract":"<jats:p>Fire monitoring of fire-prone areas is essential, and in order to meet the requirements of edge deployment and the balance of fire recognition accuracy and speed, we design a lightweight fire recognition network, Conflagration-YOLO. Conflagration-YOLO is constructed by depthwise separable convolution and more attention to fire feature information extraction from a three-dimensional(3D) perspective, which improves the network feature extraction capability, achieves a balance of accuracy and speed, and reduces model parameters. In addition, a new activation function is used to improve the accuracy of fire recognition while minimizing the inference time of the network. All models are trained and validated on a custom fire dataset and fire inference is performed on the CPU. The mean Average Precision(mAP) of the proposed model reaches 80.92%, which has a great advantage compared with Faster R-CNN. Compared with YOLOv3-Tiny, the proposed model decreases the number of parameters by 5.71\u00a0M and improves the mAP by 6.67%. Compared with YOLOv4-Tiny, the number of parameters decreases by 3.54\u00a0M, mAP increases by 8.47%, and inference time decreases by 62.59\u00a0ms. Compared with YOLOv5s, the difference in the number of parameters is nearly twice reduced by 4.45\u00a0M and the inference time is reduced by 41.87\u00a0ms. Compared with YOLOX-Tiny, the number of parameters decreases by 2.5\u00a0M, mAP increases by 0.7%, and inference time decreases by 102.49\u00a0ms. 
Compared with YOLOv7, the number of parameters decreases significantly and the balance of accuracy and speed is achieved. Compared with YOLOv7-Tiny, the number of parameters decreases by 3.64\u00a0M, mAP increases by 0.5%, and inference time decreases by 15.65\u00a0ms. The experiment verifies the superiority and effectiveness of Conflagration-YOLO compared to the state-of-the-art (SOTA) network model. Furthermore, our proposed model and its dimensional variants can be applied to computer vision downstream target detection tasks in other scenarios as required.<\/jats:p>","DOI":"10.3233\/aic-230094","type":"journal-article","created":{"date-parts":[[2023,10,10]],"date-time":"2023-10-10T16:05:14Z","timestamp":1696953914000},"page":"361-376","source":"Crossref","is-referenced-by-count":3,"title":["Conflagration-YOLO: a lightweight object detection architecture for conflagration"],"prefix":"10.1177","volume":"36","author":[{"given":"Ning","family":"Sun","sequence":"first","affiliation":[{"name":"School of Automation, Wuxi University, Wuxi 214105, China"}]},{"given":"Pengfei","family":"Shen","sequence":"additional","affiliation":[{"name":"School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China"}]},{"given":"Xiaoling","family":"Ye","sequence":"additional","affiliation":[{"name":"School of Automation, Nanjing University of Information Science and Technology, Nanjing 210044, China"}]},{"given":"Yifei","family":"Chen","sequence":"additional","affiliation":[{"name":"School of Automation, Wuxi University, Wuxi 214105, China"}]},{"given":"Xiping","family":"Cheng","sequence":"additional","affiliation":[{"name":"Fire and Rescue Detachment, Wuxi, Jiangsu, China"}]},{"given":"Pingping","family":"Wang","sequence":"additional","affiliation":[{"name":"Fire Research Institute, Shanghai, China"}]},{"given":"Jie","family":"Min","sequence":"additional","affiliation":[{"name":"School of Automation, Nanjing University of Information Science and 
Technology, Nanjing 210044, China"}]}],"member":"179","reference":[{"key":"10.3233\/AIC-230094_ref3","doi-asserted-by":"crossref","first-page":"507","DOI":"10.1016\/j.firesaf.2007.01.006","article-title":"Fire detection using smoke and gas sensors","volume":"42","author":"Chen","year":"2007","journal-title":"Fire Saf. J."},{"key":"10.3233\/AIC-230094_ref7","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1016\/j.neunet.2017.12.012","article-title":"Sigmoid-weighted linear units for neural network function approximation in reinforcement learning","volume":"107","author":"Elfwing","year":"2018","journal-title":"Neural Networks"},{"key":"10.3233\/AIC-230094_ref8","doi-asserted-by":"publisher","first-page":"212","DOI":"10.1016\/S0375-9601(00)00725-8","article-title":"Extended tanh-function method and its applications to nonlinear equations","volume":"277","author":"Fan","year":"2000","journal-title":"Physics Letters A"},{"key":"10.3233\/AIC-230094_ref10","doi-asserted-by":"crossref","unstructured":"R.\u00a0Girshick, J.\u00a0Donahue, T.\u00a0Darrell and J.\u00a0Malik, Rich feature hierarchies for accurate object detection and semantic segmentation, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2014, pp.\u00a0580\u2013587.","DOI":"10.1109\/CVPR.2014.81"},{"key":"10.3233\/AIC-230094_ref12","unstructured":"I.\u00a0Goodfellow, D.\u00a0Warde-Farley, M.\u00a0Mirza, A.\u00a0Courville and Y.\u00a0Bengio, Maxout networks, in: International Conference on Machine Learning, 
PMLR, 2013, pp.\u00a01319\u20131327."},{"key":"10.3233\/AIC-230094_ref13","doi-asserted-by":"crossref","unstructured":"J.\u00a0Han and C.\u00a0Moraga, The influence of the sigmoid function parameters on the speed of backpropagation learning, in: International Workshop on Artificial Neural Networks, Springer, 1995, pp.\u00a0195\u2013201.","DOI":"10.1007\/3-540-59497-3_175"},{"key":"10.3233\/AIC-230094_ref14","doi-asserted-by":"crossref","unstructured":"K.\u00a0Han, Y.\u00a0Wang, Q.\u00a0Tian et al., Ghostnet: More features from cheap operations, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp.\u00a01580\u20131589.","DOI":"10.1109\/CVPR42600.2020.00165"},{"key":"10.3233\/AIC-230094_ref15","doi-asserted-by":"publisher","first-page":"1904","DOI":"10.1109\/TPAMI.2015.2389824","article-title":"Spatial pyramid pooling in deep convolutional networks for visual recognition","volume":"37","author":"He","year":"2015","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"10.3233\/AIC-230094_ref16","doi-asserted-by":"crossref","unstructured":"K.\u00a0He, X.\u00a0Zhang, S.\u00a0Ren and J.\u00a0Sun, Delving deep into rectifiers: Surpassing human-level performance on imagenet classification, in: Proceedings of the IEEE International Conference on Computer Vision, 2015, pp.\u00a01026\u20131034.","DOI":"10.1109\/ICCV.2015.123"},{"key":"10.3233\/AIC-230094_ref17","doi-asserted-by":"crossref","unstructured":"J.H.\u00a0Holland, Adaptation in Natural and Artificial Systems: An Introductory Analysis with Applications to Biology, Control, and Artificial Intelligence, MIT Press, 1992.","DOI":"10.7551\/mitpress\/1090.001.0001"},{"key":"10.3233\/AIC-230094_ref18","doi-asserted-by":"crossref","unstructured":"A.\u00a0Howard, M.\u00a0Sandler, G.\u00a0Chu et al., Searching for mobilenetv3, in: Proceedings of the IEEE\/CVF International Conference on Computer Vision, 2019, 
pp.\u00a01314\u20131324.","DOI":"10.1109\/ICCV.2019.00140"},{"key":"10.3233\/AIC-230094_ref20","doi-asserted-by":"crossref","unstructured":"J.\u00a0Hu, L.\u00a0Shen and G.\u00a0Sun, Squeeze-and-excitation networks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp.\u00a07132\u20137141.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"10.3233\/AIC-230094_ref21","doi-asserted-by":"publisher","DOI":"10.3390\/s22145259"},{"key":"10.3233\/AIC-230094_ref22","doi-asserted-by":"publisher","DOI":"10.3390\/jmse9070691"},{"key":"10.3233\/AIC-230094_ref23","doi-asserted-by":"crossref","unstructured":"J.\u00a0Huang, P.\u00a0Zhu, M.\u00a0Geng et al., Range scaling global u-net for perceptual image enhancement on mobile devices, in: Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 2018.","DOI":"10.1007\/978-3-030-11021-5_15"},{"key":"10.3233\/AIC-230094_ref25","doi-asserted-by":"publisher","DOI":"10.3390\/s17020303"},{"key":"10.3233\/AIC-230094_ref26","doi-asserted-by":"crossref","unstructured":"Y.\u00a0Li, Y.\u00a0Chen, X.\u00a0Dai et al., Micronet: Improving image recognition with extremely low flops, in: Proceedings of the IEEE\/CVF International Conference on Computer Vision, 2021, pp.\u00a0468\u2013477.","DOI":"10.1109\/ICCV48922.2021.00052"},{"key":"10.3233\/AIC-230094_ref27","doi-asserted-by":"crossref","unstructured":"F.\u00a0Lin, Z.\u00a0Wang, D.\u00a0Shen et al., Intelligent flame detection based on principal component analysis and support vector machine, in: 2019 Tenth International Conference on Intelligent Control and Information Processing (ICICIP), IEEE, 2020.","DOI":"10.1109\/ICICIP47338.2019.9012179"},{"issue":"02","key":"10.3233\/AIC-230094_ref28","first-page":"250","article-title":"Early flame recognition method based on improved gradient edge feature","volume":"38","author":"Liu","year":"2019","journal-title":"Fire Science and 
Technology"},{"key":"10.3233\/AIC-230094_ref29","doi-asserted-by":"publisher","DOI":"10.3390\/app12147286"},{"key":"10.3233\/AIC-230094_ref30","doi-asserted-by":"publisher","DOI":"10.3390\/app122412876"},{"key":"10.3233\/AIC-230094_ref31","unstructured":"J.\u00a0MacQueen, Classification and analysis of multivariate observations, in: 5th Berkeley Symp. Math. Statist. Probability, 1967, pp.\u00a0281\u2013297."},{"issue":"4","key":"10.3233\/AIC-230094_ref32","doi-asserted-by":"publisher","first-page":"363","DOI":"10.1023\/A:1025378100781","article-title":"Investigation of multi-sensor algorithms for fire detection","volume":"39","author":"Milke","year":"2003","journal-title":"Fire Technology"},{"key":"10.3233\/AIC-230094_ref34","unstructured":"S.\u00a0Ren, K.\u00a0He, R.\u00a0Girshick and J.\u00a0Sun, Faster r-cnn: Towards real-time object detection with region proposal networks, Advances in Neural Information Processing Systems 28 (2015)."},{"key":"10.3233\/AIC-230094_ref35","doi-asserted-by":"crossref","unstructured":"M.\u00a0Sandler, A.\u00a0Howard, M.\u00a0Zhu et al., Mobilenetv2: Inverted residuals and linear bottlenecks, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp.\u00a04510\u20134520.","DOI":"10.1109\/CVPR.2018.00474"},{"key":"10.3233\/AIC-230094_ref36","doi-asserted-by":"crossref","unstructured":"V.\u00a0Sharma, A.\u00a0Diba, D.\u00a0Neven et al., Classification-driven dynamic image enhancement, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp.\u00a04033\u20134041.","DOI":"10.1109\/CVPR.2018.00424"},{"key":"10.3233\/AIC-230094_ref37","doi-asserted-by":"publisher","DOI":"10.3390\/app12147346"},{"key":"10.3233\/AIC-230094_ref38","doi-asserted-by":"crossref","unstructured":"Z.C.\u00a0Wan, Y.\u00a0Zhuo, H.H.\u00a0Jiang et al., Fire detection from images based on single shot MultiBox detector, in: The 10th International Conference on Computer Engineering and Networks, 
Vol.\u00a01274, Springer, 2020.","DOI":"10.1007\/978-981-15-8462-6_36"},{"key":"10.3233\/AIC-230094_ref39","doi-asserted-by":"crossref","unstructured":"C.Y.\u00a0Wang, A.\u00a0Bochkovskiy and H.Y.M.\u00a0Liao, Scaled-yolov4: Scaling cross stage partial network, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp.\u00a013029\u201313038.","DOI":"10.1109\/CVPR46437.2021.01283"},{"key":"10.3233\/AIC-230094_ref41","doi-asserted-by":"crossref","unstructured":"W.\u00a0Wu, J.\u00a0Weng, P.\u00a0Zhang et al., URetinex-Net: Retinex-based deep unfolding network for low-light image enhancement, in: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp.\u00a05901\u20135910.","DOI":"10.1109\/CVPR52688.2022.00581"},{"key":"10.3233\/AIC-230094_ref42","unstructured":"L.\u00a0Yang, R.Y.\u00a0Zhang, L.\u00a0Li and X.\u00a0Xie, SimAM: A simple, parameter-free attention module for convolutional neural networks, in: Proceedings of the 38th International Conference on Machine Learning. 
PMLR, M.\u00a0Marina and Z.\u00a0Tong, eds, Proceedings of Machine Learning Research, 2021, pp.\u00a011863\u201311874."},{"key":"10.3233\/AIC-230094_ref43","doi-asserted-by":"publisher","first-page":"2205","DOI":"10.1137\/18M1166134","article-title":"Binaryrelax: A relaxation approach for training deep neural networks with quantized weights","volume":"11","author":"Yin","year":"2018","journal-title":"SIAM Journal on Imaging Sciences"},{"issue":"06","key":"10.3233\/AIC-230094_ref44","first-page":"234","article-title":"Flame identification algorithm based on improved multi-feature fusion of YCbCr and region growth","volume":"57","author":"Zhang","year":"2020","journal-title":"Laser & Optoelectronics Progress"},{"key":"10.3233\/AIC-230094_ref45","doi-asserted-by":"publisher","DOI":"10.3390\/su14094930"},{"key":"10.3233\/AIC-230094_ref46","doi-asserted-by":"crossref","unstructured":"H.\u00a0Zheng, Z.\u00a0Yang, W.\u00a0Liu, J.\u00a0Liang and Y.\u00a0Li, Improving deep neural networks using softplus units, in: 2015 International Joint Conference on Neural Networks (IJCNN), IEEE, 2015, pp.\u00a01\u20134.","DOI":"10.1109\/IJCNN.2015.7280459"}],"container-title":["AI Communications"],"original-title":[],"link":[{"URL":"https:\/\/content.iospress.com\/download?id=10.3233\/AIC-230094","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,3,10]],"date-time":"2025-03-10T14:56:50Z","timestamp":1741618610000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/full\/10.3233\/AIC-230094"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,13]]},"references-count":35,"journal-issue":{"issue":"4"},"URL":"https:\/\/doi.org\/10.3233\/aic-230094","relation":{},"ISSN":["1875-8452","0921-7126"],"issn-type":[{"value":"1875-8452","type":"electronic"},{"value":"0921-7126","type":"print"}],"subject":[],"published":{"date-parts":[[2023,10,13]]}}}