{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,24]],"date-time":"2025-12-24T12:43:09Z","timestamp":1766580189104},"reference-count":23,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2023,10,16]],"date-time":"2023-10-16T00:00:00Z","timestamp":1697414400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,10,16]],"date-time":"2023-10-16T00:00:00Z","timestamp":1697414400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Real-Time Image Proc"],"published-print":{"date-parts":[[2023,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Object detection methods based on deep learning have made great progress in recent years and have been used successfully in many different applications. However, since they have been evaluated predominantly on datasets of natural images, it is still unclear how accurate and effective they can be if used in special domain applications, for example in scientific, industrial, etc. images, where the properties of the images are very different from those taken in natural scenes. In this study, we illustrate the challenges one needs to face in such a setting on a concrete practical application, involving the detection of a particular fluid phenomenon\u2014bag-breakup\u2014in images of droplet scattering, which differ significantly from natural images. Using two technologically mature and state-of-the-art object detection methods, RetinaNet and YOLOv7, we discuss what strategies need to be considered in this problem setting, and perform both quantitative and qualitative evaluations to study their effects. 
Additionally, we also propose a new method to further improve accuracy of detection by utilizing information from several consecutive frames. We hope that the practical insights gained in this study can be of use to other researchers and practitioners when targeting applications where the images differ greatly from natural images.<\/jats:p>","DOI":"10.1007\/s11554-023-01363-y","type":"journal-article","created":{"date-parts":[[2023,10,16]],"date-time":"2023-10-16T12:02:31Z","timestamp":1697457751000},"update-policy":"http:\/\/dx.doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Fast detection of bag-breakups in pulsating and steady airflow using video analysis and deep learning"],"prefix":"10.1007","volume":"20","author":[{"given":"Daiki","family":"Morita","sequence":"first","affiliation":[]},{"given":"Bisser","family":"Raytchev","sequence":"additional","affiliation":[]},{"given":"Abdussalam","family":"Elhanashi","sequence":"additional","affiliation":[]},{"given":"Mikimasa","family":"Kawaguchi","sequence":"additional","affiliation":[]},{"given":"Yoichi","family":"Ogata","sequence":"additional","affiliation":[]},{"given":"Toru","family":"Higaki","sequence":"additional","affiliation":[]},{"given":"Kazufumi","family":"Kaneda","sequence":"additional","affiliation":[]},{"given":"Akira","family":"Nakashima","sequence":"additional","affiliation":[]},{"given":"Sergio","family":"Saponara","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,10,16]]},"reference":[{"issue":"6","key":"1363_CR1","doi-asserted-by":"publisher","first-page":"1170","DOI":"10.1249\/MSS.0000000000002569","volume":"53","author":"H Alessio","year":"2021","unstructured":"Alessio, H., Bassett, D., Bopp, M., Parr, B., Patch, G., Rankin, J., Rojas-Rueda, D., Roti, M., Wojcik, J.: Climate change, air pollution, and physical inactivity: Is active transportation part of the solution? Med. Sci. Sports Exerc. 
53(6), 1170\u20131178 (2021)","journal-title":"Med. Sci. Sports Exerc."},{"key":"1363_CR2","unstructured":"Bochkovskiy, A., Wang, C.Y., Liao, H.Y.M.: Yolov4: Optimal speed and accuracy of object detection. arXiv preprint arXiv:2004.10934 (2020)"},{"key":"1363_CR3","doi-asserted-by":"crossref","unstructured":"Cherapanamjeri, J., Rao, B.N.K.: Neural networks based object detection techniques in computer vision. In: 4th International Conference on Inventive Research in Computing Applications (ICIRCA), pp. 1092\u20131099 (2022)","DOI":"10.1109\/ICIRCA54612.2022.9985581"},{"key":"1363_CR4","doi-asserted-by":"crossref","unstructured":"Ding, X., Zhang, X., Ma, N., Han, J., Ding, G., Sun, J.: Repvgg: Making vgg-style convnets great again. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 13733\u201313742 (2021)","DOI":"10.1109\/CVPR46437.2021.01352"},{"key":"1363_CR5","unstructured":"Everingham, M., Winn, J.: The pascal visual object classes challenge 2012 (voc2012) development kit. Pattern Anal. Stat. Model. Comput. Learn., Tech. Rep 2007, 1\u201345 (2012)"},{"key":"1363_CR6","doi-asserted-by":"publisher","first-page":"128837","DOI":"10.1109\/ACCESS.2019.2939201","volume":"7","author":"L Jiao","year":"2019","unstructured":"Jiao, L., Zhang, F., Liu, F., Yang, S., Li, L., Feng, Z., Qu, R.: A survey of deep learning-based object detection. IEEE Access 7, 128837\u2013128868 (2019)","journal-title":"IEEE Access"},{"key":"1363_CR7","doi-asserted-by":"publisher","DOI":"10.1016\/j.dsp.2022.103812","volume":"132","author":"R Kaur","year":"2023","unstructured":"Kaur, R., Singh, S.: A comprehensive review of object detection with deep learning. Digital Signal Process. 132, 103812 (2023)","journal-title":"Digital Signal Process."},{"key":"1363_CR8","unstructured":"Kingma, D.P., Ba, J.: Adam: A method for stochastic optimization. 
arXiv preprint arXiv:1412.6980 (2014)"},{"key":"1363_CR9","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Doll\u00e1r, P., Girshick, R., He, K., Hariharan, B., Belongie, S.: Feature pyramid networks for object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 2117\u20132125 (2017)","DOI":"10.1109\/CVPR.2017.106"},{"key":"1363_CR10","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Goyal, P., Girshick, R., He, K., Doll\u00e1r, P.: Focal loss for dense object detection. In: Proceedings of the IEEE international conference on computer vision, pp. 2980\u20132988 (2017)","DOI":"10.1109\/ICCV.2017.324"},{"key":"1363_CR11","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., Zitnick, C.L.: Microsoft coco: Common objects in context. In: European conference on computer vision, pp. 740\u2013755. Springer (2014)","DOI":"10.1007\/978-3-319-10602-1_48"},{"issue":"2","key":"1363_CR12","doi-asserted-by":"publisher","first-page":"261","DOI":"10.1007\/s11263-019-01247-4","volume":"128","author":"L Liu","year":"2020","unstructured":"Liu, L., Ouyang, W., Wang, X., Fieguth, P., Chen, J., Liu, X., Pietik\u00e4inen, M.: Deep learning for generic object detection: A survey. Int. J. Comput. Vision 128(2), 261\u2013318 (2020)","journal-title":"Int. J. Comput. Vision"},{"key":"1363_CR13","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., Berg, A.C.: Ssd: Single shot multibox detector. In: European conference on computer vision, pp. 21\u201337. Springer (2016)","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"1363_CR14","unstructured":"Nakada, S., Akiyama, K., Ma, J., Nishida, K., Yamamoto, R., Nakashima, A., Nakamura, K., Marui, K., Nishimura, M., Yokohata, H., Ogata, Y.: Study of breakup on water film sheared by steady and pulsatile air flow in a horizontal rectangular duct. 
The 31st International Symposium on Transport Phenomena, 13-16 October 2020, Honolulu, USA (2020)"},{"key":"1363_CR15","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: Unified, real-time object detection. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779\u2013788 (2016)","DOI":"10.1109\/CVPR.2016.91"},{"key":"1363_CR16","first-page":"5","volume":"28","author":"S Ren","year":"2015","unstructured":"Ren, S., He, K., Girshick, R., Sun, J.: Faster r-cnn: Towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 28, 5 (2015)","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"1363_CR17","doi-asserted-by":"crossref","unstructured":"Saponara, S., Elhanashi, A.: Impact of image resizing on deep learning detectors for training time and model performance. In: Applications in Electronics Pervading Industry, Environment and Society, pp. 10\u201317. Springer International Publishing (2022)","DOI":"10.1007\/978-3-030-95498-7_2"},{"key":"1363_CR18","doi-asserted-by":"crossref","unstructured":"Tan, M., Pang, R., Le, Q.V.: Efficientdet: Scalable and efficient object detection. In: 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 10778\u201310787 (2020)","DOI":"10.1109\/CVPR42600.2020.01079"},{"key":"1363_CR19","doi-asserted-by":"crossref","unstructured":"Wang, C.Y., Bochkovskiy, A., Liao, H.Y.M.: Yolov7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. arXiv preprint arXiv:2207.02696 (2022)","DOI":"10.1109\/CVPR52729.2023.00721"},{"key":"1363_CR20","unstructured":"Wang, C.Y., Liao, H.Y.M., Yeh, I.H.: Designing network design strategies through gradient path analysis. arXiv preprint arXiv:2211.04800 (2022)"},{"key":"1363_CR21","doi-asserted-by":"crossref","unstructured":"Xiao, X., Gao, M.: Overview of climate change, air pollution, and human health. 
In: M.\u00a0Gao, Z.\u00a0Wang, G.\u00a0Carmichael (eds.) Air Pollution, Climate, and Health, pp. 3\u201312. Elsevier (2021)","DOI":"10.1016\/B978-0-12-820123-7.00003-6"},{"key":"1363_CR22","doi-asserted-by":"crossref","unstructured":"Zheng, Z., Wang, P., Liu, W., Li, J., Ye, R., Ren, D.: Distance-iou loss: Faster and better learning for bounding box regression. In: Proceedings of the AAAI conference on artificial intelligence, vol. 34-7, pp. 12993\u201313000 (2020)","DOI":"10.1609\/aaai.v34i07.6999"},{"key":"1363_CR23","unstructured":"Zhu, X., Su, W., Lu, L., Li, B., Wang, X., Dai, J.: Deformable detr: Deformable transformers for end-to-end object detection. arXiv preprint arXiv:2010.04159 (2021)"}],"container-title":["Journal of Real-Time Image Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11554-023-01363-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11554-023-01363-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11554-023-01363-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,11,23]],"date-time":"2023-11-23T16:26:26Z","timestamp":1700756786000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11554-023-01363-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,16]]},"references-count":23,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2023,12]]}},"alternative-id":["1363"],"URL":"https:\/\/doi.org\/10.1007\/s11554-023-01363-y","relation":{},"ISSN":["1861-8200","1861-8219"],"issn-type":[{"value":"1861-8200","type":"print"},{"value":"1861-8219","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,10,16]]},
"assertion":[{"value":"21 January 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 September 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"16 October 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"114"}}