{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,3,26]],"date-time":"2025-03-26T21:44:09Z","timestamp":1743025449460,"version":"3.40.3"},"publisher-location":"Cham","reference-count":27,"publisher":"Springer Nature Switzerland","isbn-type":[{"type":"print","value":"9783031264375"},{"type":"electronic","value":"9783031264382"}],"license":[{"start":{"date-parts":[[2023,1,1]],"date-time":"2023-01-01T00:00:00Z","timestamp":1672531200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,2,23]],"date-time":"2023-02-23T00:00:00Z","timestamp":1677110400000},"content-version":"vor","delay-in-days":53,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2023]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Responding to a water rescue situation is challenging. First responders need access to data as quickly as possible to increase the likelihood of a successful rescue. Using aerial imagery systems is especially useful in a search and rescue scenario because it provides a higher dimensional view of the search environment. Unmanned aerial vehicles can be easily used to acquire aerial image data. During water-based search and rescue scenarios, first responders sometimes deploy an inflatable marker called a rescue danbuoy. The danbuoy is fitted with a small conical sack known as a drogue, this ensures that the marker is not blown off course by the wind and instead follows the flow of the body of water. Tracking the danbuoy as it moves is of utmost importance in a water rescue. We present a new data-set \u201cVisBuoy\u201d with imagery containing instances of danbuoy markers and boats in real-world water-based settings. 
We also show how, using various deep learning-based computer vision techniques, we can autonomously detect danbuoy instances in aerial imagery. We compare the performance of four state-of-the-art object detectors, Faster R-CNN, RetinaNet, EfficientDet and YOLOv5, on the \u201cVisBuoy\u201d data-set to find the best detector for this task. We then propose a best-performing model, with a precision score of 74%, which can be used in search and rescue operations to detect inflatable danbuoy markers in water-based settings.<\/jats:p>","DOI":"10.1007\/978-3-031-26438-2_27","type":"book-chapter","created":{"date-parts":[[2023,2,22]],"date-time":"2023-02-22T06:32:56Z","timestamp":1677047576000},"page":"344-354","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Aerial Object Detection for\u00a0Water-Based Search &amp; Rescue"],"prefix":"10.1007","author":[{"given":"Eoghan","family":"Mulcahy","sequence":"first","affiliation":[]},{"given":"Pepijn","family":"Van de Ven","sequence":"additional","affiliation":[]},{"given":"John","family":"Nelson","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,2,23]]},"reference":[{"key":"27_CR1","doi-asserted-by":"publisher","unstructured":"Acatay, O., Sommer, L., Schumann, A., Beyerer, J.: Comprehensive evaluation of deep learning based detection methods for vehicle detection in aerial imagery. In: 2018 15th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS) (2018). https:\/\/doi.org\/10.1109\/avss.2018.8639127","DOI":"10.1109\/avss.2018.8639127"},{"key":"27_CR2","doi-asserted-by":"publisher","first-page":"1151","DOI":"10.3390\/electronics11071151","volume":"11","author":"KR Akshatha","year":"2022","unstructured":"Akshatha, K.R., Karunakar, A.K., Shenoy, S.B., Pai, A.K., Nagaraj, N.H., Rohatgi, S.S.: Human detection in aerial thermal images using faster R-CNN and SSD algorithms. 
Electronics 11, 1151 (2022). https:\/\/doi.org\/10.3390\/electronics11071151","journal-title":"Electronics"},{"key":"27_CR3","unstructured":"Biewald, L.: Experiment tracking with weights and biases (2020). https:\/\/www.wandb.com\/"},{"key":"27_CR4","series-title":"Lecture Notes in Computer Science (Lecture Notes in Artificial Intelligence)","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/978-3-540-76928-6_1","volume-title":"AI 2007: Advances in Artificial Intelligence","author":"P Doherty","year":"2007","unstructured":"Doherty, P., Rudol, P.: A UAV search and rescue scenario with human body detection and geolocalization. In: Orgun, M.A., Thornton, J. (eds.) AI 2007. LNCS (LNAI), vol. 4830, pp. 1\u201313. Springer, Heidelberg (2007). https:\/\/doi.org\/10.1007\/978-3-540-76928-6_1"},{"key":"27_CR5","doi-asserted-by":"publisher","unstructured":"Dousai, N.M.K., Lon\u010dari\u0107, S.: Detection of humans in drone images for search and rescue operations. APIT (2021). https:\/\/doi.org\/10.1145\/3449365.3449377","DOI":"10.1145\/3449365.3449377"},{"key":"27_CR6","doi-asserted-by":"publisher","unstructured":"Erdelj, M., Natalizio, E.: UAV-assisted disaster management: applications and open issues. Institute of Electrical and Electronics Engineers Inc. (2016). https:\/\/doi.org\/10.1109\/ICCNC.2016.7440563","DOI":"10.1109\/ICCNC.2016.7440563"},{"key":"27_CR7","doi-asserted-by":"publisher","DOI":"10.1109\/mprv.2017.11","author":"M Erdelj","year":"2017","unstructured":"Erdelj, M., Natalizio, E., Chowdhury, K.R., Akyildiz, I.F.: Help from the sky: leveraging UAVs for disaster management. IEEE Pervasive Comput. (2017). https:\/\/doi.org\/10.1109\/mprv.2017.11","journal-title":"IEEE Pervasive Comput."},{"key":"27_CR8","doi-asserted-by":"publisher","unstructured":"Ezequiel, C.A.F., et al.: UAV aerial imaging applications for post-disaster assessment, environmental management and infrastructure development, pp. 274\u2013283. IEEE Computer Society (2014). 
https:\/\/doi.org\/10.1109\/ICUAS.2014.6842266","DOI":"10.1109\/ICUAS.2014.6842266"},{"key":"27_CR9","unstructured":"Falcon: PyTorch lightning (2022). https:\/\/github.com\/PytorchLightning\/pytorch-lightning"},{"key":"27_CR10","doi-asserted-by":"publisher","unstructured":"Felice, M.D., Trotta, A., Bedogni, L., Chowdhury, K.R., Bononi, L.: Self-organizing aerial mesh networks for emergency communication, vol. 2014-June, pp. 1631\u20131636. Institute of Electrical and Electronics Engineers Inc. (2014). https:\/\/doi.org\/10.1109\/PIMRC.2014.7136429","DOI":"10.1109\/PIMRC.2014.7136429"},{"key":"27_CR11","doi-asserted-by":"publisher","first-page":"89","DOI":"10.1002\/rob.20226","volume":"25","author":"MA Goodrich","year":"2008","unstructured":"Goodrich, M.A., et al.: Supporting wilderness search and rescue using a camera-equipped mini UAV. J. Field Robot. 25, 89\u2013110 (2008). https:\/\/doi.org\/10.1002\/rob.20226","journal-title":"J. Field Robot."},{"key":"27_CR12","doi-asserted-by":"publisher","unstructured":"He, K., Gkioxari, G., Doll\u00e1r, P., Girshick, R.: Mask R-CNN. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2980\u20132988 (2017). https:\/\/doi.org\/10.1109\/ICCV.2017.322","DOI":"10.1109\/ICCV.2017.322"},{"key":"27_CR13","doi-asserted-by":"publisher","unstructured":"Kruijff, G.J.M., et al.: Rescue robots at earthquake-hit Mirandola, Italy: a field report (2012). https:\/\/doi.org\/10.1109\/SSRR.2012.6523866","DOI":"10.1109\/SSRR.2012.6523866"},{"key":"27_CR14","doi-asserted-by":"publisher","unstructured":"Lin, T.Y., Goyal, P., Girshick, R., He, K., Doll\u00e1r, P.: Focal loss for dense object detection. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2999\u20133007 (2017). 
https:\/\/doi.org\/10.1109\/ICCV.2017.324","DOI":"10.1109\/ICCV.2017.324"},{"key":"27_CR15","series-title":"Lecture Notes in Computer Science","doi-asserted-by":"publisher","first-page":"740","DOI":"10.1007\/978-3-319-10602-1_48","volume-title":"Computer Vision \u2013 ECCV 2014","author":"T-Y Lin","year":"2014","unstructured":"Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., Zitnick, C.L.: Microsoft COCO: common objects in context. In: Fleet, D., Pajdla, T., Schiele, B., Tuytelaars, T. (eds.) ECCV 2014. LNCS, vol. 8693, pp. 740\u2013755. Springer, Cham (2014). https:\/\/doi.org\/10.1007\/978-3-319-10602-1_48"},{"key":"27_CR16","unstructured":"Paszke, A., et al.: PyTorch: an imperative style, high-performance deep learning library (2019). http:\/\/papers.neurips.cc\/paper\/9015-pytorch-an-imperative-style-high-performance-deep-learning-library.pdf"},{"key":"27_CR17","doi-asserted-by":"publisher","unstructured":"Pi, Y., Nath, N.D., Behzadan, A.H.: Convolutional neural networks for object detection in aerial imagery for disaster response and recovery. Adv. Eng. Inform. 43, 101009 (2020). https:\/\/doi.org\/10.1016\/j.aei.2019.101009","DOI":"10.1016\/j.aei.2019.101009"},{"key":"27_CR18","doi-asserted-by":"publisher","unstructured":"Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection, vol. 2016-December, pp. 779\u2013788. IEEE Computer Society (2016). https:\/\/doi.org\/10.1109\/CVPR.2016.91","DOI":"10.1109\/CVPR.2016.91"},{"key":"27_CR19","unstructured":"Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks, vol. 28. Curran Associates, Inc. (2015). 
https:\/\/proceedings.neurips.cc\/paper\/2015\/file\/14bfa6bb14875e45bba028a21ed38046-Paper.pdf"},{"key":"27_CR20","doi-asserted-by":"publisher","DOI":"10.1109\/access.2021.3063681","author":"S Sambolek","year":"2021","unstructured":"Sambolek, S., Iva\u0161i\u0107-Kos, M.: Automatic person detection in search and rescue operations using deep CNN detectors. IEEE Access (2021). https:\/\/doi.org\/10.1109\/access.2021.3063681","journal-title":"IEEE Access"},{"key":"27_CR21","doi-asserted-by":"publisher","unstructured":"Tan, M., Pang, R., Le, Q.V.: EfficientDet: scalable and efficient object detection. pp. 10778\u201310787. IEEE Computer Society (2020). https:\/\/doi.org\/10.1109\/CVPR42600.2020.01079","DOI":"10.1109\/CVPR42600.2020.01079"},{"key":"27_CR22","unstructured":"Tkachenko, M., Malyuk, M., Holmanyuk, A., Liubimov, N.: Label studio: data labeling software (2020\u20132022). https:\/\/github.com\/heartexlabs\/label-studio"},{"key":"27_CR23","first-page":"10","volume":"2006","author":"S Tomar","year":"2006","unstructured":"Tomar, S.: Converting video formats with FFmpeg. Linux J. 2006, 10 (2006)","journal-title":"Linux J."},{"key":"27_CR24","doi-asserted-by":"publisher","first-page":"81","DOI":"10.1109\/MCOM.2014.6979956","volume":"52","author":"J Ueyama","year":"2014","unstructured":"Ueyama, J., et al.: Exploiting the use of unmanned aerial vehicles to provide resilience in wireless sensor networks. IEEE Commun. Mag. 52, 81\u201387 (2014). https:\/\/doi.org\/10.1109\/MCOM.2014.6979956","journal-title":"IEEE Commun. Mag."},{"key":"27_CR25","doi-asserted-by":"publisher","unstructured":"Waharte, S., Trigoni, N.: Supporting search and rescue operations with UAVs, pp. 142\u2013147 (2010). https:\/\/doi.org\/10.1109\/EST.2010.31","DOI":"10.1109\/EST.2010.31"},{"key":"27_CR26","unstructured":"Yadan, O.: Hydra - a framework for elegantly configuring complex applications (2019). 
https:\/\/github.com\/facebookresearch\/hydra"},{"key":"27_CR27","doi-asserted-by":"publisher","first-page":"7380","DOI":"10.1109\/TPAMI.2021.3119563","volume":"44","author":"P Zhu","year":"2021","unstructured":"Zhu, P., et al.: Detection and tracking meet drones challenge. IEEE Trans. Pattern Anal. Mach. Intell. 44, 7380\u20137399 (2021). https:\/\/doi.org\/10.1109\/TPAMI.2021.3119563","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."}],"container-title":["Communications in Computer and Information Science","Artificial Intelligence and Cognitive Science"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-031-26438-2_27","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,2,22]],"date-time":"2023-02-22T06:38:42Z","timestamp":1677047922000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-031-26438-2_27"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023]]},"ISBN":["9783031264375","9783031264382"],"references-count":27,"URL":"https:\/\/doi.org\/10.1007\/978-3-031-26438-2_27","relation":{},"ISSN":["1865-0929","1865-0937"],"issn-type":[{"type":"print","value":"1865-0929"},{"type":"electronic","value":"1865-0937"}],"subject":[],"published":{"date-parts":[[2023]]},"assertion":[{"value":"23 February 2023","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"AICS","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Irish Conference on Artificial Intelligence and Cognitive Science","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Munster","order":3,"name":"conference_city","label":"Conference 
City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Ireland","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2022","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"8 December 2022","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"9 December 2022","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"30","order":9,"name":"conference_number","label":"Conference Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"aics2022","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/aics2022.mtu.ie\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Single-blind","order":1,"name":"type","label":"Type","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"EasyChair","order":2,"name":"conference_management_system","label":"Conference Management System","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"102","order":3,"name":"number_of_submissions_sent_for_review","label":"Number of Submissions Sent for Review","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"41","order":4,"name":"number_of_full_papers_accepted","label":"Number of Full Papers Accepted","group":{"name":"ConfEventPeerReviewInformation","label":"Peer 
Review Information (provided by the conference organizers)"}},{"value":"0","order":5,"name":"number_of_short_papers_accepted","label":"Number of Short Papers Accepted","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"40% - The value is computed by the equation \"Number of Full Papers Accepted \/ Number of Submissions Sent for Review * 100\" and then rounded to a whole number.","order":6,"name":"acceptance_rate_of_full_papers","label":"Acceptance Rate of Full Papers","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"3","order":7,"name":"average_number_of_reviews_per_paper","label":"Average Number of Reviews per Paper","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"3","order":8,"name":"average_number_of_papers_per_reviewer","label":"Average Number of Papers per Reviewer","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"No","order":9,"name":"external_reviewers_involved","label":"External Reviewers Involved","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}}]}}