{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T12:25:25Z","timestamp":1764937525983,"version":"3.40.3"},"publisher-location":"Cham","reference-count":15,"publisher":"Springer Nature Switzerland","isbn-type":[{"type":"print","value":"9783031264375"},{"type":"electronic","value":"9783031264382"}],"license":[{"start":{"date-parts":[[2023,1,1]],"date-time":"2023-01-01T00:00:00Z","timestamp":1672531200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,2,23]],"date-time":"2023-02-23T00:00:00Z","timestamp":1677110400000},"content-version":"vor","delay-in-days":53,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2023]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Fisheye cameras are extensively employed in autonomous vehicles due to their wider field of view, which produces a complete 360-degree image of the vehicle with a minimum number of sensors. The drawback of having a broader field of view is that it may include undesirable portions of the vehicle\u2019s ego body in its perspective. Due to objects\u2019 reflections on the car body, this may produce false positives in perception systems. Processing ego vehicle pixels also uses up unnecessary computing power. Unexpectedly, there is no literature on this relevant practical problem. To our knowledge, this is the first attempt to discuss the significance of autonomous ego body extraction for automobile applications that are crucial for safety. We also proposed a simple deep learning model for identifying the vehicle\u2019s ego-body. 
This model would enable us to eliminate any pointless processing of the car\u2019s bodywork, eliminate the potential for pedestrians or other objects to be mistakenly detected in the car\u2019s ego-body reflection, and finally, check whether the camera is mounted incorrectly. The proposed network is a U-Net model with a ResNet-50 encoder pre-trained on ImageNet and trained for binary semantic segmentation on vehicle ego-body data. Our training data is an internal Valeo dataset with 10K samples collected by three separate car lines across Europe. This proposed network could then be integrated into the vehicle\u2019s existing perception system by extracting the ego-body contour data and supplying this to the other algorithms, which then ignore the area outside the contour coordinates. The proposed network can run at set intervals to save computing power and to check if the camera is misaligned by comparing the new contour data to the previous data.<\/jats:p>","DOI":"10.1007\/978-3-031-26438-2_21","type":"book-chapter","created":{"date-parts":[[2023,2,22]],"date-time":"2023-02-22T06:32:56Z","timestamp":1677047576000},"page":"264-275","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Automatic Vehicle Ego Body Extraction for\u00a0Reducing False Detections in\u00a0Automated Driving Applications"],"prefix":"10.1007","author":[{"given":"Ciar\u00e1n","family":"Hogan","sequence":"first","affiliation":[]},{"given":"Ganesh","family":"Sistu","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,2,23]]},"reference":[{"issue":"2","key":"21_CR1","doi-asserted-by":"publisher","first-page":"712","DOI":"10.1109\/TITS.2019.2962338","volume":"22","author":"S Kuutti","year":"2021","unstructured":"Kuutti, S., Bowden, R., Jin, Y., Barber, P., Fallah, S.: A survey of deep learning applications to autonomous vehicle control. IEEE Trans. Intell. Transp. Syst. 
22(2), 712\u2013733 (2021). https:\/\/doi.org\/10.1109\/TITS.2019.2962338","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"21_CR2","doi-asserted-by":"publisher","unstructured":"Yogamani, S., Siam, M., Gamal, M., Abdel-Razek, M., Jagersand, M., Zhang, H.: A comparative study of real-time semantic segmentation for autonomous driving. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, pp. 587\u2013597 (2018). https:\/\/doi.org\/10.1109\/CVPRW.2018.00101","DOI":"10.1109\/CVPRW.2018.00101"},{"key":"21_CR3","doi-asserted-by":"publisher","unstructured":"Saez, A., Bergasa, L.M., Romeral, E., Lopez, E., Barea, R., Sanz, R.: CNN-based fisheye image real-time semantic segmentation. In: IEEE Intelligent Vehicles Symposium (IV), 2018, pp. 1039\u20131044 (2018). https:\/\/doi.org\/10.1109\/IVS.2018.8500456","DOI":"10.1109\/IVS.2018.8500456"},{"key":"21_CR4","doi-asserted-by":"publisher","unstructured":"Yogamani, S., et al.: WoodScape: a multi-task, multi-camera fisheye dataset for autonomous driving. In: 2019 IEEE\/CVF International Conference on Computer Vision (ICCV), pp. 9307\u20139317 (2019). https:\/\/doi.org\/10.1109\/ICCV.2019.00940","DOI":"10.1109\/ICCV.2019.00940"},{"key":"21_CR5","doi-asserted-by":"publisher","unstructured":"Deng, L., Yang, M., Qian, Y., Wang, C., Wang, B.: CNN based semantic segmentation for urban traffic scenes using fisheye camera. In: 2017 IEEE Intelligent Vehicles Symposium (IV), pp. 231\u2013236 (2017). https:\/\/doi.org\/10.1109\/IVS.2017.7995725","DOI":"10.1109\/IVS.2017.7995725"},{"key":"21_CR6","doi-asserted-by":"publisher","first-page":"241","DOI":"10.1007\/BF00128233","volume":"17","author":"R Bajcsy","year":"1996","unstructured":"Bajcsy, R., Lee, S.W., Leonardis, A.: Detection of diffuse and specular interface reflections and inter-reflections by color image segmentation. Int. J. Comput. Vis. 17, 241\u2013272 (1996). https:\/\/doi.org\/10.1007\/BF00128233","journal-title":"Int. J. 
Comput. Vis."},{"key":"21_CR7","doi-asserted-by":"publisher","unstructured":"DelPozo, A., Savarese, S.: Detecting specular surfaces on natural images. In: 2007 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1\u20138 (2007). https:\/\/doi.org\/10.1109\/CVPR.2007.383215","DOI":"10.1109\/CVPR.2007.383215"},{"key":"21_CR8","doi-asserted-by":"publisher","unstructured":"Owen, D., Chang, P.L.: Detecting reflections by combining semantic and instance segmentation. Umbo Comput. Vis. (2019). https:\/\/doi.org\/10.48550\/ARXIV.1904.13273","DOI":"10.48550\/ARXIV.1904.13273"},{"key":"21_CR9","doi-asserted-by":"publisher","unstructured":"Ronneberger, O., Fischer, P., Brox, T.: U-Net: convolutional networks for biomedical image segmentation. In: Navab, N., Hornegger, J., Wells, W., Frangi, A. (eds.) MICCAI 2015. LNCS, vol. 9351, pp. 234\u2013241. Springer, Cham (2015). https:\/\/doi.org\/10.1007\/978-3-319-24574-4_28, https:\/\/doi.org\/10.48550\/arXiv.1505.04597","DOI":"10.1007\/978-3-319-24574-4_28 10.48550\/arXiv.1505.04597"},{"key":"21_CR10","doi-asserted-by":"publisher","unstructured":"He, K., Zhang, X., Ren, S., Sun, J. Deep residual learning for image recognition. Microsoft Res. (2015). https:\/\/doi.org\/10.48550\/arXiv.1512.03385","DOI":"10.48550\/arXiv.1512.03385"},{"key":"21_CR11","doi-asserted-by":"publisher","unstructured":"Yogamani, S., et al.: WoodScape: a multi-task, multi-camera fisheye dataset for autonomous driving. In: 2019 IEEE\/CVF International Conference on Computer Vision (ICCV), pp. 9307\u20139317 (2019). https:\/\/doi.org\/10.1109\/ICCV.2019.00940","DOI":"10.1109\/ICCV.2019.00940"},{"key":"21_CR12","doi-asserted-by":"publisher","unstructured":"Cordts, M.: The cityscapes dataset for semantic urban scene understanding. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2016). 
https:\/\/doi.org\/10.1109\/CVPR.2016.350","DOI":"10.1109\/CVPR.2016.350"},{"issue":"5","key":"21_CR13","doi-asserted-by":"publisher","first-page":"4201","DOI":"10.1109\/TITS.2020.3042759","volume":"23","author":"L Mariotti","year":"2022","unstructured":"Mariotti, L., Eising, C.: Spherical formulation of geometric motion segmentation constraints in fisheye cameras. IEEE Trans. Intell. Transp. Syst. 23(5), 4201\u20134211 (2022). https:\/\/doi.org\/10.1109\/TITS.2020.3042759","journal-title":"IEEE Trans. Intell. Transp. Syst."},{"key":"21_CR14","unstructured":"Tesla phantom braking article. The Verge (2022). https:\/\/www.theverge.com\/2022\/6\/3\/23153241\/tesla-phantom-braking-nhtsa-complaints-investigation. Accessed 20 Aug 2022"},{"key":"21_CR15","unstructured":"Nissan emergency braking failure article. CNET (2019). https:\/\/www.cnet.com\/roadshow\/news\/nissan-rogue-nhtsa-brakes-investigation\/. Accessed 20 Aug 2022"}],"container-title":["Communications in Computer and Information Science","Artificial Intelligence and Cognitive Science"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-031-26438-2_21","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,2,22]],"date-time":"2023-02-22T06:36:59Z","timestamp":1677047819000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-031-26438-2_21"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023]]},"ISBN":["9783031264375","9783031264382"],"references-count":15,"URL":"https:\/\/doi.org\/10.1007\/978-3-031-26438-2_21","relation":{},"ISSN":["1865-0929","1865-0937"],"issn-type":[{"type":"print","value":"1865-0929"},{"type":"electronic","value":"1865-0937"}],"subject":[],"published":{"date-parts":[[2023]]},"assertion":[{"value":"23 February 2023","order":1,"name":"first_online","label":"First 
Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"AICS","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Irish Conference on Artificial Intelligence and Cognitive Science","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Munster","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Ireland","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2022","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"8 December 2022","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"9 December 2022","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"30","order":9,"name":"conference_number","label":"Conference Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"aics2022","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/aics2022.mtu.ie\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Single-blind","order":1,"name":"type","label":"Type","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"EasyChair","order":2,"name":"conference_management_system","label":"Conference Management 
System","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"102","order":3,"name":"number_of_submissions_sent_for_review","label":"Number of Submissions Sent for Review","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"41","order":4,"name":"number_of_full_papers_accepted","label":"Number of Full Papers Accepted","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"0","order":5,"name":"number_of_short_papers_accepted","label":"Number of Short Papers Accepted","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"40% - The value is computed by the equation \"Number of Full Papers Accepted \/ Number of Submissions Sent for Review * 100\" and then rounded to a whole number.","order":6,"name":"acceptance_rate_of_full_papers","label":"Acceptance Rate of Full Papers","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"3","order":7,"name":"average_number_of_reviews_per_paper","label":"Average Number of Reviews per Paper","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"3","order":8,"name":"average_number_of_papers_per_reviewer","label":"Average Number of Papers per Reviewer","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"No","order":9,"name":"external_reviewers_involved","label":"External Reviewers Involved","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}}]}}