{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,27]],"date-time":"2026-02-27T04:31:27Z","timestamp":1772166687614,"version":"3.50.1"},"reference-count":22,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,3,12]],"date-time":"2025-03-12T00:00:00Z","timestamp":1741737600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,3,12]],"date-time":"2025-03-12T00:00:00Z","timestamp":1741737600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100014188","name":"Ministry of Science and ICT, South Korea","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100014188","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003725","name":"The national research foundation of Korea","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Institute of Information and Communication Technology Planning Evaluation"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Big Data"],"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>Autonomous vehicles must be aware of dynamic and static objects, road lanes, road signs, and road markings. Recent autonomous vehicles awareness studies adaptable to various road environments continue, it is necessary to construct datasets that accurately reflect the real driving environment. The existing datasets consist of annotations that focus on dynamic and static objects, lanes, and road signs in the driving environment. These annotations enable management of object distance and avoidance, lane recognition and maintenance, and awareness of road signs. 
Although road markings on the road surface convey traffic regulations and guidance for driving lanes, there is a lack of road marking datasets containing various types of directions and regulations. When driving without recognizing road markings, the difficulty in recognizing lane information poses challenges in determining the appropriate lanes for the driving route, and the inability to predict the movement of surrounding cars makes it difficult to maintain stable driving responses. This paper presents a road marking dataset, UNFLAPSet (UNFLAPpable Set). UNFLAPSet enables awareness of multidimensional information by including more types of directions and regulations than existing road marking datasets. UNFLAPSet consists of three classes: (1) the Driving Direction Centric class (DDCclass), (2) the Capable of Lane Maneuver class (CLMclass), and (3) the Specific Condition Caution class (SCCclass). Unlike existing road marking datasets, these classes are based on the meaning of road markings and emphasize the primary implication of each label. In particular, the Merge Line, Merge Arrow, and Uturn Dot Line labels of CLMclass enable prediction of surrounding vehicles\u2019 movement and subsequently allow for stable responses. Furthermore, the restricted-direction road markings of SCCclass help mitigate the risk of crashes caused by driving in unsuitable lanes, thereby facilitating the maintenance of smooth traffic flow. 
The validation results of UNFLAPSet showed high recognition accuracy for each label, enabling predictable driving through integrated perception of driving lanes and surrounding lanes based on road surface marking recognition using UNFLAPSet.<\/jats:p>","DOI":"10.1186\/s40537-025-01101-0","type":"journal-article","created":{"date-parts":[[2025,3,12]],"date-time":"2025-03-12T05:11:22Z","timestamp":1741756282000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Dataset for unflappable driving: UNFLAPSet"],"prefix":"10.1186","volume":"12","author":[{"given":"SuBi","family":"Kim","sequence":"first","affiliation":[]},{"given":"JiEun","family":"Kang","sequence":"additional","affiliation":[]},{"given":"YongIk","family":"Yoon","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,3,12]]},"reference":[{"issue":"2","key":"1101_CR1","doi-asserted-by":"publisher","first-page":"559","DOI":"10.1007\/s00371-023-02801-5","volume":"40","author":"S Zhao","year":"2024","unstructured":"Zhao S, Gong Z, Zhao D. Traffic signs and markings recognition based on lightweight convolutional neural network. Visual Computer. 2024;40(2):559\u201370.","journal-title":"Visual Computer"},{"issue":"2","key":"1101_CR2","doi-asserted-by":"publisher","first-page":"519","DOI":"10.1007\/s00371-021-02353-6","volume":"39","author":"M Haris","year":"2023","unstructured":"Haris M, Hou J, Wang X. Lane line detection and departure estimation in a complex environment by using an asymmetric kernel convolution algorithm. Visual Computer. 2023;39(2):519\u201338.","journal-title":"Visual Computer"},{"key":"1101_CR3","doi-asserted-by":"crossref","unstructured":"Li J, Xu R, Ma J, Zou Q, Ma J, Yu H. Domain adaptive object detection for autonomous driving under foggy weather. 
In: Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, 2023;612\u2013622","DOI":"10.1109\/WACV56688.2023.00068"},{"issue":"1","key":"1101_CR4","doi-asserted-by":"publisher","first-page":"15523","DOI":"10.1038\/s41598-022-19674-8","volume":"12","author":"G Guo","year":"2022","unstructured":"Guo G, Zhang Z. Road damage detection algorithm for improved yolov5. Sci Rep. 2022;12(1):15523.","journal-title":"Sci Rep"},{"key":"1101_CR5","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2021.107941","volume":"240","author":"Y Kortli","year":"2022","unstructured":"Kortli Y, Gabsi S, Voon LFLY, Jridi M, Merzougui M, Atri M. Deep embedded hybrid cnn-lstm network for lane detection on nvidia jetson xavier nx. Knowledge-based Syst. 2022;240: 107941.","journal-title":"Knowledge-based Syst"},{"issue":"20","key":"1101_CR6","doi-asserted-by":"publisher","first-page":"8361","DOI":"10.3390\/s23208361","volume":"23","author":"X Wang","year":"2023","unstructured":"Wang X, Gao H, Jia Z, Li Z. Bl-yolov8: an improved road defect detection model based on yolov8. Sensors. 2023;23(20):8361.","journal-title":"Sensors"},{"key":"1101_CR7","doi-asserted-by":"publisher","DOI":"10.1016\/j.autcon.2022.104139","volume":"135","author":"S Shim","year":"2022","unstructured":"Shim S, Kim J, Lee S-W, Cho G-C. Road damage detection using super-resolution and semi-supervised learning with generative adversarial network. Automat Constr. 2022;135: 104139.","journal-title":"Automat Constr"},{"key":"1101_CR8","doi-asserted-by":"crossref","unstructured":"Lee S, Kim J, Shin Yoon J, Shin S, Bailo O, Kim N, Lee T-H, Seok Hong H, Han S-H, So Kweon I. Vpgnet: Vanishing point guided network for lane and road marking detection and recognition. 
In: Proceedings of the IEEE International Conference on Computer Vision, 2017;1947\u20131955","DOI":"10.1109\/ICCV.2017.215"},{"key":"1101_CR9","doi-asserted-by":"publisher","DOI":"10.1016\/j.imavis.2020.103978","volume":"102","author":"X-Y Ye","year":"2020","unstructured":"Ye X-Y, Hong D-S, Chen H-H, Hsiao P-Y, Fu L-C. A two-stage real-time yolov2-based road marking detector with lightweight spatial transformation-invariant classification. Image Vision Comput. 2020;102: 103978.","journal-title":"Image Vision Comput"},{"issue":"14","key":"1101_CR10","doi-asserted-by":"publisher","first-page":"6545","DOI":"10.3390\/s23146545","volume":"23","author":"W Tian","year":"2023","unstructured":"Tian W, Yu X, Hu H. Interactive attention learning on detection of lane and lane marking on the road by monocular camera image. Sensors. 2023;23(14):6545.","journal-title":"Sensors"},{"key":"1101_CR11","unstructured":"What is Road Marking? https:\/\/roadgrip.co.uk\/blog\/what-is-road-marking 2022"},{"key":"1101_CR12","doi-asserted-by":"publisher","unstructured":"Jayasinghe O, Hemachandra S, Anhettigama D, Kariyawasam S, Rodrigo R, Jayasekara P. Ceymo: See more on roads - a novel benchmark dataset for road marking detection. In: 2022 IEEE\/CVF Winter Conference on Applications of Computer Vision (WACV), 2022;3381\u20133390. https:\/\/doi.org\/10.1109\/WACV51458.2022.00344","DOI":"10.1109\/WACV51458.2022.00344"},{"issue":"1","key":"1101_CR13","doi-asserted-by":"publisher","first-page":"187","DOI":"10.1109\/JAS.2021.1004293","volume":"9","author":"W Jang","year":"2021","unstructured":"Jang W, Hyun J, An J, Cho M, Kim E. A lane-level road marking map using a monocular camera. IEEE\/CAA Journal of Automatica Sinica. 2021;9(1):187\u2013204.","journal-title":"IEEE\/CAA Journal of Automatica Sinica"},{"key":"1101_CR14","doi-asserted-by":"crossref","unstructured":"Liu X, Deng Z, Lu H, Cao L. Benchmark for road marking detection: Dataset specification and performance baseline. 
In: 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2017;1\u20136. IEEE","DOI":"10.1109\/ITSC.2017.8317749"},{"key":"1101_CR15","doi-asserted-by":"crossref","unstructured":"Tran L-A, Le M-H. Robust u-net-based road lane markings detection for autonomous driving. In: 2019 International Conference on System Science and Engineering (ICSSE), 2019;62\u201366. IEEE","DOI":"10.1109\/ICSSE.2019.8823532"},{"key":"1101_CR16","doi-asserted-by":"crossref","unstructured":"Alibeigi M, Ljungbergh W, Tonderski A, Hess G, Lilja A, Lindstr\u00f6m C, Motorniuk D, Fu J, Widahl J, Petersson C. Zenseact open dataset: A large-scale and diverse multimodal dataset for autonomous driving. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, 2023;20178\u201320188","DOI":"10.1109\/ICCV51070.2023.01846"},{"key":"1101_CR17","doi-asserted-by":"crossref","unstructured":"Li K, Chen K, Wang H, Hong L, Ye C, Han J, Chen Y, Zhang W, Xu C, Yeung D-Y, et al. Coda: A real-world road corner case dataset for object detection in autonomous driving. In: European Conference on Computer Vision, 2022;406\u2013423. Springer","DOI":"10.1007\/978-3-031-19839-7_24"},{"key":"1101_CR18","doi-asserted-by":"crossref","unstructured":"Xiao P, Shao Z, Hao S, Zhang Z, Chai X, Jiao J, Li Z, Wu J, Sun K, Jiang K, et al. Pandaset: Advanced sensor suite dataset for autonomous driving. In: 2021 IEEE International Intelligent Transportation Systems Conference (ITSC), 2021;3095\u20133101. IEEE","DOI":"10.1109\/ITSC48978.2021.9565009"},{"key":"1101_CR19","first-page":"10","volume":"42","author":"X Huang","year":"2020","unstructured":"Huang X, Wang P, Cheng X, Zhou D, Geng Q, Yang R. The apolloscape open dataset for autonomous driving and its application. IEEE Trans Pattern Anal Machine Intell. 
2020;42:10.","journal-title":"IEEE Trans Pattern Anal Machine Intell"},{"key":"1101_CR20","doi-asserted-by":"crossref","unstructured":"Yu F, Chen H, Wang X, Xian W, Chen Y, Liu F, Madhavan V, Darrell T. Bdd100k: A diverse driving dataset for heterogeneous multitask learning. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2020;2636\u20132645","DOI":"10.1109\/CVPR42600.2020.00271"},{"key":"1101_CR21","unstructured":"Driving video data for road marking recognition. https:\/\/aihub.or.kr\/ 2021"},{"key":"1101_CR22","unstructured":"Introduction - Roboflow Docs. https:\/\/docs.roboflow.com\/"}],"container-title":["Journal of Big Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40537-025-01101-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s40537-025-01101-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40537-025-01101-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,3,12]],"date-time":"2025-03-12T05:11:34Z","timestamp":1741756294000},"score":1,"resource":{"primary":{"URL":"https:\/\/journalofbigdata.springeropen.com\/articles\/10.1186\/s40537-025-01101-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,3,12]]},"references-count":22,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["1101"],"URL":"https:\/\/doi.org\/10.1186\/s40537-025-01101-0","relation":{"has-preprint":[{"id-type":"doi","id":"10.21203\/rs.3.rs-4218700\/v1","asserted-by":"object"}]},"ISSN":["2196-1115"],"issn-type":[{"value":"2196-1115","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,3,12]]},
"assertion":[{"value":"4 April 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 February 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 March 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"Informed consent was obtained from all individual participants included in the study.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"The authors have consented to the submission of the research report to the journal.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"The authors declare that they have no competing interests.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"65"}}