{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,13]],"date-time":"2026-02-13T04:21:43Z","timestamp":1770956503747,"version":"3.50.1"},"reference-count":31,"publisher":"Walter de Gruyter GmbH","issue":"11","license":[{"start":{"date-parts":[[2023,11,1]],"date-time":"2023-11-01T00:00:00Z","timestamp":1698796800000},"content-version":"unspecified","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2023,11,27]]},"abstract":"<jats:title>Abstract<\/jats:title>\n               <jats:p>The development and testing of autonomous systems require sufficient meaningful data. However, generating suitable scenario data is a challenging task. In particular, it raises the question of how to narrow down what kind of data should be considered meaningful. Autonomous systems are characterized by their ability to cope with uncertain situations, i.e. complex and unknown environmental conditions. Due to this openness, the definition of training and test scenarios cannot be easily specified. Not all relevant influences can be sufficiently specified with requirements in advance, especially for unknown scenarios and corner cases, and therefore the \u201cright\u201d data, balancing quality and efficiency, is hard to generate. This article discusses the challenges of automated generation of 3D scenario data. We present a training and testing loop that provides a way to generate synthetic camera and Lidar data using 3D simulated environments. Those can be automatically varied and modified to support a closed-loop system for deriving and generating datasets that can be used for continuous development and testing of autonomous systems.<\/jats:p>","DOI":"10.1515\/auto-2023-0026","type":"journal-article","created":{"date-parts":[[2023,11,8]],"date-time":"2023-11-08T13:29:03Z","timestamp":1699450143000},"page":"953-968","source":"Crossref","is-referenced-by-count":6,"title":["Synthetic data generation for the continuous development and testing of autonomous construction machinery"],"prefix":"10.1515","volume":"71","author":[{"given":"Alexander","family":"Schuster","sequence":"first","affiliation":[{"name":"University of Stuttgart, Institute of Industrial Automation and Software Engineering (IAS) , Stuttgart , Germany"}]},{"given":"Raphael","family":"Hagmanns","sequence":"additional","affiliation":[{"name":"Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB , Karlsruhe , Germany"}]},{"given":"Iman","family":"Sonji","sequence":"additional","affiliation":[{"name":"University of Stuttgart, Institute of Industrial Automation and Software Engineering (IAS) , Stuttgart , Germany"}]},{"given":"Andreas","family":"L\u00f6cklin","sequence":"additional","affiliation":[{"name":"University of Stuttgart, Institute of Industrial Automation and Software Engineering (IAS) , Stuttgart , Germany"}]},{"given":"Janko","family":"Petereit","sequence":"additional","affiliation":[{"name":"Fraunhofer Institute of Optronics, System Technologies and Image Exploitation IOSB , Karlsruhe , Germany"}]},{"given":"Christof","family":"Ebert","sequence":"additional","affiliation":[{"name":"Vector Consulting Services GmbH , Stuttgart , Germany"},{"name":"Robo-Test , Stuttgart , Germany"}]},{"given":"Michael","family":"Weyrich","sequence":"additional","affiliation":[{"name":"University of Stuttgart, Institute of Industrial Automation and Software Engineering (IAS) , Stuttgart , Germany"}]}],"member":"374","published-online":{"date-parts":[[2023,11,8]]},"reference":[{"key":"2023110813285493587_j_auto-2023-0026_ref_001","doi-asserted-by":"crossref","unstructured":"C. Ebert and R. Ray, \u201cTest-driven requirements engineering,\u201d IEEE Softw., vol.\u00a038, no.\u00a01, pp.\u00a016\u201324, 2021. https:\/\/doi.org\/10.1109\/ms.2020.3029811.","DOI":"10.1109\/MS.2020.3029811"},{"key":"2023110813285493587_j_auto-2023-0026_ref_002","doi-asserted-by":"crossref","unstructured":"C. Ebert, M. Weyrich, B. Lindemann, and S. Chandrasekar, \u201cSystematic testing for autonomous driving,\u201d ATZ Electron Worldw, vol. 16, no. 3, pp. 18\u201323, 2021. https:\/\/doi.org\/10.1007\/s38314-020-0575-6.","DOI":"10.1007\/s38314-020-0575-6"},{"key":"2023110813285493587_j_auto-2023-0026_ref_003","doi-asserted-by":"crossref","unstructured":"C. Ebert and J. John, \u201cPractical cybersecurity with iso 21434,\u201d ATZ Electron Worldw, vol. 17, pp. 3\u20134, 2022. https:\/\/doi.org\/10.1007\/s38314-021-0741-5.","DOI":"10.1007\/s38314-021-0741-5"},{"key":"2023110813285493587_j_auto-2023-0026_ref_004","doi-asserted-by":"crossref","unstructured":"S. Garg, P. Pundir, G. Rathee, P. Gupta, S. Garg, and S. Ahlawat, \u201cOn continuous integration\/continuous delivery for automated deployment of machine learning models using mlops,\u201d in 2021 IEEE Fourth International Conference on Artificial Intelligence and Knowledge Engineering (AIKE), Laguna Hills, CA, USA, 2021, pp. 25\u201328.","DOI":"10.1109\/AIKE52691.2021.00010"},{"key":"2023110813285493587_j_auto-2023-0026_ref_005","doi-asserted-by":"crossref","unstructured":"H. Vietz, T. Rauch, and M. Weyrich, \u201cSynthetic training data generation for convolutional neural networks in vision applications,\u201d in 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), 2022.","DOI":"10.1109\/ETFA52439.2022.9921534"},{"key":"2023110813285493587_j_auto-2023-0026_ref_006","doi-asserted-by":"crossref","unstructured":"A. L\u00f6cklin, M. M\u00fcller, T. Jung, N. Jazdi, D. White, and M. Weyrich, \u201cDigital twin for verification and validation of industrial automation systems \u2013 a survey,\u201d in 2020 25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Vienna, Austria, 2020, pp. 851\u2013858.","DOI":"10.1109\/ETFA46521.2020.9212051"},{"key":"2023110813285493587_j_auto-2023-0026_ref_007","doi-asserted-by":"crossref","unstructured":"H. Vietz, T. Rauch, A. L\u00f6cklin, N. Jazdi, and M. Weyrich, \u201cA methodology to identify cognition gaps in visual recognition applications based on convolutional neural networks,\u201d in 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE), Lyon, France, 2021, pp. 2045\u20132050.","DOI":"10.1109\/CASE49439.2021.9551605"},{"key":"2023110813285493587_j_auto-2023-0026_ref_008","doi-asserted-by":"crossref","unstructured":"M. M\u00fcller, T. M\u00fcller, B. Talkhestani, P. Marks, N. Jazdi, and M. Weyrich, \u201cIndustrial autonomous systems: a survey on definitions, characteristics and abilities,\u201d Automatisierungstechnik, vol.\u00a069, no.\u00a01, pp.\u00a03\u201313, 2021. https:\/\/doi.org\/10.1515\/auto-2020-0131.","DOI":"10.1515\/auto-2020-0131"},{"key":"2023110813285493587_j_auto-2023-0026_ref_009","doi-asserted-by":"crossref","unstructured":"T. Emter, C. Frese, A. Zube, and J. Petereit, \u201cAlgorithm toolbox for autonomous mobile robotic systems,\u201d ATZ offhighw worldw, vol. 10, no. 3, pp. 48\u201353, 2017. https:\/\/doi.org\/10.1007\/s41321-017-0037-0.","DOI":"10.1007\/s41321-017-0037-0"},{"key":"2023110813285493587_j_auto-2023-0026_ref_010","doi-asserted-by":"crossref","unstructured":"J. Petereit, J. Beyerer, T. Asfour, et al.., \u201cROBDEKON: robotic systems for decontamination in hazardous environments,\u201d in IEEE SSRR, 2019.","DOI":"10.1109\/SSRR.2019.8848969"},{"key":"2023110813285493587_j_auto-2023-0026_ref_011","doi-asserted-by":"crossref","unstructured":"C. Ebert, D. Bajaj, and M. Weyrich, \u201cTesting software systems,\u201d IEEE Softw., vol.\u00a039, no.\u00a04, pp.\u00a08\u201317, 2022. https:\/\/doi.org\/10.1109\/ms.2022.3166755.","DOI":"10.1109\/MS.2022.3166755"},{"key":"2023110813285493587_j_auto-2023-0026_ref_012","doi-asserted-by":"crossref","unstructured":"D. J. Fremont, E. Kim, Y. V. Pant, et al.., \u201cFormal scenario-based testing of autonomous vehicles: from simulation to the real world,\u201d 2020 [Online]. Available at: https:\/\/arxiv.org\/abs\/2003.07739.","DOI":"10.1109\/ITSC45102.2020.9294368"},{"key":"2023110813285493587_j_auto-2023-0026_ref_013","unstructured":"J. Mazzega and H.-P. Sch\u00f6ener, \u201cWie PEGASUS die L\u00fccke im Bereich Testen und Freigabe von automatisierten Fahrzeugen schlie\u00dft,\u201d in Methodenentwicklung f\u00fcr Aktive Sicherheit und Automatisiertes Fahren, vol. 144, 2016, pp.\u00a0163\u2013176."},{"key":"2023110813285493587_j_auto-2023-0026_ref_014","doi-asserted-by":"crossref","unstructured":"P. Jiang, P. Osteen, M. Wigness, and S. Saripalli, \u201cRELLIS-3D dataset: data, benchmarks and analysis,\u201d in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp.\u00a01110\u20131116.","DOI":"10.1109\/ICRA48506.2021.9561251"},{"key":"2023110813285493587_j_auto-2023-0026_ref_015","doi-asserted-by":"crossref","unstructured":"P. Jiang and S. Saripalli, \u201cLiDARNet: a boundary-aware domain adaptation model for point cloud semantic segmentation,\u201d in 2021 IEEE International Conference on Robotics and Automation(ICRA), 2021, pp.\u00a02457\u20132464.","DOI":"10.1109\/ICRA48506.2021.9561255"},{"key":"2023110813285493587_j_auto-2023-0026_ref_016","doi-asserted-by":"crossref","unstructured":"A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, \u201cVision meets robotics: the KITTI dataset,\u201d Int. J. Robot Res., vol.\u00a032, no.\u00a011, pp.\u00a01231\u20131237, 2013. https:\/\/doi.org\/10.1177\/0278364913491297.","DOI":"10.1177\/0278364913491297"},{"key":"2023110813285493587_j_auto-2023-0026_ref_017","doi-asserted-by":"crossref","unstructured":"H. Vietz, A. L\u00f6cklin, H. Ben Haj Ammar, and M. Weyrich, \u201cDeep learning-based 5g indoor positioning in a manufacturing environment,\u201d in 2022 IEEE 27th International Conference on Emerging Technologies and Factory Automation (ETFA), September 2022, 2022.","DOI":"10.1109\/ETFA52439.2022.9921635"},{"key":"2023110813285493587_j_auto-2023-0026_ref_018","doi-asserted-by":"crossref","unstructured":"H. Achicanoy, D. Chaves, and M. Trujillo, \u201cStylegans and transfer learning for generating synthetic images in industrial applications,\u201d Symmetry, vol.\u00a013, no.\u00a08, p.\u00a01497, 2021. https:\/\/doi.org\/10.3390\/sym13081497.","DOI":"10.3390\/sym13081497"},{"key":"2023110813285493587_j_auto-2023-0026_ref_019","doi-asserted-by":"crossref","unstructured":"J. Fang, X. Zuo, D. Zhou, S. Jin, S. Wang, and L. Zhang, \u201cLidar-aug: a general rendering-based augmentation framework for 3d object detection,\u201d in 2021 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp.\u00a04708\u20134718.","DOI":"10.1109\/CVPR46437.2021.00468"},{"key":"2023110813285493587_j_auto-2023-0026_ref_020","doi-asserted-by":"crossref","unstructured":"A. Antoniou, A. Storkey, and H. Edwards, \u201cData augmentation generative adversarial networks,\u201d arXiv preprint arXiv:1711.04340, 2017.","DOI":"10.1007\/978-3-030-01424-7_58"},{"key":"2023110813285493587_j_auto-2023-0026_ref_021","doi-asserted-by":"crossref","unstructured":"D. Dwibedi, I. Misra, and M. Hebert, \u201cCut, paste and learn: surprisingly easy synthesis for instance detection,\u201d in Proceedings of the IEEE International Conference on Computer Vision, 2017, pp.\u00a01301\u20131310.","DOI":"10.1109\/ICCV.2017.146"},{"key":"2023110813285493587_j_auto-2023-0026_ref_022","doi-asserted-by":"crossref","unstructured":"M. Z. Wong, K. Kunii, M. Baylis, W. H. Ong, P. Kroupa, and S. Koller, \u201cSynthetic dataset generation for object-to-model deep learning in industrial applications,\u201d PeerJ. Comput. Sci., vol.\u00a05, p.\u00a0e222, 2019. https:\/\/doi.org\/10.7717\/peerj-cs.222.","DOI":"10.7717\/peerj-cs.222"},{"key":"2023110813285493587_j_auto-2023-0026_ref_023","doi-asserted-by":"crossref","unstructured":"C. Mayershofer, T. Ge, and J. Fottner, \u201cTowards fully-synthetic training for industrial applications,\u201d in LISS 2020: Proceedings of the 10th International Conference on Logistics, Informatics and Service Sciences, Springer, 2021, pp.\u00a0765\u2013782.","DOI":"10.1007\/978-981-33-4359-7_53"},{"key":"2023110813285493587_j_auto-2023-0026_ref_024","doi-asserted-by":"crossref","unstructured":"G. Ros, L. Sellart, J. Materzynska, D. Vazquez, and A. M. Lopez, \u201cThe synthia dataset: a large collection of synthetic images for semantic segmentation of urban scenes,\u201d in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2016, pp.\u00a03234\u20133243.","DOI":"10.1109\/CVPR.2016.352"},{"key":"2023110813285493587_j_auto-2023-0026_ref_025","doi-asserted-by":"crossref","unstructured":"M. Johnson-Roberson, C. Barto, R. Mehta, S. N. Sridhar, K. Rosaen, and R. Vasudevan, \u201cDriving in the matrix: can virtual worlds replace human-generated annotations for real world tasks?\u201d arXiv preprint arXiv:1610.01983, 2016.","DOI":"10.1109\/ICRA.2017.7989092"},{"key":"2023110813285493587_j_auto-2023-0026_ref_026","doi-asserted-by":"crossref","unstructured":"F. Reway, A. Hoffmann, D. Wachtel, W. Huber, A. Knoll, and E. Ribeiro, \u201cTest method for measuring the simulation-to-reality gap of camera-based object detection algorithms for autonomous driving,\u201d in 2020 IEEE Intelligent Vehicles Symposium (IV), 2020, pp.\u00a01249\u20131256.","DOI":"10.1109\/IV47402.2020.9304567"},{"key":"2023110813285493587_j_auto-2023-0026_ref_027","unstructured":"Stanford Artificial Intelligence Laboratory, et al.., \u201cRobotic operating system \u2013 ROS Melodic Morenia,\u201d 2018 [Online]. Available at: https:\/\/www.ros.org."},{"key":"2023110813285493587_j_auto-2023-0026_ref_028","unstructured":"I. Sonji, H. Vietz, C. Ebert, and M. Weyrich, \u201cAn approach to automatically generate test cases for AI-based autonomous heavy machinery,\u201d in 9. AutoTest Fachkonferenz, 2022 [Online]. Available at: https:\/\/www.researchgate.net\/publication\/363536300_An_approach_to_automatically_generate_test_cases_for_AI-based_autonomous_heavy_machinery."},{"key":"2023110813285493587_j_auto-2023-0026_ref_029","doi-asserted-by":"crossref","unstructured":"B. Alvey, D. T. Anderson, A. Buck, M. Deardorff, G. Scott, and J. M. Keller, \u201cSimulated photorealistic deep learning framework and workflows to accelerate computer vision and unmanned aerial vehicle research,\u201d in Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp.\u00a03889\u20133898.","DOI":"10.1109\/ICCVW54120.2021.00435"},{"key":"2023110813285493587_j_auto-2023-0026_ref_030","doi-asserted-by":"crossref","unstructured":"K. He, G. Gkioxari, P. Doll\u00e1r, and R. Girshick, \u201cMask r-cnn,\u201d in 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp.\u00a02980\u20132988.","DOI":"10.1109\/ICCV.2017.322"},{"key":"2023110813285493587_j_auto-2023-0026_ref_031","doi-asserted-by":"crossref","unstructured":"T.-Y. Lin, M. Maire, S. Belongie, et al.., \u201cMicrosoft COCO: common objects in context,\u201d in Computer Vision \u2013 ECCV 2014, vol. 869, 2014, pp.\u00a0740\u2013755.","DOI":"10.1007\/978-3-319-10602-1_48"}],"container-title":["at - Automatisierungstechnik"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.degruyter.com\/document\/doi\/10.1515\/auto-2023-0026\/xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.degruyter.com\/document\/doi\/10.1515\/auto-2023-0026\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,11,8]],"date-time":"2023-11-08T13:29:29Z","timestamp":1699450169000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.degruyter.com\/document\/doi\/10.1515\/auto-2023-0026\/html"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,11,1]]},"references-count":31,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2023,11,9]]},"published-print":{"date-parts":[[2023,11,27]]}},"alternative-id":["10.1515\/auto-2023-0026"],"URL":"https:\/\/doi.org\/10.1515\/auto-2023-0026","relation":{},"ISSN":["0178-2312","2196-677X"],"issn-type":[{"value":"0178-2312","type":"print"},{"value":"2196-677X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,11,1]]}}}