{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,6]],"date-time":"2026-05-06T06:59:17Z","timestamp":1778050757750,"version":"3.51.4"},"reference-count":27,"publisher":"Computers and Informatics","issue":"1","funder":[{"name":"FCT - Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia","award":["UIDB\/00319\/2020"],"award-info":[{"award-number":["UIDB\/00319\/2020"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"accepted":{"date-parts":[[2025,1,16]]},"abstract":"<jats:p xml:lang=\"en\">Deep learning-based methodologies are a key component towards the goal of autonomous driving. For a successful application, these models require a significant amount of training data, which is difficult, time-consuming, and expensive to collect. This study assesses the effectiveness of Generative Adversarial Networks (GANs) in generating high-quality training images for in-vehicle applications using a limited dataset. Two advanced GAN architectures were compared for their ability to produce realistic in-vehicle RGB images. The results showed that the StyleGAN-ADA outperformed the MSG-GAN, generating images with better fidelity and accuracy, making it more suitable for scenarios with limited data. However, challenges such as mode collapse and long training times, particularly for high-resolution images, were identified. The models\u2019 reliance on the quality and diversity of the training dataset also limits their effectiveness in real-world applications. This research highlights the potential of GANs to reduce the lack of data in autonomous driving, pointing to future approaches for optimizing these models.<\/jats:p>","DOI":"10.62189\/ci.1261718","type":"journal-article","created":{"date-parts":[[2025,5,19]],"date-time":"2025-05-19T13:00:02Z","timestamp":1747659602000},"page":"23-31","source":"Crossref","is-referenced-by-count":0,"title":["Automatic generation of in-vehicle images: StyleGAN-ADA vs. MSG-GAN"],"prefix":"10.62189","volume":"5","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-7002-8496","authenticated-orcid":true,"given":"Sahar","family":"Azadi","sequence":"first","affiliation":[{"name":"Universidade do Minho"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4595-3828","authenticated-orcid":true,"given":"Sandra","family":"Dixe","sequence":"additional","affiliation":[{"name":"Universidade do Minho"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1452-7842","authenticated-orcid":true,"given":"Joao","family":"Leite","sequence":"additional","affiliation":[{"name":"Universidade do Minho"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5880-033X","authenticated-orcid":true,"given":"Joao","family":"Borges","sequence":"additional","affiliation":[{"name":"Polytechnic Institute of C\u00e1vado and Ave"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5259-1891","authenticated-orcid":true,"given":"Sandro","family":"Queiros","sequence":"additional","affiliation":[{"name":"Universidade do Minho"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6703-3278","authenticated-orcid":true,"given":"Jeime","family":"Fonseca","sequence":"additional","affiliation":[{"name":"Universidade do Minho"}]}],"member":"48588","published-online":{"date-parts":[[2025,6,30]]},"reference":[{"key":"ref1","unstructured":"[1] Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; Bengio, Y. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, Canada, 8\u201313 December 2014; MIT Press: Cambridge, MA, USA, 2014; pp. 2672\u20132680."},{"key":"ref2","unstructured":"[2] Perez, L.; Wang, J. The effectiveness of data augmentation in image classification using deep learning. arXiv 2017, arXiv:1712.04621. https:\/\/doi.org\/10.48550\/arXiv.1712.04621."},{"key":"ref3","unstructured":"[3] Salimans, T.; Goodfellow, I.; Zaremba, W.; Cheung, V.; Radford, A.; Chen, X. Improved techniques for training GANs. In Proceedings of the 30th International Conference on Neural Information Processing Systems, Barcelona, Spain, 5\u201310 December 2016; Curran Associates Inc.: Red Hook, NY, USA, 2016; pp. 2234\u20132242."},{"key":"ref4","unstructured":"[4] Lin, Z.; Khetan, A.; Fanti, G.; Oh, S. PacGAN: The power of two samples in generative adversarial networks. In Proceedings of the 31st Conference on Neural Information Processing Systems, Montreal, Canada, 3\u20138 December 2018; Curran Associates Inc.: Red Hook, NY, USA, 2018; pp. 1498\u20131507."},{"key":"ref5","unstructured":"[5] Arjovsky, M.; Bottou, L. Towards principled methods for training generative adversarial networks. arXiv 2017, arXiv:1701.04862. https:\/\/doi.org\/10.48550\/arXiv.1701.04862."},{"key":"ref6","unstructured":"[6] Zhang, D.; Khoreva, A. PA-GAN: Improving GAN training by progressive augmentation. arXiv 2019, arXiv:1901.10422. https:\/\/doi.org\/10.48550\/arXiv.1901.10422."},{"key":"ref7","unstructured":"[7] LeCun, Y.; Cortes, C.; Burges, C.J. MNIST handwritten digit database. AT&T Labs. 2010. Available online: http:\/\/yann.lecun.com\/exdb\/mnist (accessed on 18 Dec 2024)."},{"key":"ref8","doi-asserted-by":"crossref","unstructured":"[8] Liu, Z.; Luo, P.; Wang, X.; Tang, X. Deep learning face attributes in the wild. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7\u201313 December 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 3730\u20133738. https:\/\/doi.org\/10.1109\/ICCV.2015.425.","DOI":"10.1109\/ICCV.2015.425"},{"key":"ref9","doi-asserted-by":"crossref","unstructured":"[9] Zhu, K.; Liu, X.; Yang, H. A survey of generative adversarial networks. In Proceedings of the Chinese Automation Congress, Xi'an, China, 30 November\u20132 December 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 2768\u20132773. https:\/\/doi.org\/10.1109\/CAC.2018.8623645.","DOI":"10.1109\/CAC.2018.8623645"},{"key":"ref10","doi-asserted-by":"crossref","unstructured":"[10] Yi, X.; Walia, E.; Babyn, P. Generative adversarial network in medical imaging: a review. Med. Image Anal. 2019, 58, 101552. https:\/\/doi.org\/10.1016\/j.media.2019.101552.","DOI":"10.1016\/j.media.2019.101552"},{"key":"ref11","doi-asserted-by":"crossref","unstructured":"[11] Turhan, C.G.; Bilge, H.S. Recent trends in deep generative models: a review. In Proceedings of the 3rd International Conference on Computer Science and Engineering, Sarajevo, Bosnia and Herzegovina, 20\u201323 September 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 574\u2013579. https:\/\/doi.org\/10.1109\/UBMK.2018.8566353.","DOI":"10.1109\/UBMK.2018.8566353"},{"key":"ref12","unstructured":"[12] Radford, A.; Metz, L.; Chintala, S. Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv 2016, arXiv:1511.06434. https:\/\/doi.org\/10.48550\/arXiv.1511.06434."},{"key":"ref13","unstructured":"[13] Karras, T.; Aila, T.; Laine, S.; Lehtinen, J. Progressive growing of GANs for improved quality, stability, and variation. arXiv 2018, arXiv:1710.10196. https:\/\/doi.org\/10.48550\/arXiv.1710.10196."},{"key":"ref14","doi-asserted-by":"crossref","unstructured":"[14] Karnewar, A.; Wang, O. MSG-GAN: Multi-scale gradients for generative adversarial networks. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 13\u201319 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 7799\u20137808. https:\/\/doi.org\/10.1109\/CVPR42600.2020.00782.","DOI":"10.1109\/CVPR42600.2020.00782"},{"key":"ref15","doi-asserted-by":"crossref","unstructured":"[15] Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5\u20139 October 2015; Springer: Cham, Switzerland, 2015; pp. 234\u2013241. https:\/\/doi.org\/10.1007\/978-3-319-24574-4_28.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref16","doi-asserted-by":"crossref","unstructured":"[16] Deepak, S.; Ameer, P.M. MSG-GAN based synthesis of brain MRI with meningioma for data augmentation. In Proceedings of the 2020 IEEE International Conference on Electronics, Computing and Communication Technologies, Bangalore, India, 2\u20134 July 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1\u20136. https:\/\/doi.org\/10.1109\/CONECCT50063.2020.9198672.","DOI":"10.1109\/CONECCT50063.2020.9198672"},{"key":"ref17","unstructured":"[17] Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; Courville, A.C. Improved training of Wasserstein GANs. In Proceedings of the 31st International Conference on Neural Information Processing Systems, California, USA, 4\u20139 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 5767\u20135777."},{"key":"ref18","unstructured":"[18] Mescheder, L.; Geiger, A.; Nowozin, S. Which training methods for GANs do actually converge? In Proceedings of the 35th International Conference on Machine Learning, Stockholm, Sweden, 10\u201315 July 2018; PMLR: 2018; pp. 3481\u20133490."},{"key":"ref19","doi-asserted-by":"crossref","unstructured":"[19] Karras, T.; Laine, S.; Aila, T. A style-based generator architecture for generative adversarial networks. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, California, USA, 15\u201320 June 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 4401\u20134410.","DOI":"10.1109\/CVPR.2019.00453"},{"key":"ref20","doi-asserted-by":"crossref","unstructured":"[20] Karras, T.; Laine, S.; Aittala, M.; Hellsten, J.; Lehtinen, J.; Aila, T. Analyzing and improving the image quality of StyleGAN. In Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Washington, USA, 13\u201319 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 8110\u20138119. https:\/\/doi.org\/10.1109\/CVPR42600.2020.00813.","DOI":"10.1109\/CVPR42600.2020.00813"},{"key":"ref21","unstructured":"[21] Karras, T.; Aittala, M.; Hellsten, J.; Laine, S.; Lehtinen, J.; Aila, T. Training generative adversarial networks with limited data. In Proceedings of the 34th International Conference on Neural Information Processing Systems, New York, USA, 6\u201312 December 2020; Curran Associates Inc.: Red Hook, NY, USA, 2020; pp. 12104\u201312114."},{"key":"ref22","doi-asserted-by":"crossref","unstructured":"[22] Dixe, S.; Leite, J.; Azadi, S.; Faria, P.; Mendes, J.; Fonseca, J.C.; Borges, J.; Queiros, S. In-car damage dirt and stain estimation with RGB images. In Proceedings of the 13th International Conference on Agents and Artificial Intelligence, Vienna, Austria, 4\u20136 February 2021; Scitepress: 2021; pp. 672\u2013679.","DOI":"10.5220\/0010228006720679"},{"key":"ref23","doi-asserted-by":"crossref","unstructured":"[23] Faria, P.; Dixe, S.; Leite, J.; Azadi, S.; Mendes, J.; Fonseca, J.C.; Borges, J.; Queiros, S. In-car state classification with RGB images. In Proceedings of the 20th International Conference on Intelligent Systems Design and Applications, [Location unknown], 12\u201315 December 2020; Springer: Cham, Switzerland, 2020; pp. 435\u2013445.","DOI":"10.1007\/978-3-030-71187-0_40"},{"key":"ref24","unstructured":"[24] Xu, Q.; Huang, G.; Yuan, Y.; Guo, C.; Sun, Y.; Wu, F.; Zhang, C.; Lin, D. An empirical study on evaluation metrics of generative adversarial networks. arXiv 2018, arXiv:1806.07755. https:\/\/doi.org\/10.48550\/arXiv.1806.07755."},{"key":"ref25","doi-asserted-by":"crossref","unstructured":"[25] Gretton, A.; Borgwardt, K.; Rasch, M.; Sch\u00f6lkopf, B.; Smola, A. A kernel method for the two-sample-problem. In Proceedings of the 20th International Conference on Neural Information Processing Systems, British Columbia, Canada, 4\u20139 December 2006; MIT Press: Cambridge, MA, USA, 2006; pp. 513\u2013520.","DOI":"10.7551\/mitpress\/7503.003.0069"},{"key":"ref26","unstructured":"[26] Lopez-Paz, D.; Oquab, M. Revisiting classifier two-sample tests. arXiv 2017, arXiv:1610.06545. https:\/\/doi.org\/10.48550\/arXiv.1610.06545."},{"key":"ref27","unstructured":"[27] Heusel, M.; Ramsauer, H.; Unterthiner, T.; Nessler, B.; Hochreiter, S. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Proceedings of the 31st International Conference on Neural Information Processing Systems, California, USA, 4\u20139 December 2017; Curran Associates Inc.: Red Hook, NY, USA, 2017; pp. 6629\u20136640."}],"container-title":["Computers and Informatics"],"original-title":[],"deposited":{"date-parts":[[2025,7,5]],"date-time":"2025-07-05T23:12:29Z","timestamp":1751757149000},"score":1,"resource":{"primary":{"URL":"http:\/\/dergipark.org.tr\/en\/doi\/10.62189\/ci.1261718"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,30]]},"references-count":27,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,6,30]]}},"URL":"https:\/\/doi.org\/10.62189\/ci.1261718","relation":{},"ISSN":["2757-8259"],"issn-type":[{"value":"2757-8259","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6,30]]}}}
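A record with this shape (Crossref's standard work-message schema: `message.DOI`, `message.title`, `message.author`, `message.issued.date-parts`) can be read with Python's standard `json` module. The sketch below uses an abbreviated copy of the fields shown above, not a live API call; the field names come from the record itself.

```python
import json

# Abbreviated copy of the Crossref work record above.
raw = """
{"status": "ok",
 "message-type": "work",
 "message": {
   "DOI": "10.62189/ci.1261718",
   "title": ["Automatic generation of in-vehicle images: StyleGAN-ADA vs. MSG-GAN"],
   "author": [
     {"given": "Sahar", "family": "Azadi", "sequence": "first"},
     {"given": "Sandra", "family": "Dixe", "sequence": "additional"}
   ],
   "issued": {"date-parts": [[2025, 6, 30]]}
 }}
"""

record = json.loads(raw)
work = record["message"]

# Crossref wraps titles in a list and encodes dates as [[year, month, day]].
doi = work["DOI"]
title = work["title"][0]
authors = [f'{a["given"]} {a["family"]}' for a in work["author"]]
year = work["issued"]["date-parts"][0][0]

print(doi)      # 10.62189/ci.1261718
print(authors)  # ['Sahar Azadi', 'Sandra Dixe']
print(year)     # 2025
```

The same accessors apply to the full record; only the number of `author` entries and the presence of optional fields (`abstract`, `reference`, ORCID links) differ.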