{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,19]],"date-time":"2026-02-19T17:52:23Z","timestamp":1771523543074,"version":"3.50.1"},"reference-count":56,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2023,2,14]],"date-time":"2023-02-14T00:00:00Z","timestamp":1676332800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Discovery Grants from NSERC"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Mathematics"],"abstract":"<jats:p>Generative Adversarial Networks (GANs) have been used for many applications with overwhelming success. The training process of these models is complex, involving a zero-sum game between two neural networks trained in an adversarial manner. Thus, to use GANs, researchers and developers need to answer the question: \u201cIs the GAN sufficiently trained?\u201d. However, understanding when a GAN is well trained for a given problem is a challenging and laborious task that usually requires monitoring the training process and human intervention for assessing the quality of the GAN generated outcomes. Currently, there is no automatic mechanism for determining the required number of epochs that correspond to a well-trained GAN, allowing the training process to be safely stopped. In this paper, we propose AutoGAN, an algorithm that allows one to answer this question in a fully automatic manner with minimal human intervention, being applicable to different data modalities including imagery and tabular data. Through an extensive set of experiments, we show the clear advantage of our solution when compared against alternative methods, for a task where the GAN outputs are used as an oversampling method. 
Moreover, we show that AutoGAN not only determines a good stopping point for training the GAN, but it also allows one to run fewer training epochs to achieve a similar or better performance with the GAN outputs.<\/jats:p>","DOI":"10.3390\/math11040977","type":"journal-article","created":{"date-parts":[[2023,2,15]],"date-time":"2023-02-15T04:47:24Z","timestamp":1676436444000},"page":"977","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["AutoGAN: An Automated Human-Out-of-the-Loop Approach for Training Generative Adversarial Networks"],"prefix":"10.3390","volume":"11","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-6415-4610","authenticated-orcid":false,"given":"Ehsan","family":"Nazari","sequence":"first","affiliation":[{"name":"School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON K1N 6N5, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9917-3694","authenticated-orcid":false,"given":"Paula","family":"Branco","sequence":"additional","affiliation":[{"name":"School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON K1N 6N5, Canada"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6067-6545","authenticated-orcid":false,"given":"Guy-Vincent","family":"Jourdan","sequence":"additional","affiliation":[{"name":"School of Electrical Engineering and Computer Science, University of Ottawa, Ottawa, ON K1N 6N5, Canada"}]}],"member":"1968","published-online":{"date-parts":[[2023,2,14]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Borji, A. (2018). Pros and Cons of GAN Evaluation Measures. arXiv.","DOI":"10.1016\/j.cviu.2018.10.009"},{"key":"ref_2","unstructured":"Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative Adversarial Networks. arXiv."},{"key":"ref_3","unstructured":"Brock, A., Donahue, J., and Simonyan, K. (2018). 
Large Scale GAN Training for High Fidelity Natural Image Synthesis. arXiv."},{"key":"ref_4","unstructured":"Karras, T., Aila, T., Laine, S., and Lehtinen, J. (2017). Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Karras, T., Laine, S., and Aila, T. (2018). A Style-Based Generator Architecture for Generative Adversarial Networks. arXiv.","DOI":"10.1109\/CVPR.2019.00453"},{"key":"ref_6","unstructured":"Karras, T., Aittala, M., Laine, S., H\u00e4rk\u00f6nen, E., Hellsten, J., Lehtinen, J., and Aila, T. (2021). Alias-Free Generative Adversarial Networks. arXiv."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., and Aila, T. (2019). Analyzing and Improving the Image Quality of StyleGAN. arXiv.","DOI":"10.1109\/CVPR42600.2020.00813"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T., and Efros, A.A. (2016). Image-to-Image Translation with Conditional Adversarial Networks. arXiv.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Zhu, J.Y., Park, T., Isola, P., and Efros, A.A. (2017). Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks. arXiv.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"391","DOI":"10.1109\/TMM.2020.2975961","article-title":"SPA-GAN: Spatial Attention GAN for Image-to-Image Translation","volume":"23","author":"Emami","year":"2021","journal-title":"IEEE Trans. Multimed."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Ledig, C., Theis, L., Huszar, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., and Wang, Z. (2016). Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. 
arXiv.","DOI":"10.1109\/CVPR.2017.19"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Tulyakov, S., Liu, M.Y., Yang, X., and Kautz, J. (2017). MoCoGAN: Decomposing Motion and Content for Video Generation. arXiv.","DOI":"10.1109\/CVPR.2018.00165"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Munoz, A., Zolfaghari, M., Argus, M., and Brox, T. (2020). Temporal Shift GAN for Large Scale Video Generation. arXiv.","DOI":"10.1109\/WACV48630.2021.00322"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Dong, H.W., Hsiao, W.Y., Yang, L.C., and Yang, Y.H. (2017). MuseGAN: Multi-track Sequential Generative Adversarial Networks for Symbolic Music Generation and Accompaniment. arXiv.","DOI":"10.1609\/aaai.v32i1.11312"},{"key":"ref_15","unstructured":"Bojchevski, A., Shchur, O., Z\u00fcgner, D., and G\u00fcnnemann, S. (2018). NetGAN: Generating Graphs via Random Walks. arXiv."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Guo, J., Lu, S., Cai, H., Zhang, W., Yu, Y., and Wang, J. (2017). Long Text Generation via Adversarial Training with Leaked Information. arXiv.","DOI":"10.1609\/aaai.v32i1.11957"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"1071","DOI":"10.14778\/3231751.3231757","article-title":"Data synthesis based on generative adversarial networks","volume":"11","author":"Park","year":"2018","journal-title":"Proc. VLDB Endow."},{"key":"ref_18","unstructured":"Nazari, E., and Branco, P. (2021, January 17). On Oversampling via Generative Adversarial Networks under Different Data Difficult Factors. Proceedings of the International Workshop on Learning with Imbalanced Domains: Theory and Applications, Online. PMLR 2021."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Nazari, E., Branco, P., and Jourdan, G.V. (2021, January 13\u201315). Using CGAN to Deal with Class Imbalance and Small Sample Size in Cybersecurity Problems. 
Proceedings of the 2021 18th International Conference on Privacy, Security and Trust (PST), Auckland, New Zealand.","DOI":"10.1109\/PST52912.2021.9647807"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Pennisi, M., Palazzo, S., and Spampinato, C. (2021, January 11\u201317). Self-improving classification performance through GAN distillation. Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada.","DOI":"10.1109\/ICCVW54120.2021.00189"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"11381","DOI":"10.1007\/s00500-019-04602-2","article-title":"Data augmentation using MG-GAN for improved cancer classification on gene expression data","volume":"24","author":"Chaudhari","year":"2020","journal-title":"Soft Comput."},{"key":"ref_22","unstructured":"Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (2018). Advances in Neural Information Processing Systems, Curran Associates, Inc."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Gupta, A., Johnson, J., Fei-Fei, L., Savarese, S., and Alahi, A. (2018). Social GAN: Socially Acceptable Trajectories with Generative Adversarial Networks. arXiv.","DOI":"10.1109\/CVPR.2018.00240"},{"key":"ref_24","unstructured":"Saxena, D., and Cao, J. (2019). D-GAN: Deep Generative Adversarial Nets for Spatio-Temporal Prediction. arXiv."},{"key":"ref_25","unstructured":"Mescheder, L., Geiger, A., and Nowozin, S. (2018). Which Training Methods for GANs do actually Converge?. arXiv."},{"key":"ref_26","unstructured":"Daskalakis, C., Ilyas, A., Syrgkanis, V., and Zeng, H. (2017). Training GANs with Optimism. arXiv."},{"key":"ref_27","unstructured":"Goodfellow, I. (2017). NIPS 2016 Tutorial: Generative Adversarial Networks. arXiv."},{"key":"ref_28","unstructured":"Zhou, S., Gordon, M.L., Krishna, R., Narcomey, A., Fei-Fei, L., and Bernstein, M.S. (2019). 
HYPE: A Benchmark for Human eYe Perceptual Evaluation of Generative Models. arXiv."},{"key":"ref_29","unstructured":"Lee, D., Sugiyama, M., Luxburg, U., Guyon, I., and Garnett, R. (2016). Advances in Neural Information Processing Systems, Curran Associates, Inc."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Borji, A. (2021). Pros and Cons of GAN Evaluation Measures: New Developments. arXiv.","DOI":"10.1016\/j.cviu.2021.103329"},{"key":"ref_31","unstructured":"Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. (2016). Improved Techniques for Training GANs. arXiv."},{"key":"ref_32","unstructured":"Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. (2017). GANs Trained by a Two Time-Scale Update Rule Converge to a Local Nash Equilibrium. arXiv."},{"key":"ref_33","unstructured":"Ravuri, S., and Vinyals, O. (2019). Classification Accuracy Score for Conditional Generative Models. arXiv."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Shmelkov, K., Schmid, C., and Alahari, K. How good is my GAN? In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8\u201314 September 2018.","DOI":"10.1007\/978-3-030-01216-8_14"},{"key":"ref_35","unstructured":"Arjovsky, M., Chintala, S., and Bottou, L. (2017). Wasserstein GAN. arXiv."},{"key":"ref_36","unstructured":"Papadimitriou, C.H. (1994). Computational Complexity, Addison-Wesley."},{"key":"ref_37","unstructured":"Mirza, M., and Osindero, S. (2014). Conditional Generative Adversarial Nets. arXiv."},{"key":"ref_38","unstructured":"Kullback, S. (1997). Information Theory and Statistics, Courier Corporation."},{"key":"ref_39","unstructured":"Ramdas, A., Garcia, N., and Cuturi, M. (2015). On Wasserstein Two Sample Testing and Related Families of Nonparametric Tests. arXiv."},{"key":"ref_40","unstructured":"Barratt, S., and Sharma, R. (2018). A Note on the Inception Score. 
arXiv."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Silhavy, R., Silhavy, P., and Prokopova, Z. (2020). Software Engineering Perspectives in Intelligent Systems, Springer International Publishing.","DOI":"10.1007\/978-3-030-63319-6"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Tan, M., Chen, B., Pang, R., Vasudevan, V., Sandler, M., Howard, A., and Le, Q.V. (2019, January 16\u201317). Mnasnet: Platform-aware neural architecture search for mobile. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00293"},{"key":"ref_43","unstructured":"Fu, Y., Chen, W., Wang, H., Li, H., Lin, Y., and Wang, Z. (2020). Autogan-distiller: Searching to compress generative adversarial networks. arXiv."},{"key":"ref_44","unstructured":"Wang, H., and Huan, J. (2019). Agan: Towards automated design of generative adversarial networks. arXiv."},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Gong, X., Chang, S., Jiang, Y., and Wang, Z. (2019, January 27\u201328). Autogan: Neural architecture search for generative adversarial networks. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea.","DOI":"10.1109\/ICCV.2019.00332"},{"key":"ref_46","unstructured":"Morozov, S., Voynov, A., and Babenko, A. (2021, January 3\u20137). On Self-Supervised Image Representations for GAN Evaluation. Proceedings of the International Conference on Learning Representations, Virtual Event."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.Y., Zhou, T., and Efros, A.A. (2017, January 21\u201326). Image-To-Image Translation with Conditional Adversarial Networks. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_48","unstructured":"Guyon, I., Luxburg, U.V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (2017). Advances in Neural Information Processing Systems, Curran Associates, Inc."},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Habibi Lashkari, A., Draper Gil, G., Mamun, M.S.I., and Ghorbani, A.A. (2017, January 19\u201321). Characterization of Tor Traffic using Time based Features. Proceedings of the 3rd International Conference on Information Systems Security and Privacy\u2014ICISSP, Porto, Portugal. INSTICC.","DOI":"10.5220\/0006105602530262"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Mahdavifar, S., Abdul Kadir, A.F., Fatemi, R., Alhadidi, D., and Ghorbani, A.A. (2020, January 17\u201322). Dynamic Android Malware Category Classification using Semi-Supervised Deep Learning. Proceedings of the 2020 IEEE Intl Conf on Dependable, Autonomic and Secure Computing, Intl Conf on Pervasive Intelligence and Computing, Intl Conf on Cloud and Big Data Computing, Intl Conf on Cyber Science and Technology Congress (DASC\/PiCom\/CBDCom\/CyberSciTech), Online Event.","DOI":"10.1109\/DASC-PICom-CBDCom-CyberSciTech49142.2020.00094"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Chen, J., Piuri, V., Su, C., and Yung, M. (2016). Network and System Security, Springer International Publishing.","DOI":"10.1007\/978-3-319-46298-1"},{"key":"ref_52","doi-asserted-by":"crossref","unstructured":"MontazeriShatoori, M., Davidson, L., Kaur, G., and Habibi Lashkari, A. (2020, January 17\u201322). Detection of DoH Tunnels using Time-series Classification of Encrypted Traffic. 
Proceedings of the 2020 IEEE DASC\/PiCom\/CBDCom\/CyberSciTech, Online Event.","DOI":"10.1109\/DASC-PICom-CBDCom-CyberSciTech49142.2020.00026"},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"141","DOI":"10.1109\/MSP.2012.2211477","article-title":"The mnist database of handwritten digit images for machine learning research","volume":"29","author":"Deng","year":"2012","journal-title":"IEEE Signal Process. Mag."},{"key":"ref_54","unstructured":"Xiao, H., Rasul, K., and Vollgraf, R. (2017). Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv."},{"key":"ref_55","unstructured":"Clanuwat, T., Bober-Irizar, M., Kitamoto, A., Lamb, A., Yamamoto, K., and Ha, D. (2018). Deep learning for classical japanese literature. arXiv."},{"key":"ref_56","unstructured":"Krizhevsky, A. (2009). Learning Multiple Layers of Features from Tiny Images, Computer Science-University of Toronto. Technical Report."}],"container-title":["Mathematics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2227-7390\/11\/4\/977\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T18:35:18Z","timestamp":1760121318000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2227-7390\/11\/4\/977"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,2,14]]},"references-count":56,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2023,2]]}},"alternative-id":["math11040977"],"URL":"https:\/\/doi.org\/10.3390\/math11040977","relation":{},"ISSN":["2227-7390"],"issn-type":[{"value":"2227-7390","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,2,14]]}}}