{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,1]],"date-time":"2026-02-01T20:30:15Z","timestamp":1769977815990,"version":"3.49.0"},"reference-count":32,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,2,15]],"date-time":"2024-02-15T00:00:00Z","timestamp":1707955200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,2,15]],"date-time":"2024-02-15T00:00:00Z","timestamp":1707955200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61972096, 61771140, 61872088, 61872090, 61902289"],"award-info":[{"award-number":["61972096, 61771140, 61872088, 61872090, 61902289"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61972096, 61771140, 61872088, 61872090, 61902289"],"award-info":[{"award-number":["61972096, 61771140, 61872088, 61872090, 61902289"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61972096, 61771140, 61872088, 61872090, 61902289"],"award-info":[{"award-number":["61972096, 61771140, 61872088, 61872090, 61902289"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61972096, 61771140, 61872088, 61872090, 61902289"],"award-info":[{"award-number":["61972096, 61771140, 61872088, 61872090, 
61902289"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61972096, 61771140, 61872088, 61872090, 61902289"],"award-info":[{"award-number":["61972096, 61771140, 61872088, 61872090, 61902289"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"University-Industry Cooperation of Fujian Province","award":["2022H6025"],"award-info":[{"award-number":["2022H6025"]}]},{"name":"University-Industry Cooperation of Fujian Province","award":["2022H6025"],"award-info":[{"award-number":["2022H6025"]}]},{"name":"University-Industry Cooperation of Fujian Province","award":["2022H6025"],"award-info":[{"award-number":["2022H6025"]}]},{"name":"University-Industry Cooperation of Fujian Province","award":["2022H6025"],"award-info":[{"award-number":["2022H6025"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Neural Process Lett"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Studies show that machine learning models trained from biased data can discriminate against groups with certain sensitive attributes. This problem can be mitigated by cleaning the original data or learning fair representations. However, collecting real data in real-life is extremely time and resource-consuming, whereas generative models (e.g., GANs) can create new data that enable more application scenarios. Therefore, utilizing fair data generated by generative models can benefit various downstream tasks. In this paper, we propose a information-minimizing generative adversarial network to improve the fairness of machine learning by generating fair data. 
An ANOVA-based latent factor is constructed in the input to reduce accuracy loss, and the joint adversarial training between the generator and classifier can better mitigate indirect discrimination and achieve fair classification. Extensive experiments in various settings demonstrate the effectiveness of the proposed method.<\/jats:p>","DOI":"10.1007\/s11063-024-11457-8","type":"journal-article","created":{"date-parts":[[2024,2,15]],"date-time":"2024-02-15T07:02:39Z","timestamp":1707980559000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":6,"title":["Information-Minimizing Generative Adversarial Network for Fair Generation and Classification"],"prefix":"10.1007","volume":"56","author":[{"given":"Qiuling","family":"Chen","sequence":"first","affiliation":[]},{"given":"Ayong","family":"Ye","sequence":"additional","affiliation":[]},{"given":"Yuexin","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Jianwei","family":"Chen","sequence":"additional","affiliation":[]},{"given":"Chuan","family":"Huang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,2,15]]},"reference":[{"issue":"6","key":"11457_CR1","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3457607","volume":"54","author":"N Mehrabi","year":"2022","unstructured":"Mehrabi N, Morstatter F, Saxena N, Lerman K, Galstyan A (2022) A survey on bias and fairness in machine learning. ACM Comput Surv 54(6):1\u201335","journal-title":"ACM Comput Surv"},{"key":"11457_CR2","unstructured":"Alabi D, Immorlica N, Kalai A (2018) Unleashing linear optimizers for group-fair learning and optimization. Conference on learning theory. PMLR, 2043\u20132066"},{"key":"11457_CR3","doi-asserted-by":"crossref","unstructured":"Algaba A, Mazijn C, Prunkl C, Danckaert J, Ginis V (2022) LUCID-GAN: conditional generative models to locate unfairness. 
Available at SSRN","DOI":"10.2139\/ssrn.4289597"},{"key":"11457_CR4","doi-asserted-by":"crossref","unstructured":"Lu Zhang, Yongkai Wu, Xintao Wu (2017) A causal framework for discovering and removing direct and indirect discrimination (IJCAI\u201917), pp 3929\u20133935","DOI":"10.24963\/ijcai.2017\/549"},{"key":"11457_CR5","unstructured":"Calmon F, Wei D, Vinzamuri B, Natesan Ramamurthy, K, Varshney KR (2017) Optimized pre-processing for discrimination prevention. Advances in neural information processing systems, pp 3992\u20134001"},{"key":"11457_CR6","unstructured":"Song C, Shmatikov V (2019) Overlearning reveals sensitive attributes, arXiv preprint arXiv:1905.11742"},{"key":"11457_CR7","unstructured":"Goodfellow IJ, Pouget-Abadie J, Mirza M et al. (2014) Generative adversarial nets. NIPS"},{"key":"11457_CR8","unstructured":"Brock A, Donahue J, Simonyan K (2018) Large scale GAN training for high fidelity natural image synthesis. International conference on learning representations, arXiv preprint arXiv:1809.11096"},{"key":"11457_CR9","first-page":"1877","volume":"33","author":"T Brown","year":"2020","unstructured":"Brown T, Mann B, Ryder N et al (2020) Language models are few-shot learners. Adv Neural Inf Process Syst 33:1877\u20131901","journal-title":"Adv Neural Inf Process Syst"},{"key":"11457_CR10","doi-asserted-by":"crossref","unstructured":"Xu D, Yuan S, Zhang L et al. (2018) Fairgan: fairness-aware generative adversarial networks. 2018 IEEE international conference on big data (big data). IEEE, pp 570\u2013575","DOI":"10.1109\/BigData.2018.8622525"},{"key":"11457_CR11","doi-asserted-by":"publisher","first-page":"1970","DOI":"10.1109\/TIFS.2022.3170265","volume":"17","author":"P Kairouz","year":"2022","unstructured":"Kairouz P, Liao J, Huang C et al (2022) Generating fair universal representations using adversarial models. 
IEEE Trans Inf Forensics Secur 17:1970\u20131985","journal-title":"IEEE Trans Inf Forensics Secur"},{"key":"11457_CR12","doi-asserted-by":"crossref","unstructured":"Xu D, Wu Y, Yuan S et al. (2019) Achieving causal fairness through generative adversarial networks. Proceedings of the 28th international joint conference on artificial intelligence","DOI":"10.24963\/ijcai.2019\/201"},{"key":"11457_CR13","doi-asserted-by":"publisher","first-page":"55592","DOI":"10.1109\/ACCESS.2020.2981912","volume":"8","author":"M Ngxande","year":"2020","unstructured":"Ngxande M, Tapamo JR, Burke M (2020) Bias remediation in driver drowsiness detection systems using generative adversarial networks. IEEE Access 8:55592\u201355601","journal-title":"IEEE Access"},{"issue":"4\/5","key":"11457_CR14","doi-asserted-by":"publisher","first-page":"3:1","DOI":"10.1147\/JRD.2019.2945519","volume":"63","author":"P Sattigeri","year":"2019","unstructured":"Sattigeri P, Hoffman SC, Chenthamarakshan V et al (2019) Fairness GAN: Generating datasets with fairness properties using a generative adversarial network. IBM J Res Dev 63(4\/5):3:1-3:9","journal-title":"IBM J Res Dev"},{"key":"11457_CR15","doi-asserted-by":"crossref","unstructured":"Adeli E, Zhao Q, Pfefferbaum A et al. (2021) Representation learning with statistical independence to mitigate bias. Proceedings of the IEEE\/CVF winter conference on applications of computer vision, pp 2513\u20132523","DOI":"10.1109\/WACV48630.2021.00256"},{"key":"11457_CR16","doi-asserted-by":"crossref","unstructured":"Xu D, Yuan S, Zhang L et al. (2019) Fairgan+: achieving fair data generation and classification through generative adversarial nets. 2019 IEEE international conference on big data (Big Data). IEEE, pp 1401\u20131406","DOI":"10.1109\/BigData47090.2019.9006322"},{"key":"11457_CR17","unstructured":"Zafar MB, Valera I, Rogriguez MG et al. (2017) Fairness constraints: mechanisms for fair classification. Artificial intelligence and statistics. 
PMLR, pp 962\u2013970"},{"key":"11457_CR18","unstructured":"Hardt M, Price E, Srebro N (2016) Equality of opportunity in supervised learning. Advances in neural information processing systems, p 29"},{"key":"11457_CR19","unstructured":"Beutel A, Chen J, Zhao Z et al. (2017) Data decisions and theoretical implications when adversarially learning fair representations, arXiv preprint arXiv: 1707.00075"},{"key":"11457_CR20","doi-asserted-by":"crossref","unstructured":"Zhang BH, Lemoine B, Mitchell M (2018) Mitigating unwanted biases with adversarial learning. Proceedings of the 2018 AAAI\/ACM conference on AI, ethics, and society, pp 335\u2013340","DOI":"10.1145\/3278721.3278779"},{"key":"11457_CR21","unstructured":"Abusitta A, Aimeur E, Abdel Wahab O (2020) Generative adversarial networks for mitigating biases in machine learning systems. ECAI 2020. IOS Press, pp 937\u2013944"},{"issue":"1","key":"11457_CR22","doi-asserted-by":"publisher","first-page":"32","DOI":"10.1145\/3468507.3468513","volume":"23","author":"P Delobelle","year":"2021","unstructured":"Delobelle P, Temple P, Perrouin G et al (2021) Ethical adversaries: towards mitigating unfairness with adversarial machine learning. ACM SIGKDD Explorations Newsl 23(1):32\u201341","journal-title":"ACM SIGKDD Explorations Newsl"},{"key":"11457_CR23","unstructured":"Dhar P, Gleason J, Souri H et al. (2020) Towards gender-neutral face descriptors for mitigating bias in face recognition, arXiv preprint arXiv: 2006.07845"},{"key":"11457_CR24","doi-asserted-by":"crossref","unstructured":"Han M, Wu J, Bashir AK et al. (2020) Adversarial learning-based bias mitigation for fatigue driving detection in fair-intelligent iov. GLOBECOM 2020-2020 IEEE global communications conference. IEEE, pp 1\u20136","DOI":"10.1109\/GLOBECOM42002.2020.9322194"},{"key":"11457_CR25","unstructured":"Odena A, Olah C, Shlens J. (2017) Conditional image synthesis with auxiliary classifier gans. International conference on machine learning. 
PMLR, pp 2642\u20132651"},{"key":"11457_CR26","unstructured":"Chen X, Duan Y, Houthooft R et al. (2016) Infogan: interpretable representation learning by information maximizing generative adversarial nets. Advances in neural information processing systems, p 29"},{"key":"11457_CR27","doi-asserted-by":"crossref","unstructured":"Feldman M, Friedler SA, Moeller J et al. (2015) Certifying and removing disparate impact. Proceedings of the 21th ACM SIGKDD international conference on knowledge discovery and data mining, pp 259\u2013268","DOI":"10.1145\/2783258.2783311"},{"key":"11457_CR28","unstructured":"Hardt M, Price E, Srebro N (2016) Equality of opportunity in supervised learning. Advances in neural information processing systems, p 29"},{"key":"11457_CR29","unstructured":"Jiang H, Nachum O (2020) Identifying and correcting label bias in machine learning. In international conference on artificial intelligence and statistics. PMLR, pp 702\u2013712"},{"issue":"6","key":"11457_CR30","doi-asserted-by":"publisher","first-page":"1638","DOI":"10.2307\/1939922","volume":"74","author":"RG Shaw","year":"1993","unstructured":"Shaw RG, Mitchell-Olds T (1993) ANOVA for unbalanced data: an overview. Ecology 74(6):1638\u20131645","journal-title":"Ecology"},{"key":"11457_CR31","volume-title":"UCI machine learning repository","author":"Dua Dheeru and Efi Karra Taniskidou","year":"2017","unstructured":"Dua Dheeru and Efi Karra Taniskidou (2017) UCI machine learning repository. University of California, Irvine"},{"key":"11457_CR32","doi-asserted-by":"crossref","unstructured":"Zafar MB, Valera I, Gomez RM et al. (2017) Fairness beyond disparate treatment and disparate impact: Learning classification without disparate mistreatment. Proceedings of the 26th international conference on World Wide Web. 
International World Wide Web conferences steering committee, pp 1171\u20131180","DOI":"10.1145\/3038912.3052660"}],"container-title":["Neural Processing Letters"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11063-024-11457-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11063-024-11457-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11063-024-11457-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,2,29]],"date-time":"2024-02-29T20:16:04Z","timestamp":1709237764000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11063-024-11457-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,2,15]]},"references-count":32,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2024,2]]}},"alternative-id":["11457"],"URL":"https:\/\/doi.org\/10.1007\/s11063-024-11457-8","relation":{},"ISSN":["1573-773X"],"issn-type":[{"value":"1573-773X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,2,15]]},"assertion":[{"value":"27 November 2023","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 February 2024","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors have no competing interests to declare that are relevant to the content of the article.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}},{"value":"Not 
Applicable","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethical Approval"}}],"article-number":"36"}}