{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,19]],"date-time":"2026-03-19T04:20:12Z","timestamp":1773894012698,"version":"3.50.1"},"reference-count":62,"publisher":"Wiley","issue":"4","license":[{"start":{"date-parts":[[2026,3,12]],"date-time":"2026-03-12T00:00:00Z","timestamp":1773273600000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"},{"start":{"date-parts":[[2026,3,12]],"date-time":"2026-03-12T00:00:00Z","timestamp":1773273600000},"content-version":"tdm","delay-in-days":0,"URL":"http:\/\/doi.wiley.com\/10.1002\/tdm_license_1.1"}],"funder":[{"DOI":"10.13039\/100009092","name":"Universidad de Alicante","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100009092","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003359","name":"Generalitat Valenciana","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100003359","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100031478","name":"NextGenerationEU","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100031478","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["Expert Systems"],"published-print":{"date-parts":[[2026,4]]},"abstract":"<jats:title>ABSTRACT<\/jats:title>\n                  <jats:p>Recent advances in text\u2010to\u2010image generation have enabled generative models to produce realistic visuals from textual descriptions, transforming creative workflows in domains like fashion. However, these systems may encode and reproduce societal biases, particularly in gender representation. This study proposes a systematic and interpretable methodology for analysing gender bias in text\u2010to\u2010image generation models. 
The framework is model\u2010agnostic and applicable to any generative system, combining quantitative evaluation with interpretable analysis. Our proposal is structured in two main components: (1) the creation of a controlled corpus; and (2) the evaluation of the generated outputs through manual annotations and three complementary analyses: (i) model neutrality, assessing gender balance under neutral prompts; (ii) model accuracy, measuring adherence to gendered instructions; and (iii) interpretable pattern discovery, uncovering the semantic attributes that drive gendered generations via decision tree modelling. Concretely, we focus on the fashion domain and employ Stable Diffusion as a representative state\u2010of\u2010the\u2010art text\u2010to\u2010image model, given the relevance of fashion and the scarcity of resources addressing bias in this field. To this end, we build a controlled corpus of 300 fashion\u2010related descriptions, each adapted into neutral, male and female versions. Empirically, experiments show that Stable Diffusion exhibits significant gender imbalances when generating images from neutral prompts, associating traditionally masculine outfits with male figures and traditionally feminine outfits with female figures. 
Theoretically, this methodology offers a reproducible approach for detecting and interpreting bias in multimodal generative models, and the resources created in this research are publicly available to the scientific community, contributing to the development of fairer and more transparent AI systems.<\/jats:p>","DOI":"10.1111\/exsy.70232","type":"journal-article","created":{"date-parts":[[2026,3,12]],"date-time":"2026-03-12T12:02:02Z","timestamp":1773316922000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Understanding Gender Bias in Text\u2010to\u2010Image Models Through Quantitative and Interpretable Analysis: A Fashion Case Study"],"prefix":"10.1111","volume":"43","author":[{"ORCID":"https:\/\/orcid.org\/0009-0005-0091-7884","authenticated-orcid":false,"given":"Mar\u00eda","family":"Villalba\u2010Os\u00e9s","sequence":"first","affiliation":[{"name":"University Institute of Computer Research (IUII), University of Alicante  Alicante Spain"}]},{"given":"Juan Pablo","family":"Consuegra\u2010Ayala","sequence":"additional","affiliation":[{"name":"Department of Language and Computing Systems University of Alicante  Alicante Spain"}]},{"given":"Manuel","family":"Palomar","sequence":"additional","affiliation":[{"name":"Digital Intelligence Center (CENID), University of Alicante  Alicante Spain"}]}],"member":"311","published-online":{"date-parts":[[2026,3,12]]},"reference":[{"key":"e_1_2_10_2_1","doi-asserted-by":"publisher","DOI":"10.1177\/1478077118800982"},{"key":"e_1_2_10_3_1","unstructured":"Barve S. A.Mao J. M.Shi P.Juneja andK.Saha.2025.\u201cCan We Debias Social Stereotypes in AI\u2010Generated Images? 
Examining Text\u2010to\u2010Image Outputs and User Perceptions.\u201darXiv:2505.20692.https:\/\/arxiv.org\/abs\/2505.20692."},{"key":"e_1_2_10_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3593013.3594095"},{"key":"e_1_2_10_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3600211.3604722"},{"key":"e_1_2_10_6_1","doi-asserted-by":"publisher","DOI":"10.52202\/075280-0930"},{"key":"e_1_2_10_7_1","unstructured":"Birhane A. V. U.Prabhu andE.Kahembwe.2021.\u201cMultimodal Datasets: Misogyny Pornography and Malignant Stereotypes.\u201darXiv:2110.01963.https:\/\/arxiv.org\/abs\/2110.01963."},{"key":"e_1_2_10_8_1","first-page":"9","volume-title":"Visualising Gender Bias: The Use of Visual Analysis to Examine Fashion Images of Women","author":"Blanchard\u2010Emmerson J.","year":"2024"},{"key":"e_1_2_10_9_1","unstructured":"Chen M. Y.Liu J.Yi et\u00a0al.2024.\u201cEvaluating Text\u2010to\u2010Image Generative Models: An Empirical Study on Human Image Synthesis.\u201darXiv Preprint arXiv:2403.05125."},{"key":"e_1_2_10_10_1","first-page":"3043","volume-title":"Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV)","author":"Cho J.","year":"2023"},{"key":"e_1_2_10_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3715336.3735749"},{"key":"e_1_2_10_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2024.3419159"},{"key":"e_1_2_10_13_1","unstructured":"Ding Y.2023.\u201cDeconstructing Beauty: Using AI to Highlight Bias in the Fashion Industry.\u201dPh.D. 
thesis.https:\/\/www.proquest.com\/dissertations\u2010theses\/deconstructing\u2010beauty\u2010using\u2010ai\u2010highlight\u2010bias\/docview\/2885425668\/se\u20102."},{"key":"e_1_2_10_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2025.3585745"},{"key":"e_1_2_10_15_1","first-page":"6957","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Garcia N.","year":"2023"},{"key":"e_1_2_10_16_1","unstructured":"Girrbach L. S.Alaniz G.Smith andZ.Akata.2025.\u201cA Large Scale Analysis of Gender Biases in Text\u2010to\u2010Image Generative Models.\u201darXiv:2503.23398.https:\/\/arxiv.org\/abs\/2503.23398."},{"key":"e_1_2_10_17_1","doi-asserted-by":"publisher","DOI":"10.4324\/9781032646930-18"},{"key":"e_1_2_10_18_1","doi-asserted-by":"publisher","DOI":"10.4324\/9781032646930-18"},{"key":"e_1_2_10_19_1","doi-asserted-by":"publisher","DOI":"10.1177\/00405175251328296"},{"key":"e_1_2_10_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3528223.3530104"},{"key":"e_1_2_10_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/3715275.3732169"},{"key":"e_1_2_10_22_1","first-page":"2611","volume-title":"Advances in Neural Information Processing Systems","author":"Kirk H. R.","year":"2021"},{"key":"e_1_2_10_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3582269.3615599"},{"key":"e_1_2_10_24_1","unstructured":"Kumar C. V. A.Urlana G.Kanumolu B. M.Garlapati andP.Mishra.2025.\u201cNo Llm Is Free From Bias: A Comprehensive Study of Bias Evaluation in Large Language Models.\u201darXiv:2503.11985.https:\/\/arxiv.org\/abs\/2503.11985."},{"key":"e_1_2_10_25_1","unstructured":"Lai C. H. Y.Song D.Kim Y.Mitsufuji andS.Ermon.2025.\u201cThe Principles of Diffusion Models.\u201darXiv:2510.21890.https:\/\/arxiv.org\/abs\/2510.21890."},{"key":"e_1_2_10_26_1","unstructured":"Li M. 
H.Chen Y.Wang et\u00a0al.2025.\u201cUnderstanding and Mitigating the Bias Inheritance in Llm\u2010Based Data Augmentation on Downstream Tasks.\u201darXiv:2502.04419.https:\/\/arxiv.org\/abs\/2502.04419."},{"key":"e_1_2_10_27_1","doi-asserted-by":"publisher","DOI":"10.24963\/ijcai.2025\/163"},{"key":"e_1_2_10_28_1","first-page":"6565","volume-title":"Proceedings of the 38th International Conference on Machine Learning","author":"Liang P. P.","year":"2021"},{"key":"e_1_2_10_29_1","doi-asserted-by":"publisher","DOI":"10.3390\/digital4010013"},{"key":"e_1_2_10_30_1","unstructured":"Lin A. L. M.Paes S. H.Tanneru S.Srinivas andH.Lakkaraju.2023.\u201cWordlevel Explanations for Analyzing Bias in Text\u2010to\u2010Image Models.\u201darXiv:2306.05500.https:\/\/arxiv.org\/abs\/2306.05500."},{"key":"e_1_2_10_31_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v39i26.34961"},{"key":"e_1_2_10_32_1","unstructured":"Luccioni A. S. C.Akiki M.Mitchell andY.Jernite.2023.\u201cStable Bias: Analyzing Societal Representations in Diffusion Models.\u201darXiv:2303.11408.https:\/\/arxiv.org\/abs\/2303.11408."},{"key":"e_1_2_10_33_1","doi-asserted-by":"publisher","DOI":"10.3390\/systems13040264"},{"key":"e_1_2_10_34_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00146-024-02151-3"},{"key":"e_1_2_10_35_1","unstructured":"Mandal A. S.Leavy andS.Little.2023.\u201cMultimodal Composite Association Score: Measuring Gender Bias in Generative Multimodal Models.\u201darXiv:2304.13855.https:\/\/arxiv.org\/abs\/2304.13855."},{"key":"e_1_2_10_36_1","unstructured":"Mannering H.2023.\u201cAnalysing Gender Bias in Text\u2010to\u2010Image Models Using Object Detection.\u201darXiv:2307.08025.https:\/\/arxiv.org\/abs\/2307.08025."},{"key":"e_1_2_10_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/3600211.3604711"},{"key":"e_1_2_10_38_1","doi-asserted-by":"crossref","unstructured":"Navigli R. 
S.Conia andB.Ross.2023.\u201cBiases in Large Language Models: Origins Inventory and Discussion 15.\u201dhttps:\/\/doi.org\/10.1145\/3597307.","DOI":"10.1145\/3597307"},{"key":"e_1_2_10_39_1","unstructured":"OpenAI.2024.\u201cGpt\u20104 Technical Report.\u201darXiv:2303.08774.https:\/\/arxiv.org\/abs\/2303.08774."},{"key":"e_1_2_10_40_1","doi-asserted-by":"crossref","unstructured":"Patil D.2024.\u201cGenerative Artificial Intelligence in Marketing and Advertising: Advancing Personalization and Optimizing Consumer Engagement Strategies.\u201dSSRN Electronic Journal.https:\/\/doi.org\/10.2139\/ssrn.5057404.","DOI":"10.2139\/ssrn.5057404"},{"key":"e_1_2_10_41_1","unstructured":"Ramesh A. P.Dhariwal A.Nichol C.Chu andM.Chen.2022.\u201cHierarchical Text\u2010Conditional Image Generation With Clip Latents.\u201darXiv preprint arXiv:2204.06125 1 3."},{"key":"e_1_2_10_42_1","first-page":"10684","volume-title":"Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR)","author":"Rombach R.","year":"2022"},{"key":"e_1_2_10_43_1","unstructured":"Rosenbaum J. E.2024.\u201cAI Perceptions of Gender.\u201dPh.D. thesis RMIT University.https:\/\/research\u2010repository.rmit.edu.au\/articles\/thesis\/AI_perceptions_of_gender\/27597489."},{"key":"e_1_2_10_44_1","unstructured":"R\u00f6ttger P. M.Hinck V.Hofmann et\u00a0al.2025.\u201cIssuebench: Millions of Realistic Prompts for Measuring Issue Bias in Llm Writing Assistance.\u201darXiv:2502.08395.https:\/\/arxiv.org\/abs\/2502.08395."},{"key":"e_1_2_10_45_1","doi-asserted-by":"publisher","DOI":"10.1162\/tacl_a_00401"},{"key":"e_1_2_10_46_1","first-page":"25278","article-title":"Laion\u20105b: An Open Large\u2010Scale Dataset for Training Next Generation Image\u2010Text Models","volume":"35","author":"Schuhmann C.","year":"2022","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_10_47_1","unstructured":"Seshadri P. 
S.Singh andY.Elazar.2023.\u201cThe Bias Amplification Paradox in Text\u2010to\u2010Image Generation.\u201darXiv:2308.00755.https:\/\/arxiv.org\/abs\/2308.00755."},{"key":"e_1_2_10_48_1","unstructured":"Solaiman I. Z.Talat W.Agnew et\u00a0al.2024.\u201cEvaluating the Social Impact of Generative AI Systems in Systems and Society.\u201darXiv:2306.05949.https:\/\/arxiv.org\/abs\/2306.05949."},{"key":"e_1_2_10_49_1","doi-asserted-by":"publisher","DOI":"10.1093\/jcmc\/zmad045"},{"key":"e_1_2_10_50_1","unstructured":"Sun T. A.Gaut S.Tang et\u00a0al.2019.\u201cMitigating Gender Bias in Natural Language Processing: Literature Review.\u201darXiv:1906.08976.https:\/\/arxiv.org\/abs\/1906.08976."},{"key":"e_1_2_10_51_1","doi-asserted-by":"crossref","unstructured":"Ungless E. L. B.Ross andA.Lauscher.2023.\u201cStereotypes and Smut: The (Mis)Representation of Non\u2010Cisgender Identities by Text\u2010to\u2010Image Models.\u201darXiv:2305.17072.https:\/\/arxiv.org\/abs\/2305.17072.","DOI":"10.18653\/v1\/2023.findings-acl.502"},{"key":"e_1_2_10_52_1","doi-asserted-by":"publisher","DOI":"10.3390\/journalmedia6030110"},{"key":"e_1_2_10_53_1","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2025.3572115"},{"key":"e_1_2_10_54_1","doi-asserted-by":"publisher","DOI":"10.26615\/978-954-452-098-4-155"},{"key":"e_1_2_10_55_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2021.emnlp-main.151"},{"key":"e_1_2_10_56_1","doi-asserted-by":"publisher","DOI":"10.18653\/v1\/2025.latechclfl-1.3"},{"key":"e_1_2_10_57_1","first-page":"800","volume-title":"Proceedings of the 2022 AAAI\/ACM Conference on AI, Ethics, and Society, Association for Computing Machinery","author":"Wolfe R.","year":"2022"},{"key":"e_1_2_10_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/3531146.3533183"},{"key":"e_1_2_10_59_1","doi-asserted-by":"crossref","unstructured":"Wu Y. 
Y.Nakashima andN.Garcia.2023.\u201cStable Diffusion Exposed: Gender Bias From Prompt to Image.\u201darXiv preprint arXiv:2312.03027.","DOI":"10.1609\/aies.v7i1.31754"},{"key":"e_1_2_10_60_1","doi-asserted-by":"publisher","DOI":"10.3390\/jimaging11020035"},{"key":"e_1_2_10_61_1","doi-asserted-by":"crossref","unstructured":"Yang X. R.Zhan D. F.Wong S.Yang J.Wu andL. S.Chao.2025.\u201cRethinking Prompt\u2010Based Debiasing in Large Language Models.\u201darXiv:2503.09219.https:\/\/arxiv.org\/abs\/2503.09219.","DOI":"10.18653\/v1\/2025.findings-acl.1361"},{"key":"e_1_2_10_62_1","doi-asserted-by":"publisher","DOI":"10.1145\/3689904.3694710"},{"key":"e_1_2_10_63_1","unstructured":"Zhou M. V.Abhishek T.Derdenger J.Kim andK.Srinivasan.2024.\u201cBias in Generative AI.\u201darXiv:2403.02726.https:\/\/arxiv.org\/abs\/2403.02726."}],"container-title":["Expert Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1111\/exsy.70232","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/full-xml\/10.1111\/exsy.70232","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1111\/exsy.70232","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,19]],"date-time":"2026-03-19T02:01:30Z","timestamp":1773885690000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1111\/exsy.70232"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,12]]},"references-count":62,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2026,4]]}},"alternative-id":["10.1111\/exsy.70232"],"URL":"https:\/\/doi.org\/10.1111\/exsy.70232","archive":["Portico"],"relation":{},"ISSN":["0266-4720","1468-0394"],"issn-type":[{"value":"0266-4720","type"
:"print"},{"value":"1468-0394","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,3,12]]},"assertion":[{"value":"2025-11-10","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2026-02-14","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2026-03-12","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"e70232"}}