{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,22]],"date-time":"2026-01-22T20:07:15Z","timestamp":1769112435115,"version":"3.49.0"},"reference-count":39,"publisher":"MDPI AG","issue":"10","license":[{"start":{"date-parts":[[2023,10,9]],"date-time":"2023-10-09T00:00:00Z","timestamp":1696809600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Qatar National Library"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Information"],"abstract":"<jats:p>Text-to-image synthesis is one of the most critical and challenging problems of generative modeling. It is of substantial importance in the area of automatic learning, especially for image creation, modification, analysis and optimization. A number of works have been proposed in the past to achieve this goal; however, current methods still lack scene understanding, especially when it comes to synthesizing coherent structures in complex scenes. In this work, we propose a model, called CapGAN, to synthesize images from a single text statement and resolve the problem of globally coherent structures in complex scenes. For this purpose, skip-thought vectors are used to encode the given text into a vector representation. This encoded vector is used as an input for image synthesis using an adversarial process, in which two models are trained simultaneously, namely a generator (G) and a discriminator (D). The model G generates fake images, while the model D tries to predict whether a sample comes from the training data or was generated by G. The conceptual novelty of this work lies in integrating capsules at the discriminator level to make the model understand the orientational and relative spatial relationships between different entities of an object in an image. 
The inception score (IS) and the Fr\u00e9chet inception distance (FID) are used as quantitative evaluation metrics for CapGAN. The IS recorded for images generated using CapGAN is 4.05 \u00b1 0.050, which is around 34% higher than that of images synthesized using traditional GANs, whereas the FID score calculated for images synthesized using CapGAN is 44.38, an almost 9% improvement over previous state-of-the-art models. The experimental results clearly demonstrate the effectiveness of the proposed CapGAN model, which is exceptionally proficient in generating images with complex scenes.<\/jats:p>","DOI":"10.3390\/info14100552","type":"journal-article","created":{"date-parts":[[2023,10,9]],"date-time":"2023-10-09T05:07:13Z","timestamp":1696828033000},"page":"552","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["CapGAN: Text-to-Image Synthesis Using Capsule GANs"],"prefix":"10.3390","volume":"14","author":[{"given":"Maryam","family":"Omar","sequence":"first","affiliation":[{"name":"Department of Computer Science, National University of Computing and Emerging Sciences, Hayatabad, Peshawar 24720, Pakistan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1386-5287","authenticated-orcid":false,"given":"Hafeez","family":"Ur Rehman","sequence":"additional","affiliation":[{"name":"Department of Computer Science, National University of Computing and Emerging Sciences, Hayatabad, Peshawar 24720, Pakistan"},{"name":"School of Computing and Data Sciences, Oryx Universal College with Liverpool John Moores University, Doha 34110, Qatar"}]},{"given":"Omar Bin","family":"Samin","sequence":"additional","affiliation":[{"name":"Center for Excellence in Information Technology, Institute of Management Sciences, Hayatabad, Peshawar 24720, 
Pakistan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2823-4776","authenticated-orcid":false,"given":"Moutaz","family":"Alazab","sequence":"additional","affiliation":[{"name":"School of Computing and Data Sciences, Oryx Universal College with Liverpool John Moores University, Doha 34110, Qatar"},{"name":"Department of Intelligent Systems, Faculty of Artificial Intelligence, Al-Balqa Applied University, Al-Salt 19117, Jordan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5268-9899","authenticated-orcid":false,"given":"Gianfranco","family":"Politano","sequence":"additional","affiliation":[{"name":"Department of Control and Computer Engineering (DAUIN), Politecnico di Torino, 10129 Turin, Italy"}]},{"given":"Alfredo","family":"Benso","sequence":"additional","affiliation":[{"name":"Department of Control and Computer Engineering (DAUIN), Politecnico di Torino, 10129 Turin, Italy"}]}],"member":"1968","published-online":{"date-parts":[[2023,10,9]]},"reference":[{"key":"ref_1","unstructured":"Reed, S., Akata, Z., Yan, X., Logeswaran, L., Schiele, B., and Lee, H. (2016, January 20\u201322). Generative Adversarial Text-to-Image Synthesis. Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA."},{"key":"ref_2","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014, January 8\u201313). Generative adversarial nets. Proceedings of the Annual Conference on Neural Information Processing Systems 2014, Montreal, QC, Canada."},{"key":"ref_3","unstructured":"Dash, A., Gamboa, J.C.B., Ahmed, S., Liwicki, M., and Afzal, M.Z. (2017). TAC-GAN-text conditioned auxiliary classifier generative adversarial network. arXiv."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Zhang, H., Xu, T., Li, H., Zhang, S., Huang, X., Wang, X., and Metaxas, D. (2017, January 22\u201329). Stackgan: Text to photo-realistic image synthesis with stacked generative adversarial networks. 
Proceedings of the IEEE International Conference on Computer Vision 2017, Venice, Italy.","DOI":"10.1109\/ICCV.2017.629"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Dong, H., Zhang, J., McIlwraith, D., and Guo, Y. (2017, January 17\u201320). I2T2I: Learning text to image synthesis with textual data augmentation. Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China.","DOI":"10.1109\/ICIP.2017.8296635"},{"key":"ref_6","unstructured":"Sabour, S., Frosst, N., and Hinton, G.E. (2017, January 4\u20139). Dynamic routing between capsules. Proceedings of the 2017 Conference on Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_7","unstructured":"Dai, A.M., and Le, Q.V. (2015, January 7\u201310). Semi-supervised sequence learning. Proceedings of the 2015 Conference on Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Nilsback, M.E., and Zisserman, A. (2008, January 16\u201319). Automated flower classification over a large number of classes. Proceedings of the 2008 Sixth Indian Conference on Computer Vision, Graphics & Image Processing, Bhubaneswar, India.","DOI":"10.1109\/ICVGIP.2008.47"},{"key":"ref_9","unstructured":"Welinder, P., Branson, S., Mita, T., Wah, C., Schroff, F., Belongie, S., and Perona, P. (2010). Caltech-UCSD Birds 200, California Institute of Technology. Technical Report CNS-TR-2010-001."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","article-title":"Imagenet large scale visual recognition challenge","volume":"115","author":"Russakovsky","year":"2015","journal-title":"Int. J. Comput. Vis."},{"key":"ref_11","unstructured":"Zhu, X., Goldberg, A.B., Eldawy, M., Dyer, C.R., and Strock, B. (2007, January 22\u201326). A text-to-picture synthesis system for augmenting communication. 
Proceedings of the AAAI 2007, Vancouver, BC, Canada."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Xie, Y., and Yang, L. (2018, January 18\u201323). Photographic text-to-image synthesis with a hierarchically-nested adversarial network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00649"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Chen, Q., and Koltun, V. (2017, January 22\u201329). Photographic Image Synthesis with Cascaded Refinement Networks. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.168"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Sangkloy, P., Lu, J., Fang, C., Yu, F., and Hays, J. (2017, January 21\u201326). Scribbler: Controlling Deep Image Synthesis With Sketch and Color. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.723"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Nie, D., Trullo, R., Lian, J., Petitjean, C., Ruan, S., Wang, Q., and Shen, D. (2017, January 11\u201313). Medical image synthesis with context-aware generative adversarial networks. Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention, Quebec City, QC, Canada.","DOI":"10.1007\/978-3-319-66179-7_48"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Dong, H., Yu, S., Wu, C., and Guo, Y. (2017, January 22\u201329). Semantic image synthesis via adversarial learning. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.608"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Wang, T.C., Liu, M.Y., Zhu, J.Y., Tao, A., Kautz, J., and Catanzaro, B. (2018, January 18\u201322). 
High-Resolution Image Synthesis and Semantic Manipulation With Conditional GANs. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00917"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Liang, X., Lee, L., Dai, W., and Xing, E.P. (2017, January 22\u201329). Dual motion GAN for future-flow embedded video prediction. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.194"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Ledig, C., Theis, L., Husz\u00e1r, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A.P., Tejani, A., Totz, J., and Wang, Z. (2017, January 21\u201326). Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. Proceedings of the CVPR 2017, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.19"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014, January 6\u201312). Microsoft coco: Common objects in context. Proceedings of the European Conference on Computer Vision, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"1947","DOI":"10.1109\/TPAMI.2018.2856256","article-title":"StackGAN++: Realistic Image Synthesis with Stacked Generative Adversarial Networks","volume":"41","author":"Zhang","year":"2018","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Xu, T., Zhang, P., Huang, Q., Zhang, H., Gan, Z., Huang, X., and He, X. (2017). Attngan: Fine-grained text to image generation with attentional generative adversarial networks. arXiv.","DOI":"10.1109\/CVPR.2018.00143"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Afshar, P., Mohammadi, A., and Plataniotis, K.N. 
(2018, January 7\u201310). Brain Tumor Type Classification via Capsule Networks. Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece.","DOI":"10.1109\/ICIP.2018.8451379"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"1729","DOI":"10.1093\/mnras\/stz1289","article-title":"Morphological classification of radio galaxies: Capsule networks versus convolutional neural networks","volume":"487","author":"Lukic","year":"2019","journal-title":"Mon. Not. R. Astron. Soc."},{"key":"ref_25","first-page":"87","article-title":"Classification of maritime vessels using capsule networks","volume":"Volume 10992","author":"Hilton","year":"2019","journal-title":"Geospatial Informatics IX"},{"key":"ref_26","unstructured":"Bass, C., Dai, T., Billot, B., Arulkumaran, K., Creswell, A., Clopath, C., De Paola, V., and Bharath, A.A. (2019, January 8\u201310). Image synthesis with a convolutional capsule generative adversarial network. Proceedings of the International Conference on Medical Imaging with Deep Learning, London, UK."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Jaiswal, A., AbdAlmageed, W., Wu, Y., and Natarajan, P. (2018, January 8\u201314). Capsulegan: Generative adversarial capsule network. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-11015-4_38"},{"key":"ref_28","unstructured":"Upadhyay, Y., and Schrater, P. (2018). Generative adversarial network architectures for image synthesis using capsule networks. arXiv."},{"key":"ref_29","unstructured":"Kiros, R., Zhu, Y., Salakhutdinov, R.R., Zemel, R., Urtasun, R., Torralba, A., and Fidler, S. (2015, January 7\u201310). Skip-thought vectors. 
Proceedings of the 2015 Conference on Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"393","DOI":"10.1109\/COMST.2018.2866942","article-title":"A Survey of Machine Learning Techniques Applied to Software Defined Networking (SDN): Research Issues and Challenges","volume":"21","author":"Xie","year":"2019","journal-title":"IEEE Commun. Surv. Tutorials"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Nguyen, T., Vu, P., Pham, H., and Nguyen, T. (June, January 27). Deep learning UI design patterns of mobile apps. Proceedings of the 2018 IEEE\/ACM 40th International Conference on Software Engineering: New Ideas and Emerging Technologies Results (ICSE-NIER), Gothenburg, Sweden.","DOI":"10.1145\/3183399.3183422"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Hinton, G.E., Krizhevsky, A., and Wang, S.D. (2011, January 14\u201317). Transforming auto-encoders. Proceedings of the International Conference on Artificial Neural Networks, Espoo, Finland.","DOI":"10.1007\/978-3-642-21735-7_6"},{"key":"ref_33","unstructured":"Hinton, G.E., Sabour, S., and Frosst, N. (May, January 30). Matrix capsules with EM routing. Proceedings of the 6th International Conference on Learning Representations ICLR 2018, Vancouver, BC, Canada."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Li, S., Ren, X., and Yang, L. (2018, January 23\u201326). Fully CapsNet for Semantic Segmentation: First Chinese Conference. Proceedings of the PRCV 2018, Guangzhou, China.","DOI":"10.1007\/978-3-030-03335-4_34"},{"key":"ref_35","unstructured":"Nair, P., Doshi, R., and Keselj, S. (2018). Pushing the limits of capsule networks. arXiv."},{"key":"ref_36","unstructured":"Kingma, D.P., and Ba, J. (2015, January 7\u20139). Adam: A Method for Stochastic Optimization. 
Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA."},{"key":"ref_37","unstructured":"Barratt, S., and Sharma, R.K. (2018). A Note on the Inception Score. arXiv."},{"key":"ref_38","unstructured":"Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. (2017, January 4\u20139). Gans trained by a two time-scale update rule converge to a local nash equilibrium. Proceedings of the 2017 Conference on Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"41","DOI":"10.1016\/j.cviu.2018.10.009","article-title":"Pros and cons of gan evaluation measures","volume":"179","author":"Borji","year":"2019","journal-title":"Comput. Vis. Image Underst."}],"container-title":["Information"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2078-2489\/14\/10\/552\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T21:03:18Z","timestamp":1760130198000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2078-2489\/14\/10\/552"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,10,9]]},"references-count":39,"journal-issue":{"issue":"10","published-online":{"date-parts":[[2023,10]]}},"alternative-id":["info14100552"],"URL":"https:\/\/doi.org\/10.3390\/info14100552","relation":{},"ISSN":["2078-2489"],"issn-type":[{"value":"2078-2489","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,10,9]]}}}