{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,8]],"date-time":"2025-10-08T00:24:59Z","timestamp":1759883099806,"version":"build-2065373602"},"reference-count":57,"publisher":"MDPI AG","issue":"1","license":[{"start":{"date-parts":[[2025,1,1]],"date-time":"2025-01-01T00:00:00Z","timestamp":1735689600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["52003134","201909099","XT2024102","2022KJ144"],"award-info":[{"award-number":["52003134","201909099","XT2024102","2022KJ144"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Young Taishan Scholars Program of Shandong Province","award":["52003134","201909099","XT2024102","2022KJ144"],"award-info":[{"award-number":["52003134","201909099","XT2024102","2022KJ144"]}]},{"name":"Systems Science Plus Joint Research Program of Qingdao University","award":["52003134","201909099","XT2024102","2022KJ144"],"award-info":[{"award-number":["52003134","201909099","XT2024102","2022KJ144"]}]},{"name":"Support Plan for Youth Innovation Team of Colleges in Shandong Province","award":["52003134","201909099","XT2024102","2022KJ144"],"award-info":[{"award-number":["52003134","201909099","XT2024102","2022KJ144"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Symmetry"],"abstract":"<jats:p>Novel view synthesis aims to generate new perspectives from a limited number of input views. Neural Radiance Field (NeRF) is a key method for this task, and it produces high-fidelity images from a comprehensive set of inputs. However, a NeRF\u2019s performance drops significantly with sparse views. To mitigate this, depth information can be used to guide training, with coarse depth maps often readily available in practical settings. We propose an improved sparse view NeRF model, ATGANNeRF, which integrates an enhanced U-Net generator with a dual-discriminator framework, CBAM, and Multi-Head Self-Attention mechanisms. The symmetric design enhances the model\u2019s ability to capture and preserve spatial relationships, ensuring a more consistent generation of novel views. Additionally, local depth ranking is employed to ensure depth consistency with coarse maps, and spatial continuity constraints are introduced to synthesize novel views from sparse samples. SSIM loss is also added to preserve local structural details like edges and textures. Evaluation on LLFF, DTU, and our own datasets shows that ATGANNeRF significantly outperforms state-of-the-art methods, both quantitatively and qualitatively.<\/jats:p>","DOI":"10.3390\/sym17010059","type":"journal-article","created":{"date-parts":[[2025,1,1]],"date-time":"2025-01-01T13:37:51Z","timestamp":1735738671000},"page":"59","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Novel View Synthesis with Depth Priors Using Neural Radiance Fields and CycleGAN with Attention Transformer"],"prefix":"10.3390","volume":"17","author":[{"given":"Yuxin","family":"Qin","sequence":"first","affiliation":[{"name":"Institute for Future, Shandong Key Laboratory of Industrial Control Technology, School of Automation, Qingdao University, Qingdao 266071, China"}]},{"given":"Xinlin","family":"Li","sequence":"additional","affiliation":[{"name":"College of Mechanical and Electrical Engineering, Qingdao University, Qingdao 266071, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-3087-4291","authenticated-orcid":false,"given":"Linan","family":"Zu","sequence":"additional","affiliation":[{"name":"College of Automation and Electronic Engineering, Qingdao University of Science and Technology, Qingdao 266061, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8446-1861","authenticated-orcid":false,"given":"Ming Liang","family":"Jin","sequence":"additional","affiliation":[{"name":"Institute for Future, Shandong Key Laboratory of Industrial Control Technology, School of Automation, Qingdao University, Qingdao 266071, China"}]}],"member":"1968","published-online":{"date-parts":[[2025,1,1]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"99","DOI":"10.1145\/3503250","article-title":"Nerf: Representing scenes as neural radiance fields for view synthesis","volume":"65","author":"Mildenhall","year":"2021","journal-title":"Commun. ACM"},{"key":"ref_2","unstructured":"Gao, K., Gao, Y., He, H., Lu, D., Xu, L., and Li, J. (2022). Nerf: Neural radiance field in 3D vision, a comprehensive review. arXiv."},{"key":"ref_3","unstructured":"Mittal, A. (2023). Neural Radiance Fields: Past, Present, and Future. arXiv."},{"key":"ref_4","unstructured":"Lin, J. (2024). Dynamic NeRF: A Review. arXiv."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Niemeyer, M., Barron, J.T., Mildenhall, B., Sajjadi, M.S.M., Geiger, A., and Radwan, N. (2022, January 21\u201324). Regnerf: Regularizing neural radiance fields for view synthesis from sparse inputs. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00540"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Kim, M., Seo, S., and Han, B. (2022, January 21\u201324). Infonerf: Ray entropy minimization for few-shot neural volume rendering. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01257"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Xu, D., Jiang, Y., Wang, P., Fan, Z., Shi, H., and Wang, Z. (2022, January 23\u201327). Sinnerf: Training neural radiance fields on complex scenes from a single image. Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel.","DOI":"10.1007\/978-3-031-20047-2_42"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Yang, J., Pavone, M., and Wang, Y. (2023, January 17\u201324). Freenerf: Improving few-shot neural rendering with free frequency regularization. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.00798"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Yu, A., Ye, V., Tancik, M., and Kanazawa, A. (2021, January 20\u201325). Pixelnerf: Neural radiance fields from one or few images. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00455"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Chen, A., Xu, Z., Zhao, F., Zhang, X., Xiang, F., Yu, J., and Su, H. (2021, January 10\u201317). Mvsnerf: Fast generalizable radiance field reconstruction from multi-view stereo. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.01386"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"45","DOI":"10.1111\/cgf.14340","article-title":"DONeRF: Towards real-time rendering of compact neural radiance fields using depth oracle networks","volume":"40","author":"Neff","year":"2021","journal-title":"Comput. Graph. Forum"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Xu, Q., Xu, Z., Philip, J., Bi, S., Shu, Z., Sunkavalli, K., and Neumann, U. (2022, January 19\u201325). Point-NeRF: Point-based neural radiance fields. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR52688.2022.00536"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Wang, G., Chen, Z., Loy, C.C., and Liu, Z. (2023, January 1\u20138). Sparsenerf: Distilling depth ranking for few-shot novel view synthesis. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Vancouver, BC, Canada.","DOI":"10.1109\/ICCV51070.2023.00832"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Woo, S., Park, J., Lee, J.-Y., and Kweon, I.S. (2018, January 8\u201314). CBAM: Convolutional block attention module. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_1"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3306346.3322980","article-title":"Local light field fusion: Practical view synthesis with prescriptive sampling guidelines","volume":"38","author":"Mildenhall","year":"2019","journal-title":"ACM Trans. Graph. (TOG)"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Jensen, R., Dahl, A., Vogiatzis, G., Tola, E., and Aan\u00e6s, H. (2014, January 23\u201328). Large scale multi-view stereopsis evaluation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA.","DOI":"10.1109\/CVPR.2014.59"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Chen, S.E., and Williams, L. (1993, January 10\u201314). View interpolation for image synthesis. Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques, Anaheim, CA, USA.","DOI":"10.1145\/166117.166153"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Flynn, J., Neulander, I., Philbin, J., and Snavely, N. (2016, January 27\u201330). Deepstereo: Learning to predict new views from the world\u2019s imagery. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.595"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Levoy, M., and Hanrahan, P. (1996). Light field rendering. Computer Graphics Proceedings, Annual Conference Series, New Orleans, LA, USA, 4\u20139 August 1996, Association for Computing Machinery SIGGRAPH.","DOI":"10.1145\/237170.237199"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"606","DOI":"10.1109\/TPAMI.2013.147","article-title":"Variational light field analysis for disparity estimation and super-resolution","volume":"36","author":"Wanner","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Shade, J., Gortler, S., He, L.-W., and Szeliski, R. (1998, January 2\u20136). Layered depth images. Proceedings of the 25th Annual Conference on Computer Graphics and Interactive Techniques, Los Angeles, CA, USA.","DOI":"10.1145\/280814.280882"},{"key":"ref_22","unstructured":"Sitzmann, V., Zollh\u00f6fer, M., and Wetzstein, G. (2019). Scene representation networks: Continuous 3d-structure-aware neural scene representations. Advances in Neural Information Processing Systems, NeurIPS."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Barron, J.T., Mildenhall, B., Tancik, M., Hedman, P., Martin-Brualla, R., and Srinivasan, P.P. (2021, January 10\u201317). Mip-nerf: A multiscale representation for anti-aliasing neural radiance fields. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00580"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Barron, J.T., Mildenhall, B., Verbin, D., Srinivasan, P.P., and Hedman, P. (2022, January 21\u201324). Mip-nerf 360: Unbounded anti-aliased neural radiance fields. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00539"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Verbin, D., Hedman, P., Mildenhall, B., Zickler, T., Barron, J.T., and Srinivasan, P.P. (2022, January 18\u201324). Ref-nerf: Structured view-dependent appearance for neural radiance fields. Proceedings of the 2022 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00541"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Mildenhall, B., Hedman, P., Martin-Brualla, R., Srinivasan, P.P., and Barron, J.T. (2022, January 21\u201324). Nerf in the dark: High dynamic range view synthesis from noisy raw images. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01571"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Zhang, J., Zhang, Y., Fu, H., Zhou, X., Cai, B., Huang, J., Jia, R., Zhao, B., and Tang, X. (2022, January 21\u201324). Ray priors through reprojection: Improving neural radiance fields for novel view extrapolation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01783"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Garbin, S.J., Kowalski, M., Johnson, M., Shotton, J., and Valentin, J. (2021, January 10\u201317). Fastnerf: High-fidelity neural rendering at 200 fps. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.01408"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Chen, A., Xu, Z., Geiger, A., Yu, J., and Su, H. (2022, January 23\u201327). Tensorf: Tensorial radiance fields. Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel.","DOI":"10.1007\/978-3-031-19824-3_20"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Reiser, C., Peng, S., Liao, Y., and Geiger, A. (2021, January 10\u201317). Kilonerf: Speeding up neural radiance fields with thousands of tiny mlps. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.01407"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Tancik, M., Casser, V., Yan, X., Pradhan, S., Mildenhall, B., Srinivasan, P.P., Barron, J.T., and Kretzschmar, H. (2022, January 21\u201324). Block-nerf: Scalable large scene neural view synthesis. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00807"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Turki, H., Ramanan, D., and Satyanarayanan, M. (2022, January 21\u201324). Mega-nerf: Scalable construction of large-scale nerfs for virtual fly-throughs. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01258"},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Chen, Z., Funkhouser, T., Hedman, P., and Tagliasacchi, A. (2023, January 17\u201324). Mobilenerf: Exploiting the polygon rasterization pipeline for efficient neural field rendering on mobile architectures. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.","DOI":"10.1109\/CVPR52729.2023.01590"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Weng, C.-Y., Curless, B., Srinivasan, P.P., Barron, J.T., and Kemelmacher-Shlizerman, I. (2022, January 21\u201324). Humannerf: Free-viewpoint rendering of moving people from monocular video. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01573"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Shao, R., Zhang, H., Zhang, H., Chen, M., Cao, Y.-P., Yu, T., and Liu, Y. (2022, January 19\u201324). Doublefield: Bridging the neural surface and radiance fields for high-fidelity human reconstruction and rendering. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01541"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Peng, S., Dong, J., Wang, Q., Zhang, S., Shuai, Q., Zhou, X., and Bao, H. (2021, January 11\u201317). Animatable neural radiance fields for modeling dynamic human bodies. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.01405"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Yang, G., Vo, M., Neverova, N., Ramanan, D., Vedaldi, A., and Joo, H. (2022, January 19\u201324). Banmo: Building animatable 3D neural models from many casual videos. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.00288"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Yen-Chen, L., Florence, P., Barron, J.T., Rodriguez, A., Isola, P., and Lin, T.-Y. (October, January 27). Inerf: Inverting neural radiance fields for pose estimation. Proceedings of the 2021 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Prague, Czech Republic.","DOI":"10.1109\/IROS51168.2021.9636708"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Lin, C.-H., Ma, W.-C., Torralba, A., and Lucey, S. (2021, January 10\u201317). Barf: Bundle-adjusting neural radiance fields. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00569"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Chng, S.-F., Ramasinghe, S., Sherrah, J., and Lucey, S. (2022, January 23\u201327). Gaussian activated neural radiance fields for high fidelity reconstruction and pose estimation. Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel.","DOI":"10.1007\/978-3-031-19827-4_16"},{"key":"ref_41","first-page":"6840","article-title":"Denoising diffusion probabilistic models","volume":"33","author":"Ho","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Chen, Z., and Liu, Z. (2022, January 23\u201327). Relighting4d: Neural relightable human from videos. Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel.","DOI":"10.1007\/978-3-031-19781-9_35"},{"key":"ref_43","first-page":"1","article-title":"Text2light: Zero-shot text-driven HDR panorama generation","volume":"41","author":"Chen","year":"2022","journal-title":"ACM Trans. Graph."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Liu, S., Zhang, X., Zhang, Z., Zhang, R., Zhu, J.-Y., and Russell, B. (2021, January 11\u201317). Editing Conditional Radiance Fields. Proceedings of the International Conference on Computer Vision, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00572"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"139","DOI":"10.1145\/3422622","article-title":"Generative adversarial networks","volume":"63","author":"Goodfellow","year":"2020","journal-title":"Commun. ACM"},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Zhu, J.-Y., Park, T., Isola, P., and Efros, A.A. (2017, January 22\u201329). Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Hou, Y., Ma, X., Zhang, J., and Guo, C. (2024). Symmetric Connected U-Net with Multi-Head Self Attention (MHSA) and WGAN for Image Inpainting. Symmetry, 16.","DOI":"10.3390\/sym16111423"},{"key":"ref_48","unstructured":"Chen, W., Fu, Z., Yang, D., and Deng, J. (2016). Single-image depth perception in the wild. Advances in Neural Information Processing Systems, NeurIPS."},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"7088","DOI":"10.1109\/TPAMI.2024.3388004","article-title":"RDFC-GAN: RGB-Depth Fusion CycleGAN for Indoor Depth Completion","volume":"46","author":"Wang","year":"2024","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Bansal, A., Ma, S., Ramanan, D., and Sheikh, Y. (2018, January 8\u201314). Recycle-GAN: Unsupervised video retargeting. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01228-1_8"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Liu, Y., Li, Y., You, S., and Lu, F. (2020, January 14\u201319). Unsupervised learning for intrinsic image decomposition from a single image. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00331"},{"key":"ref_52","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015, January 5\u20139). U-Net: Convolutional networks for biomedical image segmentation. Proceedings of the Medical Image Computing and Computer-Assisted Intervention\u2013MICCAI 2015: 18th International Conference, Munich, Germany. Proceedings, Part III 18."},{"key":"ref_53","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., and Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, NeurIPS."},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"Zhang, R., Isola, P., Efros, A.A., Shechtman, E., and Wang, O. (2018, January 18\u201322). The unreasonable effectiveness of deep features as a perceptual metric. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00068"},{"key":"ref_56","unstructured":"Kingma, D.P. (2014). Adam: A method for stochastic optimization. arXiv."},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Mao, X., Li, Q., Xie, H., Lau, R.Y.K., Wang, Z., and Smolley, P. (2017, January 22\u201329). Least squares generative adversarial networks. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.304"}],"container-title":["Symmetry"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2073-8994\/17\/1\/59\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,7]],"date-time":"2025-10-07T15:23:13Z","timestamp":1759850593000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2073-8994\/17\/1\/59"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,1]]},"references-count":57,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,1]]}},"alternative-id":["sym17010059"],"URL":"https:\/\/doi.org\/10.3390\/sym17010059","relation":{},"ISSN":["2073-8994"],"issn-type":[{"type":"electronic","value":"2073-8994"}],"subject":[],"published":{"date-parts":[[2025,1,1]]}}}