{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,21]],"date-time":"2026-02-21T18:54:15Z","timestamp":1771700055085,"version":"3.50.1"},"reference-count":74,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2022,3,25]],"date-time":"2022-03-25T00:00:00Z","timestamp":1648166400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/100000006","name":"Office of Naval Research","doi-asserted-by":"publisher","award":["N00014-20-1-2002"],"award-info":[{"award-number":["N00014-20-1-2002"]}],"id":[{"id":"10.13039\/100000006","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000006","name":"Office of Naval Research","doi-asserted-by":"publisher","award":["N00014-22-1-2102"],"award-info":[{"award-number":["N00014-22-1-2102"]}],"id":[{"id":"10.13039\/100000006","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000893","name":"Simons Foundation","doi-asserted-by":"publisher","award":["#2031899"],"award-info":[{"award-number":["#2031899"]}],"id":[{"id":"10.13039\/100000893","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Entropy"],"abstract":"<jats:p>This work proposes a new computational framework for learning a structured generative model for real-world datasets. In particular, we propose to learn a Closed-loop Transcriptionbetween a multi-class, multi-dimensional data distribution and a Linear discriminative representation (CTRL) in the feature space that consists of multiple independent multi-dimensional linear subspaces. In particular, we argue that the optimal encoding and decoding mappings sought can be formulated as a two-player minimax game between the encoder and decoderfor the learned representation. 
A natural utility function for this game is the so-called rate reduction, a simple information-theoretic measure for distances between mixtures of subspace-like Gaussians in the feature space. Our formulation draws inspiration from closed-loop error feedback in control systems and avoids the expensive evaluation and minimization of approximated distances between arbitrary distributions in either the data space or the feature space. To a large extent, this new formulation unifies the concepts and benefits of Auto-Encoding and GAN and naturally extends them to the setting of learning a representation that is both discriminative and generative for multi-class and multi-dimensional real-world data. Our extensive experiments on many benchmark imagery datasets demonstrate the tremendous potential of this new closed-loop formulation: under fair comparison, the visual quality of the learned decoder and the classification performance of the encoder are competitive with, and arguably better than, those of existing methods based on GAN, VAE, or a combination of both. 
Unlike existing generative models, the so-learned features of the multiple classes are structured instead of hidden: different classes are explicitly mapped onto corresponding independent principal subspaces in the feature space, and diverse visual attributes within each class are modeled by the independent principal components within each subspace.<\/jats:p>","DOI":"10.3390\/e24040456","type":"journal-article","created":{"date-parts":[[2022,3,25]],"date-time":"2022-03-25T15:31:21Z","timestamp":1648222281000},"page":"456","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":17,"title":["CTRL: Closed-Loop Transcription to an LDR via Minimaxing Rate Reduction"],"prefix":"10.3390","volume":"24","author":[{"given":"Xili","family":"Dai","sequence":"first","affiliation":[{"name":"Department of EECS, University of California Berkeley, Berkeley, CA 94720, USA"},{"name":"School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China"}]},{"given":"Shengbang","family":"Tong","sequence":"additional","affiliation":[{"name":"Department of EECS, University of California Berkeley, Berkeley, CA 94720, USA"}]},{"given":"Mingyang","family":"Li","sequence":"additional","affiliation":[{"name":"Tsinghua-Berkeley Shenzhen Institute, Shenzhen 518055, China"}]},{"given":"Ziyang","family":"Wu","sequence":"additional","affiliation":[{"name":"International Digital Economy Academy, Shenzhen 518048, China"}]},{"given":"Michael","family":"Psenka","sequence":"additional","affiliation":[{"name":"Department of EECS, University of California Berkeley, Berkeley, CA 94720, USA"}]},{"given":"Kwan Ho Ryan","family":"Chan","sequence":"additional","affiliation":[{"name":"Mathematical Institute for Data Science, Johns Hopkins University, Baltimore, MD 21218, USA"}]},{"given":"Pengyuan","family":"Zhai","sequence":"additional","affiliation":[{"name":"Institute for Applied Computational 
Science, Harvard University, Cambridge, MA 02138, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0540-8526","authenticated-orcid":false,"given":"Yaodong","family":"Yu","sequence":"additional","affiliation":[{"name":"Department of EECS, University of California Berkeley, Berkeley, CA 94720, USA"}]},{"given":"Xiaojun","family":"Yuan","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 610056, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4684-911X","authenticated-orcid":false,"given":"Heung-Yeung","family":"Shum","sequence":"additional","affiliation":[{"name":"International Digital Economy Academy, Shenzhen 518048, China"}]},{"given":"Yi","family":"Ma","sequence":"additional","affiliation":[{"name":"Department of EECS, University of California Berkeley, Berkeley, CA 94720, USA"},{"name":"Tsinghua-Berkeley Shenzhen Institute, Shenzhen 518055, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,3,25]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Lee, J.M. (2002). Introduction to Smooth Manifolds, Springer.","DOI":"10.1007\/978-0-387-21752-9"},{"key":"ref_2","unstructured":"Chan, K.H.R., Yu, Y., You, C., Qi, H., Wright, J., and Ma, Y. (2021). ReduNet: A White-box Deep Network from the Principle of Maximizing Rate Reduction. arXiv."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"233","DOI":"10.1002\/aic.690370209","article-title":"Nonlinear principal component analysis using autoassociative neural networks","volume":"37","author":"Kramer","year":"1991","journal-title":"AICHE J."},{"key":"ref_4","unstructured":"Hinton, G.E., and Zemel, R.S. (1993, January 13\u201316). Autoencoders, Minimum Description Length and Helmholtz Free Energy. 
Proceedings of the 6th International Conference on Neural Information Processing Systems (NIPS\u201993), Siem Reap, Cambodia."},{"key":"ref_5","unstructured":"Kingma, D.P., and Welling, M. (2013). Auto-encoding variational Bayes. arXiv."},{"key":"ref_6","unstructured":"Zhao, S., Song, J., and Ermon, S. (2017). InfoVAE: Information maximizing variational autoencoders. arXiv."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Tu, Z. (2007, January 18\u201323). Learning Generative Models via Discriminative Approaches. Proceedings of the Conference on Computer Vision and Pattern Recognition, Minneapolis, MN, USA.","DOI":"10.1109\/CVPR.2007.383035"},{"key":"ref_9","unstructured":"Goodfellow, I., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A., and Bengio, Y. (2014). Generative adversarial nets. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_10","unstructured":"Arjovsky, M., Chintala, S., and Bottou, L. (2017, January 6\u201311). Wasserstein generative adversarial networks. Proceedings of the International Conference on Machine Learning, Sydney, Australia."},{"key":"ref_11","unstructured":"Salmona, A., Delon, J., and Desolneux, A. (2021). Gromov-Wasserstein Distances between Gaussian Distributions. arXiv."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Wright, J., and Ma, Y. (2021). High-Dimensional Data Analysis with Low-Dimensional Models: Principles, Computation, and Applications, Cambridge University Press.","DOI":"10.1017\/9781108779302"},{"key":"ref_13","unstructured":"Yu, Y., Chan, K.H.R., You, C., Song, C., and Ma, Y. (2020). 
Learning Diverse and Discriminative Representations via the Principle of Maximal Coding Rate Reduction. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"1798","DOI":"10.1109\/TPAMI.2013.50","article-title":"Representation learning: A review and new perspectives","volume":"35","author":"Bengio","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_15","unstructured":"Srivastava, A., Valkoz, L., Russell, C., Gutmann, M.U., and Sutton, C. (2017). VeeGAN: Reducing mode collapse in GANs using implicit variational learning. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_16","unstructured":"Mirza, M., and Osindero, S. (2014). Conditional generative adversarial nets. arXiv."},{"key":"ref_17","unstructured":"Sohn, K., Lee, H., and Yan, X. (2015). Learning structured output representation using deep conditional generative models. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_18","unstructured":"Mathieu, M.F., Zhao, J.J., Zhao, J., Ramesh, A., Sprechmann, P., and LeCun, Y. (2016). Disentangling factors of variation in deep representation using adversarial training. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_19","unstructured":"Van den Oord, A., Kalchbrenner, N., Espeholt, L., Vinyals, O., Graves, A., and Kavukcuoglu, K. (2016). Conditional image generation with PixelCNN decoders. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Wang, T.C., Liu, M.Y., Zhu, J.Y., Tao, A., Kautz, J., and Catanzaro, B. (2018, January 18\u201322). High-resolution image synthesis and semantic manipulation with conditional GANs. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00917"},{"key":"ref_21","unstructured":"Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., and Abbeel, P. (2016). InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Tang, S., Zhou, X., He, X., and Ma, Y. (2021, January 10\u201315). Disentangled Representation Learning for Controllable Image Synthesis: An Information-Theoretic Perspective. Proceedings of the 25th International Conference on Pattern Recognition (ICPR), Milan, Italy.","DOI":"10.1109\/ICPR48806.2021.9411925"},{"key":"ref_23","unstructured":"Li, K., and Malik, J. (2018). Implicit Maximum Likelihood Estimation. arXiv."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"2607","DOI":"10.1007\/s11263-020-01325-y","article-title":"Multimodal Image Synthesis with Conditional Implicit Maximum Likelihood Estimation","volume":"128","author":"Li","year":"2020","journal-title":"Int. J. Comput. Vis."},{"key":"ref_25","unstructured":"Odena, A., Olah, C., and Shlens, J. (2017, January 6\u201311). Conditional image synthesis with auxiliary classifier GANs. Proceedings of the International Conference on Machine Learning, Sydney, Australia."},{"key":"ref_26","unstructured":"Dumoulin, V., Shlens, J., and Kudlur, M. (2016). A learned representation for artistic style. arXiv."},{"key":"ref_27","unstructured":"Brock, A., Donahue, J., and Simonyan, K. (2018). Large scale GAN training for high fidelity natural image synthesis. arXiv."},{"key":"ref_28","unstructured":"Wu, Y., Rosca, M., and Lillicrap, T. (2019, January 9\u201315). Deep compressed sensing. 
Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA."},{"key":"ref_29","unstructured":"Wu, Y., Donahue, J., Balduzzi, D., Simonyan, K., and Lillicrap, T. (2019). Logan: Latent optimisation for generative adversarial networks. arXiv."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Papyan, V., Han, X., and Donoho, D.L. (2020). Prevalence of Neural Collapse during the terminal phase of deep learning training. arXiv.","DOI":"10.1073\/pnas.2015509117"},{"key":"ref_31","unstructured":"Miyato, T., Kataoka, T., Koyama, M., and Yoshida, Y. (May, January 30). Spectral Normalization for Generative Adversarial Networks. Proceedings of the International Conference on Learning Representations, Vancouver, BC, Canada."},{"key":"ref_32","unstructured":"Lin, Z., Khetan, A., Fanti, G., and Oh, S. (2018). Pacgan: The power of two samples in generative adversarial networks. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"304","DOI":"10.1109\/JSAIT.2020.2991375","article-title":"Understanding GANs in the LQG Setting: Formulation, Generalization and Stability","volume":"1","author":"Feizi","year":"2020","journal-title":"IEEE J. Sel. Areas Inf. Theory"},{"key":"ref_34","unstructured":"Larsen, A.B.L., S\u00f8nderby, S.K., Larochelle, H., and Winther, O. (2015). Autoencoding beyond pixels using a learned similarity metric. arXiv."},{"key":"ref_35","unstructured":"Rosca, M., Lakshminarayanan, B., Warde-Farley, D., and Mohamed, S. (2017). Variational Approaches for Auto-Encoding Generative Adversarial Networks. arXiv."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Bao, J., Chen, D., Wen, F., Li, H., and Hua, G. (2017, January 22\u201329). CVAE-GAN: Fine-grained image generation through asymmetric training. 
Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.299"},{"key":"ref_37","unstructured":"Huang, H., He, R., Sun, Z., Tan, T., and Li, Z. (2018). IntroVAE: Introspective Variational Autoencoders for Photographic Image Synthesis. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_38","unstructured":"Donahue, J., Kr\u00e4henb\u00fchl, P., and Darrell, T. (2016). Adversarial feature learning. arXiv."},{"key":"ref_39","unstructured":"Dumoulin, V., Belghazi, I., Poole, B., Mastropietro, O., Lamb, A., Arjovsky, M., and Courville, A. (2016). Adversarially learned inference. arXiv."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Ulyanov, D., Vedaldi, A., and Lempitsky, V. (2018, January 2\u20137). It takes (only) two: Adversarial generator-encoder networks. Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.11449"},{"key":"ref_41","unstructured":"Vahdat, A., and Kautz, J. (2020). Nvae: A deep hierarchical variational autoencoder. arXiv."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Parmar, G., Li, D., Lee, K., and Tu, Z. (2021, January 21\u201324). Dual contradistinctive generative autoencoder. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00088"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"2619","DOI":"10.1090\/S0002-9939-10-10340-2","article-title":"Approximation of probability distributions by convex mixtures of Gaussian measures","volume":"138","author":"Bacharoglou","year":"2010","journal-title":"Proc. Am. Math. Soc."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Hastie, T. (1984). Principal Curves and Surfaces, Stanford University. 
Technical Report.","DOI":"10.2172\/1453999"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"502","DOI":"10.1080\/01621459.1989.10478797","article-title":"Principal Curves","volume":"84","author":"Hastie","year":"1987","journal-title":"J. Am. Stat. Assoc."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Vidal, R., Ma, Y., and Sastry, S. (2016). Generalized Principal Component Analysis, Springer.","DOI":"10.1007\/978-0-387-87811-9"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"9","DOI":"10.1109\/TPAMI.2007.1085","article-title":"Segmentation of multivariate mixed data via lossy data coding and compression","volume":"29","author":"Ma","year":"2007","journal-title":"PAMI"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Jolliffe, I. (1986). Principal Component Analysis, Springer.","DOI":"10.1007\/978-1-4757-1904-8"},{"key":"ref_49","unstructured":"Hong, D., Sheng, Y., and Dobriban, E. (2020). Selecting the number of components in PCA via random signflips. arXiv."},{"key":"ref_50","unstructured":"Farnia, F., and Ozdaglar, A.E. (2020). GANs May Have No Nash Equilibria. arXiv."},{"key":"ref_51","unstructured":"Dai, Y.H., and Zhang, L. (2020). Optimality Conditions for Constrained Minimax Optimization. arXiv."},{"key":"ref_52","first-page":"747","article-title":"The extragradient method for finding saddle points and other problems","volume":"12","author":"Korpelevich","year":"1976","journal-title":"Matecon"},{"key":"ref_53","unstructured":"Fiez, T., and Ratliff, L.J. (2020). Gradient Descent-Ascent Provably Converges to Strict Local Minmax Equilibria with a Finite Timescale Separation. arXiv."},{"key":"ref_54","unstructured":"Bai, S., Kolter, J.Z., and Koltun, V. (2019). Deep Equilibrium Models. arXiv."},{"key":"ref_55","unstructured":"Ghaoui, L.E., Gu, F., Travacca, B., and Askari, A. (2019). Implicit Deep Learning. 
arXiv."},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Zhu, J.Y., Park, T., Isola, P., and Efros, A.A. (2017, January 22\u201329). Unpaired image-to-image translation using cycle-consistent adversarial networks. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.244"},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"2278","DOI":"10.1109\/5.726791","article-title":"Gradient-based learning applied to document recognition","volume":"86","author":"LeCun","year":"1998","journal-title":"Proc. IEEE"},{"key":"ref_58","unstructured":"Krizhevsky, A., and Hinton, G. (2022, February 09). Learning Multiple Layers of Features from Tiny Images. Available online: https:\/\/www.cs.toronto.edu\/~kriz\/learning-features-2009-TR.pdf."},{"key":"ref_59","unstructured":"Coates, A., Ng, A., and Lee, H. (2011, January 11\u201313). An analysis of single-layer networks in unsupervised feature learning. Proceedings of the Fourteenth International Conference on Artificial Intelligence and Statistics, Ft. Lauderdale, FL, USA."},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Liu, Z., Luo, P., Wang, X., and Tang, X. (2015, January 7\u201313). Deep Learning Face Attributes in the Wild. Proceedings of the International Conference on Computer Vision (ICCV), Santiago, Chile.","DOI":"10.1109\/ICCV.2015.425"},{"key":"ref_61","unstructured":"Yu, F., Seff, A., Zhang, Y., Song, S., Funkhouser, T., and Xiao, J. (2015). Lsun: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv."},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","article-title":"Imagenet large scale visual recognition challenge","volume":"115","author":"Russakovsky","year":"2015","journal-title":"Int. J. Comput. Vis."},{"key":"ref_63","unstructured":"Radford, A., Metz, L., and Chintala, S. (2015). 
Unsupervised representation learning with deep convolutional generative adversarial networks. arXiv."},{"key":"ref_64","unstructured":"Larsen, A.B.L., S\u00f8nderby, S.K., Larochelle, H., and Winther, O. (2016, January 19\u201324). Autoencoding beyond pixels using a learned similarity metric. Proceedings of the International Conference on Machine Learning, PMLR, New York City, NY, USA."},{"key":"ref_65","unstructured":"Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., and Chen, X. (2016). Improved techniques for training GANs. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_66","unstructured":"Heusel, M., Ramsauer, H., Unterthiner, T., Nessler, B., and Hochreiter, S. (2017). GANs trained by a two time-scale update rule converge to a local nash equilibrium. Advances in Neural Information Processing Systems, MIT Press."},{"key":"ref_67","first-page":"1343","article-title":"The Brascamp-Lieb Inequalities: Finiteness, Structure and Extremals","volume":"17","author":"Carbery","year":"2007","journal-title":"Geom. Funct. Anal."},{"key":"ref_68","unstructured":"Ditria, L., Meyer, B.J., and Drummond, T. (2020). OpenGAN: Open Set Generative Adversarial Networks. arXiv."},{"key":"ref_69","unstructured":"Fiez, T., and Ratliff, L.J. (2021, January 3\u20137). Local Convergence Analysis of Gradient Descent Ascent with Finite Timescale Separation. Proceedings of the International Conference on Learning Representations, Virtual."},{"key":"ref_70","unstructured":"H\u00e4rk\u00f6nen, E., Hertzmann, A., Lehtinen, J., and Paris, S. (2020). Ganspace: Discovering interpretable GAN controls. arXiv."},{"key":"ref_71","doi-asserted-by":"crossref","unstructured":"Wu, Z., Baek, C., You, C., and Ma, Y. (2021, January 20\u201325). Incremental Learning via Rate Reduction. 
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00118"},{"key":"ref_72","unstructured":"Tong, S., Dai, X., Wu, Z., Li, M., Yi, B., and Ma, Y. (2022). Incremental Learning of Structured Memory via Closed-Loop Transcription. arXiv."},{"key":"ref_73","unstructured":"Lee, K.S., and Town, C. (2020). Mimicry: Towards the Reproducibility of GAN Research. arXiv."},{"key":"ref_74","unstructured":"Kingma, D.P., and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv."}],"container-title":["Entropy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1099-4300\/24\/4\/456\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T22:43:33Z","timestamp":1760136213000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1099-4300\/24\/4\/456"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,25]]},"references-count":74,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2022,4]]}},"alternative-id":["e24040456"],"URL":"https:\/\/doi.org\/10.3390\/e24040456","relation":{},"ISSN":["1099-4300"],"issn-type":[{"value":"1099-4300","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,3,25]]}}}