{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,28]],"date-time":"2026-03-28T15:35:14Z","timestamp":1774712114539,"version":"3.50.1"},"reference-count":64,"publisher":"MDPI AG","issue":"12","license":[{"start":{"date-parts":[[2021,6,20]],"date-time":"2021-06-20T00:00:00Z","timestamp":1624147200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"JSPS KAKENHI","award":["JP19K23364"],"award-info":[{"award-number":["JP19K23364"]}]},{"name":"Japan Science and Technology Agency-Mirai Program","award":["JPMJMI20D7"],"award-info":[{"award-number":["JPMJMI20D7"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>In the field of affective computing, achieving accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of systems that now have access to the dynamic facial database remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, AFARtoolbox) that detect each facial movement corresponding to an action unit (AU) derived from the Facial Action Coding System. All machines could detect the presence of AUs from the dynamic facial database at a level above chance. Moreover, OpenFace and AFAR provided higher area under the receiver operating characteristic curve values compared to FaceReader. In addition, several confusion biases of facial components (e.g., AU12 and AU14) were observed to be related to each automated AU detection system and the static mode was superior to dynamic mode for analyzing the posed facial database. These findings demonstrate the features of prediction patterns for each system and provide guidance for research on facial expressions.<\/jats:p>","DOI":"10.3390\/s21124222","type":"journal-article","created":{"date-parts":[[2021,6,20]],"date-time":"2021-06-20T21:50:15Z","timestamp":1624225815000},"page":"4222","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":43,"title":["Assessing Automated Facial Action Unit Detection Systems for Analyzing Cross-Domain Facial Expression Databases"],"prefix":"10.3390","volume":"21","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7768-2738","authenticated-orcid":false,"given":"Shushi","family":"Namba","sequence":"first","affiliation":[{"name":"Psychological Process Team, BZP, Robotics Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 6190288, Japan"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5335-1272","authenticated-orcid":false,"given":"Wataru","family":"Sato","sequence":"additional","affiliation":[{"name":"Psychological Process Team, BZP, Robotics Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 6190288, Japan"}]},{"given":"Masaki","family":"Osumi","sequence":"additional","affiliation":[{"name":"KOHINATA Limited Liability Company, 2-7-3, Tateba, Naniwa-ku, Osaka 5560020, Japan"}]},{"given":"Koh","family":"Shimokawa","sequence":"additional","affiliation":[{"name":"KOHINATA Limited Liability Company, 2-7-3, Tateba, Naniwa-ku, Osaka 5560020, Japan"}]}],"member":"1968","published-online":{"date-parts":[[2021,6,20]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Mandal, M.K., and Awasthi, A. (2015). 
Understanding Facial Expressions in Communication: Cross-Cultural and Multidisciplinary Perspectives, Springer.","DOI":"10.1007\/978-81-322-1934-7"},{"key":"ref_2","unstructured":"Ekman, P., Friesen, W.V., and Hager, J.C. (2002). Facial Action Coding System, Research Nexus eBook. [2nd ed.]."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Ekman, P., and Rosenberg, E.L. (2005). What the Face Reveals: Basic and Applied Studies of Spontaneous Expression Using the Facial Action Coding System (FACS), Oxford University Press. [2nd ed.].","DOI":"10.1093\/acprof:oso\/9780195179644.001.0001"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"593","DOI":"10.1007\/s12144-016-9448-9","article-title":"Spontaneous facial expressions are different from posed facial expressions: Morphological properties and dynamic sequences","volume":"36","author":"Namba","year":"2017","journal-title":"Curr. Psychol."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"158","DOI":"10.1109\/TBIOM.2020.2977225","article-title":"Crossing domains for AU coding: Perspectives, approaches, and measures","volume":"2","author":"Ertugrul","year":"2020","journal-title":"IEEE Trans. Biom. Behav. Identity Sci."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Baltru\u0161aitis, T., Mahmoud, M., and Robinson, P. (2015, January 4\u20138). Cross-dataset learning and person-specific normalisation for automatic action unit detection. Proceedings of the 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), Ljubljana, Slovenia.","DOI":"10.1109\/FG.2015.7284869"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Baltru\u0161aitis, T., Zadeh, A., Lim, Y.C., and Morency, L.P. (2018, January 15\u201319). OpenFace 2.0: Facial behavior analysis toolkit. Proceedings of the 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG), Xi\u2019an, China.","DOI":"10.1109\/FG.2018.00019"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Ertugrul, I.O., Cohn, J.F., Jeni, L.A., Zhang, Z., Yin, L., and Ji, Q. (2019, January 14\u201318). Cross-domain AU detection: Domains, learning approaches, and measures. Proceedings of the 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG), Lille, France.","DOI":"10.1109\/FG.2019.8756543"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Ertugrul, I.O., Jeni, L.A., Ding, W., and Cohn, J.F. (2019, January 14\u201318). AFAR: A deep learning based tool for automated facial affect recognition. Proceedings of the 14th IEEE International Conference on Automatic Face & Gesture Recognition (FG), Lille, France.","DOI":"10.1109\/FG.2019.8756623"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"227","DOI":"10.1037\/npe0000028","article-title":"Automated facial coding: Validation of basic emotions and FACS AUs in FaceReader","volume":"7","author":"Lewinski","year":"2014","journal-title":"J. Neurosci. Psychol. Econ."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Skiendziel, T., R\u00f6sch, A.G., and Schultheiss, O.C. (2019). Assessing the convergent validity between the automated emotion recognition software Noldus FaceReader 7 and Facial Action Coding System Scoring. 
PLoS ONE, 14.","DOI":"10.1371\/journal.pone.0223905"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"270","DOI":"10.1111\/1467-9280.00054","article-title":"The face of time: Temporal cues in facial expressions of emotion","volume":"9","author":"Edwards","year":"1998","journal-title":"Psychol. Sci."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"41","DOI":"10.1177\/1754073912451349","article-title":"Effects of dynamic aspects of facial expressions: A review","volume":"5","author":"Krumhuber","year":"2013","journal-title":"Emot. Rev."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Perusqu\u00eda-Hern\u00e1ndez, M., Ayabe-Kanamura, S., and Suzuki, K. (2019). Human perception and biosignal-based identification of posed and spontaneous smiles. PLoS ONE, 14.","DOI":"10.1371\/journal.pone.0226328"},{"key":"ref_15","first-page":"57","article-title":"Are people happy when they smile? Affective assessments based on automatic smile genuineness identification","volume":"6","year":"2021","journal-title":"Emot. Stud."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"447","DOI":"10.1037\/emo0000712","article-title":"Emotion recognition from posed and spontaneous dynamic expressions: Human observers versus machine analysis","volume":"21","author":"Krumhuber","year":"2019","journal-title":"Emotion"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"202","DOI":"10.3389\/fpsyg.2018.00202","article-title":"The dynamic features of lip corners in genuine and posed smiles","volume":"9","author":"Guo","year":"2018","journal-title":"Front. Psychol."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Hemamou, L., Felhi, G., Vandenbussche, V., Martin, J.C., and Clavel, C. (2019, January 23). Hirenet: A hierarchical attention model for the automatic analysis of asynchronous video job interviews. Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA.","DOI":"10.1609\/aaai.v33i01.3301573"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Perusquia-Hernandez, M., Dollack, F., Tan, C.K., Namba, S., Ayabe-Kanamura, S., and Suzuki, K. (2020). Facial movement synergies and action unit detection from distal wearable electromyography and computer vision. arXiv.","DOI":"10.1109\/FG52635.2021.9667047"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Cohn, J.F., Ertugrul, I.O., Chu, W.S., Girard, J.M., Jeni, L.A., and Hammal, Z. (2019). Affective facial computing: Generalizability across domains. Multimodal Behav. Anal. Wild, 407\u2013441.","DOI":"10.1016\/B978-0-12-814601-9.00026-2"},{"key":"ref_21","unstructured":"Jeni, L.A., Cohn, J.F., and De La Torre, F. (2015, January 2\u20135). Facing imbalanced data\u2014Recommendations for the use of performance metrics. Proceedings of the Humaine Association Conference on Affective Computing and Intelligent Interaction, Washington, DC, USA."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Zadeh, A., Chong, L.Y., Baltrusaitis, T., and Morency, L.P. (2017, January 22\u201329). Convolutional experts constrained local model for 3d facial landmark detection. Proceedings of the IEEE International Conference on Computer Vision Workshops, Venice, Italy.","DOI":"10.1109\/ICCVW.2017.296"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Baltrusaitis, T., Robinson, P., and Morency, L.P. (2013, January 1\u20138). Constrained local neural fields for robust facial landmark detection in the wild. 
Proceedings of the IEEE International Conference On Computer Vision Workshops, Sydney, Australia.","DOI":"10.1109\/ICCVW.2013.54"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Kollias, D., Nicolaou, M.A., Kotsia, I., Zhao, G., and Zafeiriou, S. (2017, January 21\u201326). Recognition of affect in the wild using deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.247"},{"key":"ref_25","unstructured":"Kollias, D., and Zafeiriou, S. (2018). Aff-wild2: Extending the Aff-wild database for affect recognition. arXiv."},{"key":"ref_26","unstructured":"Kollias, D., and Zafeiriou, S. (2018). A multi-task learning & generation framework: Valence\u2013arousal, action units & primary expressions. arXiv."},{"key":"ref_27","unstructured":"Kollias, D., and Zafeiriou, S. (2019). Expression, affect, action unit recognition: Aff-wild2, multi-task learning and ArcFace. arXiv."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"907","DOI":"10.1007\/s11263-019-01158-4","article-title":"Deep affect prediction in-the-wild: Aff-wild database and challenge, deep architectures, and beyond","volume":"127","author":"Kollias","year":"2019","journal-title":"Int. J. Comput. Vis."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Zafeiriou, S., Kollias, D., Nicolaou, M.A., Papaioannou, A., Zhao, G., and Kotsia, I. (2017, January 21\u201326). Aff-wild: Valence and arousal \u2018n-the-Wild\u2019 challenge. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.248"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Mavadati, M., Sanger, P., and Mahoor, M.H. (2016, January 27\u201330). Extended DISFA dataset: Investigating posed and spontaneous facial expressions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Las Vegas, NV, USA.","DOI":"10.1109\/CVPRW.2016.182"},{"key":"ref_31","unstructured":"Girard, J.M., Chu, W.S., Jeni, L.A., and Cohn, J.F. (June, January 30). Sayette group formation task (GFT) spontaneous facial expression database. Proceedings of the 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG), Washington, DC, USA."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"5","DOI":"10.1109\/T-AFFC.2011.20","article-title":"The semaine database: Annotated multimodal records of emotionally colored conversations between a person and a limited agent","volume":"3","author":"McKeown","year":"2011","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"692","DOI":"10.1016\/j.imavis.2014.06.002","article-title":"Bp4d-spontaneous: A high-resolution spontaneous 3D dynamic facial expression database","volume":"32","author":"Zhang","year":"2014","journal-title":"Image Vis. Comput."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Savran, A., Aly\u00fcz, N., Dibeklio\u011flu, H., \u00c7eliktutan, O., G\u00f6kberk, B., Sankur, B., and Akarun, L. (2008, January 7\u20138). Bosphorus database for 3D face analysis. 
Proceedings of the European Workshop on Biometrics and Identity Management, Roskilde, Denmark.","DOI":"10.1007\/978-3-540-89991-4_6"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"151","DOI":"10.1109\/T-AFFC.2013.4","article-title":"DISFA: A spontaneous facial action intensity database","volume":"4","author":"Mavadati","year":"2013","journal-title":"IEEE Trans. Affect. Comput."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Valstar, M.F., Jiang, B., Mehu, M., Pantic, M., and Scherer, K. (2011, January 21\u201325). The first facial expression recognition and analysis challenge. Proceedings of the 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG), Santa Barbara, CA, USA.","DOI":"10.1109\/FG.2011.5771374"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Lucey, P., Cohn, J.F., Prkachin, K.M., Solomon, P.E., and Matthews, I. (2011, January 21\u201325). Painful data: The UNBC-McMaster shoulder pain expression archive database. Proceedings of the 2011 IEEE International Conference on Automatic Face & Gesture Recognition (FG), Santa Barbara, CA, USA.","DOI":"10.1109\/FG.2011.5771462"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"13","DOI":"10.1016\/j.imavis.2016.05.009","article-title":"Dense 3D face alignment from 2D video for real-time use","volume":"58","author":"Jeni","year":"2017","journal-title":"Image Vis. Comput."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Zhang, Z., Girard, J.M., Wu, Y., Zhang, X., Liu, P., Ciftci, U., Canavan, S., Reale, M., Horowitz, A., and Yang, H. (2016, January 27\u201330). Multimodal spontaneous emotion corpus for human behavior analysis. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.374"},{"key":"ref_40","unstructured":"Dowle, M., and Srinivasan, A. (2021, June 19). data.table: Extension of \u2018data.frame\u2019. R Package, Version 1.13.2. Available online: Https:\/\/CRAN.R-project.org\/package=data.table."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1186\/1471-2105-12-77","article-title":"pROC: An open-source package for R and S+ to analyze and compare ROC curves","volume":"12","author":"Robin","year":"2011","journal-title":"BMC Bioinform."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"1686","DOI":"10.21105\/joss.01686","article-title":"Welcome to the Tidyverse","volume":"4","author":"Wickham","year":"2019","journal-title":"J. Open Source Softw."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"284","DOI":"10.1037\/1040-3590.6.4.284","article-title":"Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology","volume":"6","author":"Cicchetti","year":"1994","journal-title":"Psychol. Assess."},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"251","DOI":"10.1038\/s41586-020-3037-7","article-title":"Sixteen facial expressions occur in similar contexts worldwide","volume":"589","author":"Cowen","year":"2021","journal-title":"Nature"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Ekman, P. (2003). Emotions Revealed, Times Books.","DOI":"10.1136\/sbmj.0405184"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"16","DOI":"10.1177\/1754073912457228","article-title":"Coherence between emotion and facial expression: Evidence from laboratory experiments","volume":"5","author":"Reisenzein","year":"2013","journal-title":"Emot. 
Rev."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"1081","DOI":"10.1080\/02699931.2015.1049124","article-title":"Perceptual and affective mechanisms in facial expression recognition: An integrative review","volume":"30","author":"Calvo","year":"2016","journal-title":"Cogn. Emot."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/s42761-020-00030-w","article-title":"Reconsidering the Duchenne smile: Formalizing and testing hypotheses about eye constriction and positive emotion","volume":"2","author":"Girard","year":"2021","journal-title":"Affect. Sci."},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"3799","DOI":"10.3389\/fpsyg.2020.612654","article-title":"A novel test of the Duchenne marker: Smiles after botulinum toxin treatment for crow\u2019s feet wrinkles","volume":"11","author":"Etcoff","year":"2021","journal-title":"Front. Psychol."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"234","DOI":"10.1037\/emo0000410","article-title":"Generalizing Duchenne to sad expressions with binocular rivalry and perception ratings","volume":"19","author":"Malek","year":"2019","journal-title":"Emotion"},{"key":"ref_51","unstructured":"Miller, E.J., Krumhuber, E.G., and Dawel, A. (2020). Observers perceive the Duchenne marker as signaling only intensity for sad expressions, not genuine emotion. Emotion."},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"29","DOI":"10.3389\/frobt.2021.540193","article-title":"Comparison between the facial flow lines of androids and humans","volume":"8","author":"Ishihara","year":"2021","journal-title":"Front. Robot. AI"},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"1842","DOI":"10.3389\/fpsyg.2020.01842","article-title":"The 4D space-time dimensions of facial perception","volume":"11","author":"Burt","year":"2020","journal-title":"Front. Psychol."},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3272127.3275073","article-title":"Practical dynamic facial appearance modeling and acquisition","volume":"37","author":"Gotardo","year":"2018","journal-title":"ACM Trans. Graph."},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1038\/s41598-021-83077-4","article-title":"Distinct temporal features of genuine and deliberate facial expressions of surprise","volume":"11","author":"Namba","year":"2021","journal-title":"Sci. Rep."},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1007\/s10919-010-0095-9","article-title":"FACSGen: A tool to synthesize emotional facial expressions through systematic manipulation of facial action units","volume":"35","author":"Roesch","year":"2011","journal-title":"J. 
Nonverbal Behav."},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"351","DOI":"10.1037\/a0026632","article-title":"FACSGen 2.0 animation software: Generating three-dimensional FACS-valid facial expressions for emotion research","volume":"12","author":"Krumhuber","year":"2012","journal-title":"Emotion"},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"1187","DOI":"10.1037\/emo0000287","article-title":"Gently does it: Humans outperform a software classifier in recognizing subtle, nonstereotypical facial expressions","volume":"17","author":"Yitzhak","year":"2017","journal-title":"Emotion"},{"key":"ref_59","doi-asserted-by":"crossref","first-page":"686","DOI":"10.3758\/s13428-020-01443-y","article-title":"Human and machine validation of 14 databases of dynamic facial expressions","volume":"53","author":"Krumhuber","year":"2021","journal-title":"Behav. Res. Methods"},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Yan, Y., Lu, K., Xue, J., Gao, P., and Lyu, J. (2019, January 8\u201312). Feafa: A well-annotated dataset for facial expression analysis and 3D facial animation. Proceedings of the IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Shanghai, China.","DOI":"10.1109\/ICMEW.2019.0-104"},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Dupr\u00e9, D., Krumhuber, E.G., K\u00fcster, D., and McKeown, G.J. (2020). A performance comparison of eight commercially available automatic classifiers for facial affect recognition. PLoS ONE, 15.","DOI":"10.1371\/journal.pone.0231968"},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"990","DOI":"10.25046\/aj0602114","article-title":"A New Video Based Emotions Analysis System (VEMOS): An Efficient Solution Compared to iMotions Affectiva Analysis Software","volume":"6","author":"Jmour","year":"2021","journal-title":"Adv. Sci. Technol. Eng. Syst. J."},{"key":"ref_63","unstructured":"Ong, D., Wu, Z., Tan, Z.X., Reddan, M., Kahhale, I., Mattek, A., and Zaki, J. (2019). Modeling emotion in complex stories: The Stanford Emotional Narratives Dataset. IEEE Trans. Affect. Comput., 1\u201316."},{"key":"ref_64","unstructured":"Cheong, J.H., Xie, T., Byrne, S., and Chang, L.J. (2021). Py-Feat: Python Facial Expression Analysis Toolbox. arXiv."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/12\/4222\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T06:19:37Z","timestamp":1760163577000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/12\/4222"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,6,20]]},"references-count":64,"journal-issue":{"issue":"12","published-online":{"date-parts":[[2021,6]]}},"alternative-id":["s21124222"],"URL":"https:\/\/doi.org\/10.3390\/s21124222","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,6,20]]}}}
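
The record above is a raw Crossref REST API response ("message-type": "work") for the article's DOI, 10.3390/s21124222. As a usage note, the sketch below shows one way such a record could be retrieved and its core bibliographic fields ("title", "author", "issued", "is-referenced-by-count") read out of the "message" payload. It is a minimal illustration in Python, assuming the public api.crossref.org endpoint; the requests library, the helper name fetch_crossref_work, and the mailto contact string are illustrative choices not taken from the record, and a live response may differ from the snapshot above (for example, the citation count and "deposited" timestamp are updated over time).

# Minimal sketch: fetch and parse a Crossref work record (assumptions noted above).
import requests

DOI = "10.3390/s21124222"  # DOI taken from the record above

def fetch_crossref_work(doi: str) -> dict:
    """Fetch a Crossref work record and return the 'message' payload."""
    url = f"https://api.crossref.org/works/{doi}"
    # The mailto address is an illustrative placeholder, not part of the record.
    headers = {"User-Agent": "example-client (mailto:user@example.org)"}
    resp = requests.get(url, headers=headers, timeout=30)
    resp.raise_for_status()
    body = resp.json()
    # A work record reports status "ok" and message-type "work", as in the snapshot above.
    if body.get("status") != "ok" or body.get("message-type") != "work":
        raise ValueError("Unexpected Crossref response shape")
    return body["message"]

if __name__ == "__main__":
    work = fetch_crossref_work(DOI)
    # "title" and "container-title" are lists; "author" entries carry "given"/"family".
    title = work["title"][0]
    authors = ", ".join(f'{a["given"]} {a["family"]}' for a in work["author"])
    year = work["issued"]["date-parts"][0][0]
    print(f"{authors} ({year}). {title}.")
    print(f"{work['container-title'][0]}, {work['volume']}({work['journal-issue']['issue']}), "
          f"{work['page']}. https://doi.org/{work['DOI']}")
    print(f"Cited by (Crossref): {work['is-referenced-by-count']}")
    print(f"References deposited: {work['references-count']}")

Running this prints a citation-style summary of the record (authors, year, title, journal, volume/issue, page, DOI) followed by the current Crossref citation count and the number of deposited references (64 in the snapshot above).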