{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,6]],"date-time":"2026-04-06T12:16:18Z","timestamp":1775477778391,"version":"3.50.1"},"reference-count":51,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2025,1,28]],"date-time":"2025-01-28T00:00:00Z","timestamp":1738022400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,1,28]],"date-time":"2025-01-28T00:00:00Z","timestamp":1738022400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001215","name":"La Trobe University","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100001215","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Comput Vis"],"published-print":{"date-parts":[[2025,6]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Self-supervised facial representation learning (SFRL) methods, especially contrastive learning (CL) methods, have become increasingly popular due to their ability to perform face understanding without heavily relying on large-scale, well-annotated datasets. However, current CL-based SFRL methods still learn facial representations unsatisfactorily due to their tendency to learn pose-insensitive features, which results in the loss of useful pose details. This may stem from inappropriate positive\/negative pair selection within CL. To overcome this challenge, we propose a Pose-disentangled Contrastive Facial Representation Learning (PCFRL) framework to enhance pose awareness for SFRL. 
We achieve this by explicitly disentangling pose-aware features from non-pose face-aware features and by introducing appropriate sample calibration schemes for better CL with the disentangled features. In PCFRL, we first devise a pose-disentangled decoder with a carefully designed orthogonalizing regularization to perform the disentanglement, so that learning the pose-aware and non-pose face-aware features does not interfere with each other. Then, we introduce a false-negative pair calibration module to overcome the issue that the two types of disentangled features may not share the same negative pairs for CL. Our calibration employs a novel neighborhood-cohesive pair alignment method to identify pose and face false-negative pairs, respectively, and helps calibrate them into appropriate positive pairs. Lastly, we devise two calibrated CL losses, namely the calibrated pose-aware and face-aware CL losses, to learn the calibrated pairs adaptively and more effectively, ultimately enhancing learning with the disentangled features and providing robust facial representations for various downstream tasks. In the experiments, we perform linear evaluations with SFRL on four challenging downstream facial tasks: facial expression recognition, face recognition, facial action unit detection, and head pose estimation. Experimental results show that PCFRL outperforms existing state-of-the-art methods by a substantial margin, demonstrating the importance of improving pose awareness for SFRL. 
Our evaluation code and model will be available at <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"https:\/\/github.com\/fulaoze\/CV\/tree\/main\" ext-link-type=\"uri\">https:\/\/github.com\/fulaoze\/CV\/tree\/main<\/jats:ext-link>.\n<\/jats:p>","DOI":"10.1007\/s11263-025-02348-z","type":"journal-article","created":{"date-parts":[[2025,1,28]],"date-time":"2025-01-28T14:19:55Z","timestamp":1738073995000},"page":"3727-3745","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":23,"title":["Sample-Cohesive Pose-Aware Contrastive Facial Representation Learning"],"prefix":"10.1007","volume":"133","author":[{"given":"Yuanyuan","family":"Liu","sequence":"first","affiliation":[]},{"given":"Shaoze","family":"Feng","sequence":"additional","affiliation":[]},{"given":"Shuyang","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Yibing","family":"Zhan","sequence":"additional","affiliation":[]},{"given":"Dapeng","family":"Tao","sequence":"additional","affiliation":[]},{"given":"Zijing","family":"Chen","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5004-8975","authenticated-orcid":false,"given":"Zhe","family":"Chen","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,1,28]]},"reference":[{"key":"2348_CR1","first-page":"9912","volume":"33","author":"M Caron","year":"2020","unstructured":"Caron, M., Misra, I., Mairal, J., Goyal, P., Bojanowski, P., & Joulin, A. (2020). Unsupervised learning of visual features by contrasting cluster assignments. Advances in Neural Information Processing Systems, 33, 9912\u20139924.","journal-title":"Advances in Neural Information Processing Systems"},{"key":"2348_CR2","first-page":"9660","volume":"2021","author":"J-R Chang","year":"2021","unstructured":"Chang, J.-R., Chen, Y., & Chiu, W.-C. (2021). 
Learning facial representations from the cycle-consistency of face. IEEE\/CVF International Conference on Computer Vision (ICCV), 2021, 9660\u20139669.","journal-title":"IEEE\/CVF International Conference on Computer Vision (ICCV)"},{"key":"2348_CR3","unstructured":"Chen, T., Kornblith, S., Norouzi, M., & Hinton, G. (2020). A simple framework for contrastive learning of visual representations. International conference on machine learning (pp. 1597\u20131607)."},{"key":"2348_CR4","doi-asserted-by":"crossref","unstructured":"Chen, X., & He, K. (2021). Exploring simple siamese representation learning. Proc. ieee conf. comput. vis. pattern recognit. (pp. 15750\u201315758).","DOI":"10.1109\/CVPR46437.2021.01549"},{"key":"2348_CR5","doi-asserted-by":"crossref","unstructured":"Chen, X., Xie, S., & He, K. (2021). An empirical study of training self-supervised vision transformers. In 2021 IEEE\/CVF International Conference on Computer Vision (ICCV), pp. 9620\u20139629.","DOI":"10.1109\/ICCV48922.2021.00950"},{"key":"2348_CR6","unstructured":"Coates, A., Ng, A., & Lee, H. (2011). An analysis of single-layer networks in unsupervised feature learning. Proceedings of the fourteenth international conference on artificial intelligence and statistics (pp. 215\u2013223)."},{"key":"2348_CR7","doi-asserted-by":"crossref","unstructured":"Dalal, N., & Triggs, B. (2005). Histograms of oriented gradients for human detection. In 2005 IEEE computer society conference on computer vision and pattern recognition (cvpr\u201905) (vol.\u00a01, pp. 886\u2013893).","DOI":"10.1109\/CVPR.2005.177"},{"key":"2348_CR8","doi-asserted-by":"crossref","unstructured":"Datta, S., Sharma, G., & Jawahar, C. (2018). Unsupervised learning of face representations. In 2018 13th IEEE international conference on automatic face and gesture recognition (fg 2018) (pp. 
135\u2013142).","DOI":"10.1109\/FG.2018.00029"},{"key":"2348_CR9","doi-asserted-by":"crossref","unstructured":"Deng, J., Guo, J., Xue, N., & Zafeiriou, S. (2019). Arcface: Additive angular margin loss for deep face recognition. Proc. IEEE Conf. Comput. vis. pattern Recognit. (pp. 4690\u20134699).","DOI":"10.1109\/CVPR.2019.00482"},{"key":"2348_CR10","doi-asserted-by":"crossref","unstructured":"Dwibedi, D., Aytar, Y., Tompson, J., Sermanet, P., & Zisserman, A. (2021). With a little help from my friends: Nearest-neighbor contrastive learning of visual representations. Proc. IEEE Conf. Comput. vis. Pattern Recognit. (pp. 9588\u20139597).","DOI":"10.1109\/ICCV48922.2021.00945"},{"key":"2348_CR11","unstructured":"Florea, C., Florea, L., Badea, M.-S., Vertan, C., & Racoviteanu, A. (2019). Annealed label transfer for face expression recognition. Bmvc (p.\u00a0104)."},{"key":"2348_CR12","doi-asserted-by":"crossref","unstructured":"Gamble, J. A., & Huang, J. (2020). Convolutional neural network for human activity recognition and identification. In 2020 IEEE International Systems Conference (syscon) (p.\u00a01-7).","DOI":"10.1109\/SysCon47679.2020.9275924"},{"key":"2348_CR13","unstructured":"GE, C., Wang, J., Tong, Z., Chen, S., Song, Y., & Luo, P. (2023). Soft neighbors are positive supporters in contrastive visual representation learning. The eleventh international conference on learning representations."},{"key":"2348_CR14","doi-asserted-by":"crossref","unstructured":"Goodfellow, I. J., Erhan, D., Carrier, P. L., Courville, A., Mirza, M., Hamner, B., ... & Bengio, Y. (2013). Challenges in representation learning: A report on three machine learning contests. In Neural information processing: 20th international conference, daegu, korea. Proceedings, Part III 20 (pp. 117\u2013124). Springer berlin heidelberg.","DOI":"10.1007\/978-3-642-42051-1_16"},{"key":"2348_CR15","doi-asserted-by":"crossref","unstructured":"Harini, R., & Chandrasekar, C. (2012). 
Image segmentation using nearest neighbor classifiers based on kernel formation for medical images. International conference on pattern recognition, informatics and medical engineering (prime-2012) (p.\u00a0261-265).","DOI":"10.1109\/ICPRIME.2012.6208355"},{"key":"2348_CR16","doi-asserted-by":"crossref","unstructured":"He, K., Fan, H., Wu, Y., Xie, S., & Girshick, R. (2020). Momentum contrast for unsupervised visual representation learning. Proc. IEEE Conf. Comput. vis. Pattern Recognit. (pp. 9729\u20139738).","DOI":"10.1109\/CVPR42600.2020.00975"},{"key":"2348_CR17","volume-title":"Advances in neural information processing systems (vol. 31)","author":"T Jakab","year":"2018","unstructured":"Jakab, T., Gupta, A., Bilen, H., & Vedaldi, A. (2018). Unsupervised learning of object landmarks through conditional image generation. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, & R. Garnett (Eds.), Advances in neural information processing systems (vol. 31). Newry: Curran Associates, Inc."},{"issue":"4","key":"2348_CR18","first-page":"66","volume":"1","author":"A Krizhevsky","year":"2009","unstructured":"Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images. Handbook Systemic Autoimmune Diseases, 1(4), 66.","journal-title":"Handbook Systemic Autoimmune Diseases"},{"key":"2348_CR19","doi-asserted-by":"crossref","unstructured":"Li, W., Abtahi, F., Zhu, Z., & Yin, L. (2017). Eac-net: A region-based deep enhancing and cropping approach for facial action unit detection. In 2017 12th IEEE international conference on automatic face and gesture recognition (fg 2017) (pp. 103\u2013110).","DOI":"10.1109\/FG.2017.136"},{"key":"2348_CR20","doi-asserted-by":"publisher","first-page":"3212","DOI":"10.1109\/TIP.2023.3279978","volume":"32","author":"Y Li","year":"2023","unstructured":"Li, Y., & Shan, S. (2023). Contrastive learning of person-independent representations for facial action unit detection. 
IEEE Transactions on Image Processing, 32, 3212\u20133225.","journal-title":"IEEE Transactions on Image Processing"},{"issue":"1","key":"2348_CR21","doi-asserted-by":"publisher","first-page":"302","DOI":"10.1109\/TPAMI.2020.3011063","volume":"44","author":"Y Li","year":"2020","unstructured":"Li, Y., Zeng, J., & Shan, S. (2020). Learning representations for facial actions from unlabeled videos. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1), 302\u2013317.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"issue":"1","key":"2348_CR22","doi-asserted-by":"publisher","first-page":"302","DOI":"10.1109\/TPAMI.2020.3011063","volume":"44","author":"Y Li","year":"2022","unstructured":"Li, Y., Zeng, J., & Shan, S. (2022). Learning representations for facial actions from unlabeled videos. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1), 302\u2013317.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"2348_CR23","doi-asserted-by":"crossref","unstructured":"Li, Y., Zeng, J., Shan, S., & Chen, X. (2019). Self-supervised representation learning from videos for facial action unit detection. In Proc. IEEE Conf. Comput. vis. Pattern Recognit. (pp. 10924\u201310933).","DOI":"10.1109\/CVPR.2019.01118"},{"key":"2348_CR24","doi-asserted-by":"crossref","unstructured":"Liu, S., Johns, E., & Davison, A. J. (2019). End-to-end multi-task learning with attention. In Proc. IEEE Conf. Comput. vis. Pattern Recognit. (pp. 1871\u20131880).","DOI":"10.1109\/CVPR.2019.00197"},{"key":"2348_CR25","doi-asserted-by":"crossref","unstructured":"Liu, W., Wen, Y., Yu, Z., Li, M., Raj, B., & Song, L. (2017). Sphereface: Deep hypersphere embedding for face recognition. In Proc. IEEE Conf. Comput. vis. Pattern Recognit. (pp. 
212\u2013220).","DOI":"10.1109\/CVPR.2017.713"},{"key":"2348_CR26","doi-asserted-by":"publisher","first-page":"195","DOI":"10.1016\/j.ins.2021.07.034","volume":"578","author":"Y Liu","year":"2021","unstructured":"Liu, Y., Dai, W., Fang, F., Chen, Y., Huang, R., Wang, R., & Wan, B. (2021). Dynamic multi-channel metric network for joint pose-aware and identity-invariant facial expression recognition. Information Sciences, 578, 195\u2013213.","journal-title":"Information Sciences"},{"key":"2348_CR27","doi-asserted-by":"crossref","unstructured":"Liu, Y., Wang, W., Zhan, Y., Feng, S., Liu, K., & Chen, Z. (2023, June). Pose-disentangled contrastive learning for self-supervised facial representation. In Proc. IEEE Conf. Comput. vis. Pattern Recognit. (pp.\u00a09717-9728).","DOI":"10.1109\/CVPR52729.2023.00937"},{"key":"2348_CR28","unstructured":"Lu, L., Tavabi, L., & Soleymani, M. (2020). Self-supervised learning for facial action unit recognition through temporal consistency. British machine vision conference."},{"key":"2348_CR29","doi-asserted-by":"publisher","first-page":"4149","DOI":"10.1109\/TIP.2022.3181496","volume":"31","author":"PC Madhusudana","year":"2022","unstructured":"Madhusudana, P. C., Birkbeck, N., Wang, Y., Adsumilli, B., & Bovik, A. C. (2022). Image quality assessment using contrastive learning. IEEE Transactions on Image Processing, 31, 4149\u20134161.","journal-title":"IEEE Transactions on Image Processing"},{"issue":"2","key":"2348_CR30","doi-asserted-by":"publisher","first-page":"151","DOI":"10.1109\/T-AFFC.2013.4","volume":"4","author":"SM Mavadati","year":"2013","unstructured":"Mavadati, S. M., Mahoor, M. H., Bartlett, K., Trinh, P., & Cohn, J. F. (2013). Disfa: A spontaneous facial action intensity database. IEEE Transactions on Affective Computing, 4(2), 151\u2013160.","journal-title":"IEEE Transactions on Affective Computing"},{"key":"2348_CR31","doi-asserted-by":"crossref","unstructured":"McCann, S., & Lowe, D. G. (2012). 
Local naive bayes nearest neighbor for image classification. In 2012 IEEE Conference on Computer Vision and Pattern Recognition (pp.\u00a03650\u20133656).","DOI":"10.1109\/CVPR.2012.6248111"},{"key":"2348_CR32","doi-asserted-by":"publisher","unstructured":"Nagrani, A., Chung, J. S., & Zisserman, A. (2017). VoxCeleb: A Large-Scale Speaker Identification Dataset. Proc. interspeech 2017 (pp. 2616\u20132620). https:\/\/doi.org\/10.21437\/Interspeech.2017-950","DOI":"10.21437\/Interspeech.2017-950"},{"issue":"7","key":"2348_CR33","doi-asserted-by":"publisher","first-page":"971","DOI":"10.1109\/TPAMI.2002.1017623","volume":"24","author":"T Ojala","year":"2002","unstructured":"Ojala, T., Pietikainen, M., & Maenpaa, T. (2002). Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(7), 971\u2013987.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"2348_CR34","doi-asserted-by":"crossref","unstructured":"Parkhi, O., Vedaldi, A., & Zisserman, A. (2015). Deep face recognition. Bmvc 2015-proceedings of the british machine vision conference 2015.","DOI":"10.5244\/C.29.41"},{"key":"2348_CR35","unstructured":"Rahutomo, F., Kitasuka, T., & Aritsugi, M. (2012). Semantic cosine similarity. In The 7th international student conference on advanced science and technology icast (vol.\u00a04, p.\u00a01)."},{"key":"2348_CR36","doi-asserted-by":"crossref","unstructured":"Roy, S., & Etemad, A. (2021). Self-supervised contrastive learning of multi-view facial expressions. In ICMI - proc. int. conf. multimodal interact. (pp. 253\u2013257).","DOI":"10.1145\/3462244.3479955"},{"key":"2348_CR37","doi-asserted-by":"crossref","unstructured":"Sagonas, C., Tzimiropoulos, G., Zafeiriou, S., & Pantic, M. (2013). 300 faces in-the-wild challenge: The first facial landmark localization challenge. 
In Proceedings of the IEEE international conference on computer vision workshops (pp. 397\u2013403).","DOI":"10.1109\/ICCVW.2013.59"},{"key":"2348_CR38","doi-asserted-by":"crossref","unstructured":"Samanta, A., & Guha, T. (2017). On the role of head motion in affective expression. In 2017 IEEE international conference on acoustics, speech and signal processing (icassp) (pp. 2886\u20132890).","DOI":"10.1109\/ICASSP.2017.7952684"},{"key":"2348_CR39","doi-asserted-by":"crossref","unstructured":"Shao, Z., Liu, Z., Cai, J., & Ma, L. (2018). Deep adaptive attention for joint facial action unit detection and face alignment. In Proceedings of the European conference on computer vision (ECCV) (pp. 705\u2013720).","DOI":"10.1007\/978-3-030-01261-8_43"},{"key":"2348_CR40","unstructured":"Shu, Y., Gu, X., Yang, G.-Z., & Lo, B. P. L. (2022). Revisiting self-supervised contrastive learning for facial expression recognition. In 33rd British machine vision conference 2022, BMVC 2022, London, November 21\u201324, 2022. BMVA Press."},{"key":"2348_CR41","doi-asserted-by":"crossref","unstructured":"Shu, Z., Sahasrabudhe, M., Guler, R. A., Samaras, D., Paragios, N., & Kokkinos, I. (2018). Deforming autoencoders: Unsupervised disentangling of shape and appearance. In Proceedings of the european conference on computer vision (ECCV) (pp. 650\u2013665).","DOI":"10.1007\/978-3-030-01249-6_40"},{"key":"2348_CR42","doi-asserted-by":"crossref","unstructured":"Wiles, O., Koepke, A. S., & Zisserman, A. (2018). Self-supervised learning of a facial attribute embedding from video. British machine vision conference.","DOI":"10.1109\/ICCVW.2019.00364"},{"key":"2348_CR43","doi-asserted-by":"crossref","unstructured":"Yang, S., Wang, Y., van\u00a0de Weijer, J., Herranz, L., & Jui, S. (2021). Exploiting the intrinsic neighborhood structure for source-free domain adaptation. 
CoRR, arXiv:2110.04202.","DOI":"10.1109\/ICCV48922.2021.00885"},{"key":"2348_CR44","unstructured":"Yin, L., Wei, X., Sun, Y., Wang, J., & Rosato, M. (2006). A 3d facial expression database for facial behavior research. In 7th international conference on automatic face and gesture recognition (fgr06) (pp.\u00a0211-216)."},{"key":"2348_CR45","doi-asserted-by":"crossref","unstructured":"Yu, J., Zhou, H., Zhan, Y., & Tao, D. (2021). Deep graph-neighbor coherence preserving network for unsupervised cross-modal hashing. InProceedings of the aaai conference on artificial intelligence (vol.\u00a035, pp. 4626\u20134634).","DOI":"10.1609\/aaai.v35i5.16592"},{"key":"2348_CR46","doi-asserted-by":"crossref","unstructured":"Zhang, R., Isola, P., & Efros, A. A. (2017). Split-brain autoencoders: Unsupervised learning by cross-channel prediction. In Proc. IEEE conf. comput. vis. pattern recognit. (pp. 1058\u20131067).","DOI":"10.1109\/CVPR.2017.76"},{"key":"2348_CR47","doi-asserted-by":"crossref","unstructured":"Zhao, K., Chu, W.-S., & Zhang, H. (2016). Deep region and multi-label learning for facial action unit detection. In Proc. IEEE conf. comput. vis. pattern recognit. (pp. 3391\u20133399).","DOI":"10.1109\/CVPR.2016.369"},{"key":"2348_CR48","unstructured":"Zhao, S., Cai, H., Liu, H., Zhang, J., & Chen, S. (2018). Feature selection mechanism in cnns for facial expression recognition. Bmvc (vol.\u00a012, p.\u00a0317)."},{"issue":"5","key":"2348_CR49","doi-asserted-by":"publisher","first-page":"347","DOI":"10.1080\/02564602.2015.1017542","volume":"32","author":"X Zhao","year":"2015","unstructured":"Zhao, X., Shi, X., & Zhang, S. (2015). Facial expression recognition via deep learning. IETE Technical Review, 32(5), 347\u2013355.","journal-title":"IETE Technical Review"},{"key":"2348_CR50","unstructured":"Zheng, T., & Deng, W. (2018). Cross-pose lfw: A database for studying cross-pose face recognition in unconstrained environments. 
Beijing University of Posts and Telecommunications, Tech. Rep, 5(7)."},{"key":"2348_CR51","doi-asserted-by":"publisher","unstructured":"Zhu, X., Lei, Z., Liu, X., Shi, H., & Li, S. Z. (2016). Face alignment across large poses: A 3d solution. In 2016 IEEE conference on computer vision and pattern recognition (cvpr) (pp.\u00a0146\u2013155). https:\/\/doi.org\/10.1109\/CVPR.2016.23","DOI":"10.1109\/CVPR.2016.23"}],"container-title":["International Journal of Computer Vision"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02348-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11263-025-02348-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11263-025-02348-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,5,10]],"date-time":"2025-05-10T06:57:10Z","timestamp":1746860230000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11263-025-02348-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,28]]},"references-count":51,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,6]]}},"alternative-id":["2348"],"URL":"https:\/\/doi.org\/10.1007\/s11263-025-02348-z","relation":{},"ISSN":["0920-5691","1573-1405"],"issn-type":[{"value":"0920-5691","type":"print"},{"value":"1573-1405","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,1,28]]},"assertion":[{"value":"29 March 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 January 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article 
History"}},{"value":"28 January 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"There is no conflict of interest in our work.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}