{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,24]],"date-time":"2026-03-24T15:00:59Z","timestamp":1774364459989,"version":"3.50.1"},"reference-count":44,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2022,10,27]],"date-time":"2022-10-27T00:00:00Z","timestamp":1666828800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,10,27]],"date-time":"2022-10-27T00:00:00Z","timestamp":1666828800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61871196"],"award-info":[{"award-number":["61871196"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62001176"],"award-info":[{"award-number":["62001176"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"National Key Research and Development Program of China","award":["2019YFC1604700"],"award-info":[{"award-number":["2019YFC1604700"]}]},{"DOI":"10.13039\/501100003392","name":"Natural Science Foundation of Fujian Province","doi-asserted-by":"publisher","award":["2020J01085"],"award-info":[{"award-number":["2020J01085"]}],"id":[{"id":"10.13039\/501100003392","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2023,4]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Can a computer evaluate an athlete\u2019s performance automatically? 
Many action quality assessment (AQA) methods have been proposed in recent years. However, limited by the randomness of video sampling and by simple model-training strategies, the performance of existing AQA methods can still be improved. To this end, a Gaussian guided frame sequence encoder network is proposed in this paper. In the proposed method, the image feature of each video frame is extracted by a ResNet model. A frame sequence encoder network is then applied to model temporal information and generate an action quality feature. Finally, a fully connected network is designed to predict the action quality score. To train the proposed method effectively, inspired by the final score calculation rule of the Olympic Games, a Gaussian loss function is employed to compute the error between the predicted score and the label score. The proposed method is evaluated on the AQA-7 and MTL\u2013AQA datasets. The experimental results confirm that, compared with state-of-the-art methods, the proposed method achieves better performance. 
Detailed ablation experiments are also conducted to verify the effectiveness of each component of the model.<\/jats:p>","DOI":"10.1007\/s40747-022-00892-6","type":"journal-article","created":{"date-parts":[[2022,10,27]],"date-time":"2022-10-27T11:07:47Z","timestamp":1666868867000},"page":"1963-1974","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["Gaussian guided frame sequence encoder network for action quality assessment"],"prefix":"10.1007","volume":"9","author":[{"given":"Ming-Zhe","family":"Li","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5536-5224","authenticated-orcid":false,"given":"Hong-Bo","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Li-Jia","family":"Dong","sequence":"additional","affiliation":[]},{"given":"Qing","family":"Lei","sequence":"additional","affiliation":[]},{"given":"Ji-Xiang","family":"Du","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,10,27]]},"reference":[{"issue":"1","key":"892_CR1","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1038\/s41598-022-09293-8","volume":"12","author":"H Basak","year":"2022","unstructured":"Basak H, Kundu R, Singh PK, Ijaz MF, Wo\u017aniak M, Sarkar R (2022) A union of deep learning and swarm-based optimization for 3d human action recognition. Sci Rep 12(1):1\u201317","journal-title":"Sci Rep"},{"issue":"8","key":"892_CR2","doi-asserted-by":"publisher","first-page":"1798","DOI":"10.1109\/TPAMI.2013.50","volume":"35","author":"Y Bengio","year":"2013","unstructured":"Bengio Y, Courville A, Vincent P (2013) Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell 35(8):1798\u20131828. 
https:\/\/doi.org\/10.1109\/TPAMI.2013.50","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"892_CR3","doi-asserted-by":"publisher","unstructured":"Carreira J, Zisserman A (2017) Quo vadis, action recognition? a new model and the kinetics dataset. In: 2017 IEEE conference on computer vision and pattern recognition, pp 4724\u20134733. https:\/\/doi.org\/10.1109\/CVPR.2017.502","DOI":"10.1109\/CVPR.2017.502"},{"key":"892_CR4","doi-asserted-by":"publisher","unstructured":"Deng J, Dong W, Socher R, Li L-J, Li K, Fei-Fei L (2009) Imagenet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, pp 248\u2013255. https:\/\/doi.org\/10.1109\/CVPR.2009.5206848","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"892_CR5","doi-asserted-by":"publisher","unstructured":"Dong L-J, Zhang H-B, Shi Q, Lei Q, Du J-X, Gao S (2021) Learning and fusing multiple hidden substages for action quality assessment. Knowl Based Syst 107388. https:\/\/doi.org\/10.1016\/j.knosys.2021.107388. https:\/\/www.sciencedirect.com\/science\/article\/pii\/S095070512100650X (ISSN 0950-7051)","DOI":"10.1016\/j.knosys.2021.107388"},{"key":"892_CR6","doi-asserted-by":"publisher","unstructured":"Doughty H, Damen D, Mayol-Cuevas W (2018) Who\u2019s better? who\u2019s best? pairwise deep ranking for skill determination. In: 2018 IEEE\/CVF conference on computer vision and pattern recognition, pp 6057\u20136066. https:\/\/doi.org\/10.1109\/CVPR.2018.00634","DOI":"10.1109\/CVPR.2018.00634"},{"issue":"2","key":"892_CR7","doi-asserted-by":"publisher","first-page":"203","DOI":"10.1175\/1520-0450(1981)020<0203:AACC>2.0.CO;2","volume":"20","author":"AJ Faller","year":"1981","unstructured":"Faller AJ (1981) An average correlation coefficient. J Appl Meteorol Climatol 20(2):203\u2013205. 
https:\/\/doi.org\/10.1175\/1520-0450(1981)020<0203:AACC>2.0.CO;2","journal-title":"J Appl Meteorol Climatol"},{"key":"892_CR8","doi-asserted-by":"crossref","unstructured":"Farabi S, Himel HH, Gazzali F, Hasan B, Kabir M, Farazi M et\u00a0al (2021) Improving action quality assessment using resnets and weighted aggregation. arXiv preprint arXiv:2102.10555","DOI":"10.1007\/978-3-031-04881-4_46"},{"issue":"1","key":"892_CR9","doi-asserted-by":"publisher","DOI":"10.1002\/rcs.1850","volume":"14","author":"J Fard Mahtab","year":"2018","unstructured":"Fard Mahtab J, Sattar A, Darin Ellis R, Chinnam Ratna B, Pandya Abhilash K, Klein Michael D (2018) Automated robot-assisted surgical skill evaluation: predictive analytics approach. Int J Med Robot Comp Assist Surg 14(1):e1850. https:\/\/doi.org\/10.1002\/rcs.1850","journal-title":"Int J Med Robot Comp Assist Surg"},{"key":"892_CR10","doi-asserted-by":"publisher","unstructured":"Feichtenhofer C, Fan H, Malik J, He K (2019) Slowfast networks for video recognition. In: 2019 IEEE\/CVF international conference on computer vision (ICCV), pp 6201\u20136210. https:\/\/doi.org\/10.1109\/ICCV.2019.00630","DOI":"10.1109\/ICCV.2019.00630"},{"key":"892_CR11","doi-asserted-by":"publisher","first-page":"222","DOI":"10.1007\/978-3-030-58577-8_14","volume-title":"Computer vision\u2013ECCV 2020","author":"J Gao","year":"2020","unstructured":"Gao J, Zheng W-S, Pan J-H, Gao C, Wang Y, Zeng W, Lai J (2020) An asymmetric modeling for action assessment. In: Vedaldi A, Bischof H, Brox T, Frahm J-M (eds) Computer vision\u2013ECCV 2020. Springer International Publishing, Cham, pp 222\u2013238 (ISBN 978-3-030-58577-8)"},{"key":"892_CR12","doi-asserted-by":"publisher","unstructured":"Hara K, Kataoka H, Satoh Y (2018) Can spatiotemporal 3d cnns retrace the history of 2d cnns and imagenet? In: 2018 IEEE\/CVF conference on computer vision and pattern recognition, pp 6546\u20136555. 
https:\/\/doi.org\/10.1109\/CVPR.2018.00685","DOI":"10.1109\/CVPR.2018.00685"},{"key":"892_CR13","doi-asserted-by":"publisher","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR), pp 770\u2013778. https:\/\/doi.org\/10.1109\/CVPR.2016.90","DOI":"10.1109\/CVPR.2016.90"},{"key":"892_CR14","unstructured":"Kingma DP, Ba JL (2015) Adam: a method for stochastic optimization. In: 3rd international conference on learning representations, San Diego, CA, USA"},{"key":"892_CR15","doi-asserted-by":"publisher","unstructured":"Lea C, Flynn MD, Vidal R, Reiter A, Hager GD (2017) Temporal convolutional networks for action segmentation and detection. In: 2017 IEEE conference on computer vision and pattern recognition (CVPR), pp 1003\u20131012. https:\/\/doi.org\/10.1109\/CVPR.2017.113","DOI":"10.1109\/CVPR.2017.113"},{"key":"892_CR16","doi-asserted-by":"publisher","DOI":"10.3390\/s19194129","author":"Q Lei","year":"2019","unstructured":"Lei Q, Du J-X, Zhang H-B, Ye S, Chen D-S (2019) A survey of vision-based human action evaluation methods. Sensors. https:\/\/doi.org\/10.3390\/s19194129 (ISSN 1424-8220)","journal-title":"Sensors"},{"key":"892_CR17","doi-asserted-by":"publisher","first-page":"125","DOI":"10.1007\/978-3-030-00767-6_12","volume-title":"Advances in multimedia information processing\u2013PCM 2018","author":"Y Li","year":"2018","unstructured":"Li Y, Chai X, Chen X (2018) End-to-end learning for action quality assessment. In: Hong R, Cheng W-H, Yamasaki T, Wang M, Ngo C-W (eds) Advances in multimedia information processing\u2013PCM 2018. Springer International Publishing, Cham, pp 125\u2013134 (ISBN 978-3-030-00767-6)"},{"key":"892_CR18","doi-asserted-by":"publisher","unstructured":"Li Y, Ji B, Shi X, Zhang J, Kang B, Wang L (2020) Tea: temporal excitation and aggregation for action recognition. 
In: 2020 IEEE\/CVF conference on computer vision and pattern recognition (CVPR), pp 906\u2013915. https:\/\/doi.org\/10.1109\/CVPR42600.2020.00099","DOI":"10.1109\/CVPR42600.2020.00099"},{"key":"892_CR19","doi-asserted-by":"publisher","unstructured":"Liu D, Li Q, Jiang T, Wang Y, Miao R, Shan F, Li Z (2021) Towards unified surgical skill assessment. In: 2021 IEEE\/CVF conference on computer vision and pattern recognition (CVPR), pp 9517\u20139526. https:\/\/doi.org\/10.1109\/CVPR46437.2021.00940","DOI":"10.1109\/CVPR46437.2021.00940"},{"key":"892_CR20","doi-asserted-by":"publisher","first-page":"138","DOI":"10.1007\/978-3-319-07521-1_15","volume-title":"Information processing in computer-assisted interventions","author":"A Malpani","year":"2014","unstructured":"Malpani A, Vedula SS, Chen CCG, Hager GD (2014) Pairwise comparison-based objective score for automated skill assessment of segments in a surgical task. In: Stoyanov D, Collins DL, Sakuma I, Abolmaesumi P, Jannin P (eds) Information processing in computer-assisted interventions. Springer International Publishing, Cham, pp 138\u2013147 (ISBN 978-3-319-07521-1)"},{"key":"892_CR21","doi-asserted-by":"publisher","unstructured":"Nekoui M, Tito\u00a0CFO, Cheng L (2021) Eagle-eye: extreme-pose action grader using detail bird\u2019s-eye view. In: 2021 IEEE winter conference on applications of computer vision (WACV), pp 394\u2013402. https:\/\/doi.org\/10.1109\/WACV48630.2021.00044","DOI":"10.1109\/WACV48630.2021.00044"},{"key":"892_CR22","doi-asserted-by":"publisher","unstructured":"Pan J-H, Gao J, Zheng W-S (2019) Action assessment by joint relation graphs. In: 2019 IEEE\/CVF international conference on computer vision (ICCV), pp 6330\u20136339. https:\/\/doi.org\/10.1109\/ICCV.2019.00643","DOI":"10.1109\/ICCV.2019.00643"},{"key":"892_CR23","doi-asserted-by":"publisher","unstructured":"Parmar P, Morris BT (2017) Learning to score olympic events. 
In: 2017 IEEE conference on computer vision and pattern recognition workshops (CVPRW), pp 76\u201384. https:\/\/doi.org\/10.1109\/CVPRW.2017.16","DOI":"10.1109\/CVPRW.2017.16"},{"key":"892_CR24","doi-asserted-by":"publisher","unstructured":"Parmar P, Morris B (2019) Action quality assessment across multiple actions. In: 2019 IEEE winter conference on applications of computer vision (WACV), pp 1468\u20131476. https:\/\/doi.org\/10.1109\/WACV.2019.00161","DOI":"10.1109\/WACV.2019.00161"},{"key":"892_CR25","doi-asserted-by":"publisher","unstructured":"Parmar P, Morris BT (2019) What and how well you performed? a multitask learning approach to action quality assessment. In: 2019 IEEE\/CVF conference on computer vision and pattern recognition (CVPR), pp 304\u2013313. https:\/\/doi.org\/10.1109\/CVPR.2019.00039","DOI":"10.1109\/CVPR.2019.00039"},{"key":"892_CR26","doi-asserted-by":"publisher","unstructured":"Parmar P, Reddy J, Morris B (2021) Piano skills assessment. In: 2021 IEEE 23rd international workshop on multimedia signal processing (MMSP), pp 1\u20135. https:\/\/doi.org\/10.1109\/MMSP53017.2021.9733638","DOI":"10.1109\/MMSP53017.2021.9733638"},{"key":"892_CR27","unstructured":"Paszke A, Gross S, Chintala S, Chanan G, Yang E, DeVito Z, Lin Z, Desmaison A, Antiga L, Lerer A (2017) Automatic differentiation in pytorch. In: NIPS 2017 workshop on Autodiff,.https:\/\/openreview.net\/forum?id=BJJsrmfCZ"},{"key":"892_CR28","doi-asserted-by":"publisher","first-page":"556","DOI":"10.1007\/978-3-319-10599-4_36","volume-title":"Computer vision\u2013ECCV 2014","author":"H Pirsiavash","year":"2014","unstructured":"Pirsiavash H, Vondrick C, Torralba A (2014) Assessing the quality of actions. In: Fleet D, Pajdla T, Schiele B, Tuytelaars T (eds) Computer vision\u2013ECCV 2014. 
Springer International Publishing, Cham, pp 556\u2013571 (ISBN 978-3-319-10599-4)"},{"key":"892_CR29","first-page":"435","volume-title":"Medical image computing and computer-assisted intervention\u2013MICCAI 2009","author":"CE Reiley","year":"2009","unstructured":"Reiley CE, Hager GD (2009) Task versus subtask surgical skill evaluation of robotic minimally invasive surgery. In: Yang G-Z, Hawkes D, Rueckert D, Noble A, Taylor C (eds) Medical image computing and computer-assisted intervention\u2013MICCAI 2009. Springer, Berlin Heidelberg, pp 435\u2013442 (ISBN 978-3-642-04268-3)"},{"key":"892_CR30","doi-asserted-by":"publisher","unstructured":"Roditakis K, Makris A, Argyros A (2021) Towards improved and interpretable action quality assessment with self-supervised alignment. In: The 14th PErvasive technologies related to assistive environments conference, PETRA 2021, pp 507-513. Association for Computing Machinery, New York, NY, USA. https:\/\/doi.org\/10.1145\/3453892.3461624. https:\/\/doi.org\/10.1145\/3453892.3461624 (ISBN 9781450387927)","DOI":"10.1145\/3453892.3461624"},{"key":"892_CR31","doi-asserted-by":"publisher","DOI":"10.3390\/s20185258","author":"F Sardari","year":"2020","unstructured":"Sardari F, Paiement A, Hannuna S, Mirmehdi M (2020) Vi-net-view-invariant quality of human movement assessment. Sensors. https:\/\/doi.org\/10.3390\/s20185258 (ISSN 1424-8220)","journal-title":"Sensors"},{"key":"892_CR32","doi-asserted-by":"publisher","unstructured":"Shi Q, Zhang H-B, Li Z, Du J-X, Lei Q, Liu J-H(2022) Shuffle-invariant network for action recognition in videos. ACM Trans. Multimedia Comput Commun Appl, 18(3). https:\/\/doi.org\/10.1145\/3485665. https:\/\/doi.org\/10.1145\/3485665. ISSN 1551-6857","DOI":"10.1145\/3485665"},{"key":"892_CR33","doi-asserted-by":"publisher","unstructured":"Tang Y, Ni Z, Zhou J, Zhang D, Lu J, Wu Y, Zhou J (2020) Uncertainty-aware score distribution learning for action quality assessment. 
In: 2020 IEEE\/CVF conference on computer vision and pattern recognition (CVPR), pp 9836\u20139845. https:\/\/doi.org\/10.1109\/CVPR42600.2020.00986","DOI":"10.1109\/CVPR42600.2020.00986"},{"key":"892_CR34","doi-asserted-by":"publisher","unstructured":"Tran D, Bourdev L, Fergus R, Torresani L, Paluri M (2015) Learning spatiotemporal features with 3d convolutional networks. In: 2015 IEEE international conference on computer vision (ICCV), pp 4489\u20134497. https:\/\/doi.org\/10.1109\/ICCV.2015.510","DOI":"10.1109\/ICCV.2015.510"},{"key":"892_CR35","doi-asserted-by":"crossref","unstructured":"Varadarajan B, Reiley C, Lin H, Khudanpur S, Hager G (2009) Data-derived models for segmentation with application to surgical assessment and training. In: G-Z Yang, D Hawkes, D Rueckert, A Noble, and C Taylor, editors, Medical image computing and computer-assisted intervention\u2014MICCAI, pp 426\u2013434. Springer, Berlin, Heidelberg (ISBN 978-3-642-04268-3)","DOI":"10.1007\/978-3-642-04268-3_53"},{"key":"892_CR36","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1007\/978-3-030-60639-8_1","volume-title":"Pattern recognition and computer vision","author":"J Wang","year":"2020","unstructured":"Wang J, Du Z, Li A, Wang Y (2020) Assessing action quality via attentive spatio-temporal convolutional networks. In: Peng Y, Liu Q, Lu H, Sun Z, Liu C, Chen X, Zha H, Yang J (eds) Pattern recognition and computer vision. Springer International Publishing, Cham, pp 3\u201316 (ISBN 978-3-030-60639-8)"},{"key":"892_CR37","doi-asserted-by":"publisher","first-page":"20","DOI":"10.1007\/978-3-319-46484-8_2","volume-title":"Computer vision\u2013ECCV 2016","author":"L Wang","year":"2016","unstructured":"Wang L, Xiong Y, Wang Z, Qiao Y, Lin D, Tang X, Van Gool L (2016) Temporal segment networks: towards good practices for deep action recognition. In: Leibe B, Matas J, Sebe N, Welling M (eds) Computer vision\u2013ECCV 2016. 
Springer International Publishing, Cham, pp 20\u201336 (ISBN 978-3-319-46484-8)"},{"key":"892_CR38","doi-asserted-by":"publisher","first-page":"668","DOI":"10.1007\/978-3-030-59716-0_64","volume-title":"Medical image computing and computer assisted intervention-MICCAI 2020","author":"T Wang","year":"2020","unstructured":"Wang T, Wang Y, Li M (2020) Towards accurate and interpretable surgical skill assessment: a video-based method incorporating recognized surgical gestures and skill levels. In: Martel AL, Abolmaesumi P, Stoyanov D, Mateus D, Zuluaga MA, Zhou SK, Racoceanu D, Joskowicz L (eds) Medical image computing and computer assisted intervention-MICCAI 2020. Springer International Publishing, Cham, pp 668\u2013678 (ISBN 978-3-030-59716-0)"},{"issue":"7","key":"892_CR39","doi-asserted-by":"publisher","first-page":"4820","DOI":"10.1109\/TII.2021.3129629","volume":"18","author":"Micha\u0142 Wieczorek","year":"2021","unstructured":"Wieczorek Micha\u0142, Si\u0142ka Jakub, Wo\u017aniak Marcin, Garg Sahil, Hassan Mohammad\u00a0Mehedi (2021) Lightweight convolutional neural network model for human face detection in risk situations. IEEE Trans Ind Inf 18(7):4820\u20134829","journal-title":"IEEE Trans Ind Inf"},{"key":"892_CR40","doi-asserted-by":"publisher","unstructured":"Xiang X, Tian Y, Reiter A, Hager GD, Tran TD (2018) S3d: stacking segmental p3d for action quality assessment. In: 2018 25th IEEE international conference on image processing (ICIP), pp 928\u2013932. https:\/\/doi.org\/10.1109\/ICIP.2018.8451364","DOI":"10.1109\/ICIP.2018.8451364"},{"key":"892_CR41","doi-asserted-by":"crossref","unstructured":"Yan G, Wo\u017aniak M (2022) Accurate key frame extraction algorithm of video action for aerobics online teaching. 
In: Mobile networks and applications, pp 1\u201310","DOI":"10.1007\/s11036-022-01939-1"},{"key":"892_CR42","doi-asserted-by":"crossref","unstructured":"Yan S, Xiong Y, Lin D (2018) Spatial temporal graph convolutional networks for skeleton-based action recognition. In: Proceedings of the thirty-second AAAI conference on artificial intelligence and thirtieth innovative applications of artificial intelligence conference and eighth AAAI symposium on educational advances in artificial intelligence, AAAI\u201918\/IAAI\u201918\/EAAI\u201918. AAAI Press (ISBN 978-1-57735-800-8)","DOI":"10.1609\/aaai.v32i1.12328"},{"key":"892_CR43","doi-asserted-by":"publisher","unstructured":"Yang C, Xu Y, Shi J, Dai B, Zhou B (2020) Temporal pyramid network for action recognition. In: 2020 IEEE\/CVF conference on computer vision and pattern recognition (CVPR), pp 588\u2013597. https:\/\/doi.org\/10.1109\/CVPR42600.2020.00067","DOI":"10.1109\/CVPR42600.2020.00067"},{"key":"892_CR44","doi-asserted-by":"crossref","unstructured":"Zeng L-A, Hong F-T, Zheng W-S, Yu Q-Z, Zeng W, Wang Y-W, Lai J-H (2020) Hybrid dynamic-static context-aware attention network for action assessment in long videos. In: Proceedings of the 28th ACM international conference on multimedia. Association for Computing Machinery, New York, NY, USA, pp 2526\u20132534. 
https:\/\/doi.org\/10.1145\/3394171.3413560 (ISBN 9781450379885)","DOI":"10.1145\/3394171.3413560"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00892-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-022-00892-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00892-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,4,18]],"date-time":"2023-04-18T09:42:46Z","timestamp":1681810966000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-022-00892-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,27]]},"references-count":44,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2023,4]]}},"alternative-id":["892"],"URL":"https:\/\/doi.org\/10.1007\/s40747-022-00892-6","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"value":"2199-4536","type":"print"},{"value":"2198-6053","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,10,27]]},"assertion":[{"value":"17 August 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 October 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"27 October 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"On behalf of all authors, the corresponding author states that there is no conflict of 
interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}