{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,16]],"date-time":"2026-01-16T03:16:16Z","timestamp":1768533376891,"version":"3.49.0"},"reference-count":51,"publisher":"Springer Science and Business Media LLC","issue":"9","license":[{"start":{"date-parts":[[2022,8,26]],"date-time":"2022-08-26T00:00:00Z","timestamp":1661472000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,8,26]],"date-time":"2022-08-26T00:00:00Z","timestamp":1661472000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100014440","name":"Ministerio de Ciencia, Innovaci\u00f3n y Universidades","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100014440","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003451","name":"Universidad del Pa\u00eds Vasco","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100003451","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Appl Intell"],"published-print":{"date-parts":[[2023,5]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>In recent years, estimating beauty of faces has attracted growing interest in the fields of computer vision and machine learning. This is due to the emergence of face beauty datasets (such as SCUT-FBP, SCUT-FBP5500 and KDEF-PT) and the prevalence of deep learning methods in many tasks. The goal of this work is to leverage the advances in Deep Learning architectures to provide stable and accurate face beauty estimation from static face images. To this end, our proposed approach has three main contributions. 
First, to deal with the complex high-level features associated with the FBP problem, we propose an architecture with two pre-trained Convolutional Neural Network (CNN) backbones (2B-IncRex). Second, we introduce a parabolic dynamic law that controls the behavior of the robust loss parameters (ParamSmoothL1, Huber, and Tukey) during training. Third, we propose an ensemble regression based on five regressors, namely ResNeXt-50, Inception-v3, and three regressors based on our proposed 2B-IncRex architecture; these models are trained with the following dynamic loss functions: Dynamic ParamSmoothL1, Dynamic Tukey, Dynamic ParamSmoothL1, Dynamic Huber, and Dynamic Tukey, respectively. To evaluate the performance of our approach, we used two datasets: SCUT-FBP5500 and KDEF-PT. SCUT-FBP5500 provides two evaluation scenarios defined by the database developers: a 60%-40% split and five-fold cross-validation. Our approach outperforms state-of-the-art methods on several metrics in both evaluation scenarios of SCUT-FBP5500. Moreover, experiments on the KDEF-PT dataset demonstrate the efficiency of our approach in estimating facial beauty via transfer learning, despite the presence of facial expressions and limited data. These comparisons highlight the effectiveness of the proposed solutions for FBP. 
They also show that the proposed Dynamic robust losses lead to more flexible and accurate estimators.<\/jats:p>","DOI":"10.1007\/s10489-022-03943-0","type":"journal-article","created":{"date-parts":[[2022,8,26]],"date-time":"2022-08-26T14:02:55Z","timestamp":1661522575000},"page":"10825-10842","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["CNN based facial aesthetics analysis through dynamic robust losses and ensemble regression"],"prefix":"10.1007","volume":"53","author":[{"given":"Fares","family":"Bougourzi","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-6581-9680","authenticated-orcid":false,"given":"Fadi","family":"Dornaika","sequence":"additional","affiliation":[]},{"given":"Nagore","family":"Barrena","sequence":"additional","affiliation":[]},{"given":"Cosimo","family":"Distante","sequence":"additional","affiliation":[]},{"given":"Abdelmalik","family":"Taleb-Ahmed","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,8,26]]},"reference":[{"issue":"3","key":"3943_CR1","doi-asserted-by":"publisher","first-page":"285","DOI":"10.1037\/h0033731","volume":"24","author":"K Dion","year":"1972","unstructured":"Dion K, Berscheid E, Walster E (1972) What is beautiful is good. J Personal Soc Psychol 24(3):285. Publisher: American Psychological Association","journal-title":"J Personal Soc Psychol"},{"key":"3943_CR2","doi-asserted-by":"publisher","first-page":"20245","DOI":"10.1109\/ACCESS.2020.2968837","volume":"8","author":"J Gan","year":"2020","unstructured":"Gan J, Xiang L, Zhai Y, Mai C, He G, Zeng J, Bai Z, Labati RD, Piuri V, Scotti F (2020) 2M BeautyNet: facial beauty prediction based on multi-task transfer learning. IEEE Access 8:20245\u201320256. 
Publisher: IEEE","journal-title":"IEEE Access"},{"issue":"1","key":"3943_CR3","doi-asserted-by":"publisher","first-page":"119","DOI":"10.1162\/089976606774841602","volume":"18","author":"Y Eisenthal","year":"2006","unstructured":"Eisenthal Y, Dror G, Ruppin E (2006) Facial attractiveness: Beauty and the machine. Neural Comput 18(1):119\u2013142. Publisher: MIT Press","journal-title":"Neural Comput"},{"key":"3943_CR4","doi-asserted-by":"crossref","unstructured":"Liu X, Li T, Peng H, Ouyang IC, Kim T, Wang R (2019) Understanding beauty via deep facial features. In: 2019 IEEE\/CVF Conference on computer vision and pattern recognition workshops (CVPRW), pp 246\u2013256","DOI":"10.1109\/CVPRW.2019.00034"},{"key":"3943_CR5","doi-asserted-by":"crossref","unstructured":"Alashkar T, Jiang S, Fu Y (2017) Rule-based facial makeup recommendation system. In: IEEE. 2017 12th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2017)","DOI":"10.1109\/FG.2017.47"},{"key":"3943_CR6","doi-asserted-by":"publisher","first-page":"184","DOI":"10.1016\/j.cviu.2014.04.006","volume":"125","author":"A Laurentini","year":"2014","unstructured":"Laurentini A, Bottino A (2014) Computer analysis of face beauty: A survey. Comput Vis Image Underst 125:184\u2013199. Publisher: Elsevier","journal-title":"Comput Vis Image Underst"},{"issue":"12","key":"3943_CR7","doi-asserted-by":"publisher","first-page":"2600","DOI":"10.1109\/TCYB.2014.2311033","volume":"44","author":"L Liang","year":"2014","unstructured":"Liang L, Jin L, Li X (2014) Facial skin beautification using adaptive region-aware masks. IEEE Trans Cybern 44(12):2600\u20132612. Publisher: IEEE","journal-title":"IEEE Trans Cybern"},{"key":"3943_CR8","doi-asserted-by":"crossref","unstructured":"Xu L, Fan H, Xiang J (2019) Hierarchical multi-task network for race, gender and facial attractiveness recognition. In: 2019 IEEE International Conference on Image Processing (ICIP), pp 3861\u20133865. 
IEEE","DOI":"10.1109\/ICIP.2019.8803614"},{"issue":"5","key":"3943_CR9","doi-asserted-by":"publisher","first-page":"1742","DOI":"10.3390\/s21051742","volume":"21","author":"E Vantaggiato","year":"2021","unstructured":"Vantaggiato E, Paladini E, Bougourzi F, Distante C, Hadid A, Taleb-Ahmed A (2021) COVID-19 recognition using ensemble-CNNs in two new chest X-ray databases. Sensors 21(5):1742. Publisher: Multidisciplinary Digital Publishing Institute. Accessed 2022-03-24","journal-title":"Sensors"},{"issue":"9","key":"3943_CR10","doi-asserted-by":"publisher","first-page":"189","DOI":"10.3390\/jimaging7090189","volume":"7","author":"F Bougourzi","year":"2021","unstructured":"Bougourzi F, Distante C, Ouafi A, Dornaika F, Hadid A, Taleb-Ahmed A (2021) Per-COVID-19: A Benchmark Dataset for COVID-19 Percentage Estimation from CT-Scans. Journal of Imaging 7 (9):189. https:\/\/doi.org\/10.3390\/jimaging7090189. Publisher: Multidisciplinary Digital Publishing Institute. Accessed 2022-03-24","journal-title":"Journal of Imaging"},{"key":"3943_CR11","doi-asserted-by":"crossref","unstructured":"Garrido MV, Prada M (2017) KDEF-PT: valence, emotional intensity, familiarity and attractiveness ratings of angry, neutral, and happy faces","DOI":"10.3389\/fpsyg.2017.02181"},{"issue":"3","key":"3943_CR12","doi-asserted-by":"publisher","first-page":"211","DOI":"10.1007\/s11263-015-0816-y","volume":"115","author":"O Russakovsky","year":"2015","unstructured":"Russakovsky O, Deng J, Su H, Krause J, Satheesh S, Ma S, Huang Z, Karpathy A, Khosla A, Bernstein M (2015) Imagenet large scale visual recognition challenge. Int J Comput Vis 115 (3):211\u2013252. Publisher: Springer","journal-title":"Int J Comput Vis"},{"key":"3943_CR13","doi-asserted-by":"crossref","unstructured":"Bottino A, Laurentini A (2010) The analysis of facial beauty: an emerging area of research in pattern analysis. In: International conference image analysis and recognition, pp 425\u2013435. 
Springer","DOI":"10.1007\/978-3-642-13772-3_43"},{"key":"3943_CR14","doi-asserted-by":"crossref","unstructured":"Gan J, Zhou L, Zhai Y (2015) A study for facial beauty prediction model. In: 2015 International conference on wavelet analysis and pattern recognition (ICWAPR), pp 8\u201313. IEEE","DOI":"10.1109\/ICWAPR.2015.7295918"},{"issue":"1-2","key":"3943_CR15","first-page":"179","volume":"18","author":"ME Rhazi","year":"2019","unstructured":"Rhazi ME, Zarghili A, Majda A, Bouzalmat A, Oufkir AA (2019) Facial beauty analysis by age and gender. Int J Intell Syst Technol Appl 18(1-2):179\u2013203","journal-title":"Int J Intell Syst Technol Appl"},{"key":"3943_CR16","doi-asserted-by":"crossref","unstructured":"Xie D, Liang L, Jin L, Xu J, Li M (2015) Scut-fbp: A benchmark dataset for facial beauty perception","DOI":"10.1109\/SMC.2015.319"},{"key":"3943_CR17","unstructured":"Xu L, Xiang J, Yuan X (2018) Transferring rich deep features for facial beauty prediction"},{"key":"3943_CR18","doi-asserted-by":"crossref","unstructured":"Liang L, Lin L, Jin L, Xie D, Li M (2018) SCUT-FBP5500: a diverse benchmark dataset for multi-paradigm facial beauty prediction","DOI":"10.1109\/ICPR.2018.8546038"},{"key":"3943_CR19","doi-asserted-by":"crossref","unstructured":"Gray D, Yu K, Xu W, Gong Y (2010) Predicting facial beauty without landmarks","DOI":"10.1007\/978-3-642-15567-3_32"},{"issue":"4","key":"3943_CR20","doi-asserted-by":"publisher","first-page":"940","DOI":"10.1016\/j.patcog.2010.10.013","volume":"44","author":"D Zhang","year":"2011","unstructured":"Zhang D, Zhao Q, Chen F (2011) Quantitative analysis of human facial beauty using geometric features. 
Pattern Recogn 44(4):940\u2013950","journal-title":"Pattern Recogn"},{"key":"3943_CR21","unstructured":"Aarabi P, Hughes D, Mohajer K, Emami M (2001) The automatic measurement of facial beauty"},{"key":"3943_CR22","doi-asserted-by":"publisher","first-page":"334","DOI":"10.1016\/j.neucom.2013.09.025","volume":"129","author":"H Yan","year":"2014","unstructured":"Yan H (2014) Cost-sensitive ordinal regression for fully automatic facial beauty assessment. Neurocomputing 129:334\u2013342. Publisher: Elsevier","journal-title":"Neurocomputing"},{"issue":"3","key":"3943_CR23","doi-asserted-by":"publisher","first-page":"1249","DOI":"10.1016\/j.patcog.2013.09.007","volume":"47","author":"W-C Chiang","year":"2014","unstructured":"Chiang W-C, Lin H-H, Huang C-S, Lo L-J, Wan S-Y (2014) The cluster assessment of facial attractiveness using fuzzy neural network classifier based on 3D Moir\u00e9 features. Pattern Recogn 47 (3):1249\u20131260. https:\/\/doi.org\/10.1016\/j.patcog.2013.09.007. Accessed 2021-01-23","journal-title":"Pattern Recogn"},{"issue":"6","key":"3943_CR24","doi-asserted-by":"publisher","first-page":"2326","DOI":"10.1016\/j.patcog.2011.11.024","volume":"45","author":"J Fan","year":"2012","unstructured":"Fan J, Chau KP, Wan X, Zhai L, Lau E (2012) Prediction of facial attractiveness from facial proportions. Pattern Recogn 45(6):2326\u20132334. https:\/\/doi.org\/10.1016\/j.patcog.2013.09.007. Accessed 2021-01-23","journal-title":"Pattern Recogn"},{"issue":"8","key":"3943_CR25","doi-asserted-by":"publisher","first-page":"391","DOI":"10.3390\/info11080391","volume":"11","author":"K Cao","year":"2020","unstructured":"Cao K, Choi K-n, Jung H, Duan L (2020) Deep learning for facial beauty prediction. Information 11(8):391. 
Publisher: Multidisciplinary Digital Publishing Institute","journal-title":"Information"},{"key":"3943_CR26","doi-asserted-by":"crossref","unstructured":"Lin L, Liang L, Jin L, Chen W (2019) Attribute-aware convolutional neural networks for facial beauty prediction. In: IJCAI, pp 847\u2013853","DOI":"10.24963\/ijcai.2019\/119"},{"key":"3943_CR27","doi-asserted-by":"crossref","unstructured":"Lin L, Liang L, Jin L (2019) Regression guided by relative ranking using convolutional neural network (R3CNN) for facial beauty prediction","DOI":"10.24963\/ijcai.2019\/119"},{"issue":"12","key":"3943_CR28","doi-asserted-by":"publisher","first-page":"2037","DOI":"10.1109\/TPAMI.2006.244","volume":"28","author":"T Ahonen","year":"2006","unstructured":"Ahonen T, Hadid A, Pietikainen M (2006) Face description with local binary patterns: Application to face recognition. IEEE Trans Pattern Anal Mach Intell 28(12):2037\u20132041","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"issue":"2","key":"3943_CR29","doi-asserted-by":"publisher","first-page":"91","DOI":"10.1023\/B:VISI.0000029664.99615.94","volume":"60","author":"DG Lowe","year":"2004","unstructured":"Lowe DG (2004) Distinctive image features from scale-invariant keypoints. Int J Comput Vis 60 (2):91\u2013110. Publisher: Springer","journal-title":"Int J Comput Vis"},{"key":"3943_CR30","doi-asserted-by":"crossref","unstructured":"Cao Z, Yin Q, Tang X, Sun J (2010) Face recognition with learning-based descriptor","DOI":"10.1109\/CVPR.2010.5539992"},{"key":"3943_CR31","unstructured":"Krizhevsky A, Sutskever I, Hinton GE (2012) Imagenet classification with deep convolutional neural networks. In: Advances in neural information processing systems, pp 1097\u20131105"},{"key":"3943_CR32","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"3943_CR33","doi-asserted-by":"crossref","unstructured":"Xie S, Girshick R, Doll\u00e1r P, Tu Z, He K (2017) Aggregated residual transformations for deep neural networks. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1492\u20131500","DOI":"10.1109\/CVPR.2017.634"},{"key":"3943_CR34","doi-asserted-by":"publisher","first-page":"103831","DOI":"10.1016\/j.engappai.2020.103831","volume":"95","author":"F Dornaika","year":"2020","unstructured":"Dornaika F, Moujahid A, Wang K, Feng X (2020) Efficient deep discriminant embedding: Application to face beauty prediction and classification. Eng Appl Artif Intell 95:103831. https:\/\/doi.org\/10.1016\/j.engappai.2020.103831. Accessed 2020-08-30","journal-title":"Eng Appl Artif Intell"},{"key":"3943_CR35","doi-asserted-by":"publisher","first-page":"112990","DOI":"10.1016\/j.eswa.2019.112990","volume":"142","author":"F Dornaika","year":"2020","unstructured":"Dornaika F, Wang K, Arganda-Carreras I, Elorza A, Moujahid A (2020) Toward graph-based semi-supervised face beauty prediction. Expert Syst Appl 142:112990. Publisher: Elsevier","journal-title":"Expert Syst Appl"},{"key":"3943_CR36","doi-asserted-by":"crossref","unstructured":"Szegedy C, Vanhoucke V, Ioffe S, Shlens J, Wojna Z (2016) Rethinking the inception architecture for computer vision. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 2818\u20132826","DOI":"10.1109\/CVPR.2016.308"},{"key":"3943_CR37","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J (2016) Deep residual learning for image recognition. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 770\u2013778","DOI":"10.1109\/CVPR.2016.90"},{"key":"3943_CR38","doi-asserted-by":"crossref","unstructured":"Szegedy C, Liu W, Jia Y, Sermanet P, Reed S, Anguelov D, Erhan D, Vanhoucke V, Rabinovich A (2015) Going deeper with convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 1\u20139","DOI":"10.1109\/CVPR.2015.7298594"},{"issue":"9","key":"3943_CR39","doi-asserted-by":"publisher","first-page":"1479","DOI":"10.1049\/iet-ipr.2018.6235","volume":"13","author":"F Bougourzi","year":"2019","unstructured":"Bougourzi F, Mokrani K, Ruichek Y, Dornaika F, Ouafi A, Taleb-Ahmed A (2019) Fusion of transformed shallow features for facial expression recognition. IET Image Process 13(9):1479\u20131489. https:\/\/doi.org\/10.1049\/iet-ipr.2018.6235. Publisher: IET Digital Library. Accessed 2020-10-19","journal-title":"IET Image Process"},{"key":"3943_CR40","doi-asserted-by":"publisher","first-page":"113459","DOI":"10.1016\/j.eswa.2020.113459","volume":"156","author":"F Bougourzi","year":"2020","unstructured":"Bougourzi F, Dornaika F, Mokrani K, Taleb-Ahmed A, Ruichek Y (2020) Fusing Transformed Deep and Shallow features (FTDS) for image-based facial expression recognition. Expert Syst Appl 156:113459","journal-title":"Expert Syst Appl"},{"issue":"Jul","key":"3943_CR41","first-page":"1755","volume":"10","author":"DE King","year":"2009","unstructured":"King DE (2009) Dlib-ml: A machine learning toolkit. J Mach Learn Res 10(Jul):1755\u20131758","journal-title":"J Mach Learn Res"},{"key":"3943_CR42","doi-asserted-by":"crossref","unstructured":"Girshick R (2015) Fast r-cnn. In: Proceedings of the IEEE International Conference on Computer Vision, pp 1440\u20131448","DOI":"10.1109\/ICCV.2015.169"},{"key":"3943_CR43","unstructured":"Loshchilov I, Hutter F (2017) SGDR: Stochastic gradient descent with warm restarts. 
In: International conference on learning representations"},{"key":"3943_CR44","doi-asserted-by":"crossref","unstructured":"Huber PJ (1992) Robust estimation of a location parameter","DOI":"10.1007\/978-1-4612-4380-9_35"},{"issue":"1","key":"3943_CR45","doi-asserted-by":"publisher","first-page":"57","DOI":"10.1007\/BF00131148","volume":"19","author":"MJ Black","year":"1996","unstructured":"Black MJ, Rangarajan A (1996) On the unification of line processes, outlier rejection, and robust statistics with applications in early vision. Int J Comput Vis 19(1):57\u201391. Publisher: Springer","journal-title":"Int J Comput Vis"},{"key":"3943_CR46","doi-asserted-by":"crossref","unstructured":"Belagiannis V, Rupprecht C, Carneiro G, Navab N (2015) Robust optimization for deep regression. In: Proceedings of the IEEE international conference on computer vision, pp 2830\u20132838","DOI":"10.1109\/ICCV.2015.324"},{"issue":"347-352","key":"3943_CR47","doi-asserted-by":"publisher","first-page":"240","DOI":"10.1098\/rspl.1895.0041","volume":"58","author":"K Pearson","year":"1895","unstructured":"Pearson K (1895) VII. Note on regression and inheritance in the case of two parents. Proc R Soc London 58(347-352):240\u2013242","journal-title":"Proc R Soc London"},{"key":"3943_CR48","unstructured":"Paszke A, Gross S, Massa F, Lerer A, Bradbury J, Chanan G, Killeen T, Lin Z, Gimelshein N, Antiga L (2019) PyTorch: An imperative style, high-performance deep learning library. In: Advances in neural information processing systems, pp 8026\u20138037"},{"key":"3943_CR49","unstructured":"Kingma DP, Ba J (2014) Adam: A method for stochastic optimization"},{"issue":"8","key":"3943_CR50","doi-asserted-by":"publisher","first-page":"2196","DOI":"10.1109\/TMM.2017.2780762","volume":"20","author":"Y-Y Fan","year":"2017","unstructured":"Fan Y-Y, Liu S, Li B, Guo Z, Samal A, Wan J, Li SZ (2017) Label distribution-based facial attractiveness computation by deep residual learning. 
IEEE Trans Multimedia 20(8):2196\u20132208","journal-title":"IEEE Trans Multimedia"},{"key":"3943_CR51","doi-asserted-by":"crossref","unstructured":"Xu J, Jin L, Liang L, Feng Z, Xie D, Mao H (2017) Facial attractiveness prediction using psychologically inspired convolutional neural network (PI-CNN)","DOI":"10.1109\/ICASSP.2017.7952438"}],"container-title":["Applied Intelligence"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-022-03943-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10489-022-03943-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10489-022-03943-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,5,19]],"date-time":"2023-05-19T11:05:21Z","timestamp":1684494321000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10489-022-03943-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,8,26]]},"references-count":51,"journal-issue":{"issue":"9","published-print":{"date-parts":[[2023,5]]}},"alternative-id":["3943"],"URL":"https:\/\/doi.org\/10.1007\/s10489-022-03943-0","relation":{},"ISSN":["0924-669X","1573-7497"],"issn-type":[{"value":"0924-669X","type":"print"},{"value":"1573-7497","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,8,26]]},"assertion":[{"value":"7 June 2022","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"26 August 2022","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}