{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,24]],"date-time":"2026-02-24T18:33:22Z","timestamp":1771958002872,"version":"3.50.1"},"reference-count":62,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2022,8,23]],"date-time":"2022-08-23T00:00:00Z","timestamp":1661212800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,8,23]],"date-time":"2022-08-23T00:00:00Z","timestamp":1661212800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100002345","name":"Eberhard Karls Universit\u00e4t T\u00fcbingen","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100002345","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int J Data Sci Anal"],"published-print":{"date-parts":[[2023,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Although deep neural networks (DNNs) constitute the state of the art in many tasks based on visual, audio, or text data, their performance on heterogeneous, tabular data is typically inferior to that of decision tree ensembles. To bridge the gap between the difficulty of DNNs to handle tabular data and leverage the flexibility of deep learning under input heterogeneity, we propose<jats:italic>DeepTLF<\/jats:italic>, a framework for deep tabular learning. The core idea of our method is to transform the heterogeneous input data into homogeneous data to boost the performance of DNNs considerably. 
For the transformation step, we develop a novel knowledge distillation approach,<jats:italic>TreeDrivenEncoder<\/jats:italic>, which exploits the structure of decision trees trained on the available heterogeneous data to map the original input vectors onto homogeneous vectors that a DNN can use to improve the predictive performance. Within the proposed framework, we also address the issue of multimodal learning, since it is challenging to apply decision tree ensemble methods when other data modalities are present. Through extensive and challenging experiments on various real-world datasets, we demonstrate that the DeepTLF pipeline leads to higher predictive performance. On average, our framework shows 19.6% performance improvement in comparison to DNNs. The DeepTLF code is<jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/unnir\/DeepTLF\">publicly available<\/jats:ext-link>.<\/jats:p>","DOI":"10.1007\/s41060-022-00350-z","type":"journal-article","created":{"date-parts":[[2022,8,23]],"date-time":"2022-08-23T04:02:41Z","timestamp":1661227361000},"page":"85-100","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":30,"title":["DeepTLF: robust deep neural networks for heterogeneous tabular data"],"prefix":"10.1007","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-4889-9989","authenticated-orcid":false,"given":"Vadim","family":"Borisov","sequence":"first","affiliation":[]},{"given":"Klaus","family":"Broelemann","sequence":"additional","affiliation":[]},{"given":"Enkelejda","family":"Kasneci","sequence":"additional","affiliation":[]},{"given":"Gjergji","family":"Kasneci","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,8,23]]},"reference":[{"key":"350_CR1","doi-asserted-by":"crossref","unstructured":"Borisov, V., Leemann, T., Se\u00dfler, K., Haug, J., Pawelczyk, M., 
Kasneci, G.: Deep neural networks and tabular data: a survey. arXiv preprint arXiv:2110.01889 (2021)","DOI":"10.1109\/TNNLS.2022.3229161"},{"issue":"01","key":"350_CR2","first-page":"1","volume":"9","author":"M Fatima","year":"2017","unstructured":"Fatima, M., Pasha, M., et al.: Survey of machine learning algorithms for disease diagnostic. J. Intell. Learn. Syst. Appl. 9(01), 1 (2017)","journal-title":"J. Intell. Learn. Syst. Appl."},{"key":"350_CR3","doi-asserted-by":"publisher","DOI":"10.1016\/j.asoc.2020.106263","volume":"91","author":"X Dastile","year":"2020","unstructured":"Dastile, X., Celik, T., Potsane, M.: Statistical and machine learning models in credit scoring: a systematic literature survey. Appl. Soft Comput. 91, 106263 (2020)","journal-title":"Appl. Soft Comput."},{"issue":"2","key":"350_CR4","doi-asserted-by":"publisher","first-page":"1153","DOI":"10.1109\/COMST.2015.2494502","volume":"18","author":"AL Buczak","year":"2015","unstructured":"Buczak, A.L., Guven, E.: A survey of data mining and machine learning methods for cyber security intrusion detection. IEEE Commun. Surv. Tutor. 18(2), 1153\u20131176 (2015)","journal-title":"IEEE Commun. Surv. Tutor."},{"key":"350_CR5","unstructured":"Goodfellow, I., Bengio, Y., Courville, A.: Deep learning (2016). http:\/\/www.deeplearningbook.org"},{"key":"350_CR6","doi-asserted-by":"crossref","unstructured":"Shwartz-Ziv, R., Armon, A.: Tabular data: deep learning is not all you need (2021)","DOI":"10.1016\/j.inffus.2021.11.011"},{"key":"350_CR7","unstructured":"Mitchell, B.R., et\u00a0al.: The spatial inductive bias of deep learning. PhD thesis, Johns Hopkins University (2017)"},{"key":"350_CR8","unstructured":"Katzir, L., Elidan, G., El-Yaniv, R.: Net-DNF: effective deep modeling of tabular data. In: International Conference on Learning Representations (2020)"},{"key":"350_CR9","doi-asserted-by":"crossref","unstructured":"Garc\u00eda, S., Luengo, J., Herrera, F.: Data preprocessing in data mining, vol. 72. 
Springer, Cham, Switzerland (2015)","DOI":"10.1007\/978-3-319-10247-4"},{"key":"350_CR10","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s40537-020-00305-w","volume":"7","author":"JT Hancock","year":"2020","unstructured":"Hancock, J.T., Khoshgoftaar, T.M.: Survey on categorical data for neural networks. J. Big Data 7, 1\u201341 (2020)","journal-title":"J. Big Data"},{"key":"350_CR11","unstructured":"Gorishniy, Y., Rubachev, I., Babenko, A.: On embeddings for numerical features in tabular deep learning. arXiv preprint arXiv:2203.05556 (2022)"},{"key":"350_CR12","unstructured":"Nielsen, D.: Tree boosting with xgboost-why does xgboost win \u201cevery\u201d machine learning competition? Master\u2019s thesis, NTNU (2016)"},{"issue":"1","key":"350_CR13","doi-asserted-by":"publisher","first-page":"5","DOI":"10.1023\/A:1010933404324","volume":"45","author":"L Breiman","year":"2001","unstructured":"Breiman, L.: Random forests. Mach. Learn. 45(1), 5\u201332 (2001)","journal-title":"Mach. Learn."},{"issue":"4","key":"350_CR14","doi-asserted-by":"publisher","first-page":"367","DOI":"10.1016\/S0167-9473(01)00065-2","volume":"38","author":"JH Friedman","year":"2002","unstructured":"Friedman, J.H.: Stochastic gradient boosting. Comput. Stat. Data Anal. 38(4), 367\u2013378 (2002)","journal-title":"Comput. Stat. Data Anal."},{"key":"350_CR15","doi-asserted-by":"crossref","unstructured":"Chen, T., Guestrin, C.: XGBoost: a scalable tree boosting system. In: Proceedings of the 22nd Acm Sigkdd International Conference on Knowledge Discovery and Data Mining, pp. 785\u2013794 (2016)","DOI":"10.1145\/2939672.2939785"},{"key":"350_CR16","unstructured":"Ke, G., Meng, Q., Finley, T., Wang, T., Chen, W., Ma, W., Ye, Q., Liu, T.-Y.: Lightgbm: a highly efficient gradient boosting decision tree. In: Advances in Neural Information Processing Systems, pp. 
3146\u20133154 (2017)"},{"key":"350_CR17","unstructured":"Prokhorenkova, L., Gusev, G., Vorobev, A., Dorogush, A.V., Gulin, A.: CatBoost: unbiased boosting with categorical features. In: Advances in Neural Information Processing Systems, pp. 6638\u20136648 (2018)"},{"key":"350_CR18","doi-asserted-by":"crossref","unstructured":"Gu, K., Budhkar, A.: A package for learning on tabular and text data with transformers. In: Proceedings of the Third Workshop on Multimodal Artificial Intelligence, pp. 69\u201373 (2021)","DOI":"10.18653\/v1\/2021.maiworkshop-1.10"},{"key":"350_CR19","unstructured":"Arik, S.O., Pfister, T.: TabNet: attentive interpretable tabular learning. arXiv preprint arXiv:1908.07442 (2019)"},{"key":"350_CR20","unstructured":"Popov, S., Morozov, S., Babenko, A.: Neural oblivious decision ensembles for deep learning on tabular data. arXiv preprint arXiv:1909.06312 (2019)"},{"key":"350_CR21","doi-asserted-by":"crossref","unstructured":"Yin, P., Neubig, G., Yih, W.-T., Riedel, S.: Tabert: pretraining for joint understanding of textual and tabular data. arXiv preprint arXiv:2005.08314 (2020)","DOI":"10.18653\/v1\/2020.acl-main.745"},{"key":"350_CR22","unstructured":"Huang, X., Khetan, A., Cvitkovic, M., Karnin, Z.: Tabtransformer: tabular data modeling using contextual embeddings. arXiv preprint arXiv:2012.06678 (2020)"},{"key":"350_CR23","doi-asserted-by":"crossref","unstructured":"Guo, H., Tang, R., Ye, Y., Li, Z., He, X.: DeepFM: a factorization-machine based neural network for CTR prediction. arXiv preprint arXiv:1703.04247 (2017)","DOI":"10.24963\/ijcai.2017\/239"},{"key":"350_CR24","doi-asserted-by":"crossref","unstructured":"Ke, G., Xu, Z., Zhang, J., Bian, J., Liu, T.-Y.: DeepGBM: a deep learning framework distilled by GBDT for online prediction tasks. In: Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 
384\u2013394 (2019)","DOI":"10.1145\/3292500.3330858"},{"key":"350_CR25","doi-asserted-by":"crossref","unstructured":"He, X., Pan, J., Jin, O., Xu, T., Liu, B., Xu, T., Shi, Y., Atallah, A., Herbrich, R., Bowers, S., et\u00a0al.: Practical lessons from predicting clicks on ads at Facebook. In: Proceedings of the Eighth International Workshop on Data Mining for Online Advertising, pp. 1\u20139 (2014)","DOI":"10.1145\/2648584.2648589"},{"key":"350_CR26","unstructured":"Shavitt, I., Segal, E.: Regularization learning networks: deep learning for tabular datasets. In: Advances in Neural Information Processing Systems, pp. 1379\u20131389 (2018)"},{"key":"350_CR27","first-page":"11033","volume":"33","author":"J Yoon","year":"2020","unstructured":"Yoon, J., Zhang, Y., Jordon, J., van der Schaar, M.: VIME: extending the success of self- and semi-supervised learning to tabular domain. Adv. Neural Inf. Process. Syst. 33, 11033\u201311043 (2020)","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"350_CR28","doi-asserted-by":"crossref","unstructured":"Padhi, I., Schiff, Y., Melnyk, I., Rigotti, M., Mroueh, Y., Dognin, P., Ross, J., Nair, R., Altman, E.: Tabular transformers for modeling multivariate time series. arXiv preprint arXiv:2011.01843 (2020)","DOI":"10.1109\/ICASSP39728.2021.9414142"},{"key":"350_CR29","unstructured":"Levy, E., Mathov, Y., Katzir, Z., Shabtai, A., Elovici, Y.: Not all datasets are born equal: on heterogeneous data and adversarial examples. arXiv preprint arXiv:2010.03180 (2020)"},{"key":"350_CR30","unstructured":"Ballet, V., Renard, X., Aigrain, J., Laugel, T., Frossard, P., Detyniecki, M.: Imperceptible adversarial attacks on tabular data. arXiv preprint arXiv:1911.03274 (2019)"},{"key":"350_CR31","unstructured":"Akrami, H., Aydore, S., Leahy, R.M., Joshi, A.A.: Robust variational autoencoder for tabular data with beta divergence. 
arXiv preprint arXiv:2006.08204 (2020)"},{"key":"350_CR32","doi-asserted-by":"crossref","unstructured":"Gupta, K., Pesquet-Popescu, B., Kaakai, F., Pesquet, J.-C.: A quantitative analysis of the robustness of neural networks for tabular data. In: ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 8057\u20138061 (2021). IEEE","DOI":"10.1109\/ICASSP39728.2021.9413858"},{"key":"350_CR33","doi-asserted-by":"crossref","unstructured":"Rendle, S.: Factorization machines. In: 2010 IEEE International Conference on Data Mining, pp. 995\u20131000 (2010). IEEE","DOI":"10.1109\/ICDM.2010.127"},{"key":"350_CR34","doi-asserted-by":"crossref","unstructured":"Lian, J., Zhou, X., Zhang, F., Chen, Z., Xie, X., Sun, G.: xDeepFM: combining explicit and implicit feature interactions for recommender systems. In: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, pp. 1754\u20131763 (2018)","DOI":"10.1145\/3219819.3220023"},{"key":"350_CR35","doi-asserted-by":"crossref","unstructured":"Rota\u00a0Bulo, S., Kontschieder, P.: Neural decision forests for semantic image labelling. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 81\u201388 (2014)","DOI":"10.1109\/CVPR.2014.18"},{"key":"350_CR36","unstructured":"Denoyer, L., Gallinari, P.: Deep sequential neural network. arXiv preprint arXiv:1410.0510 (2014)"},{"key":"350_CR37","doi-asserted-by":"crossref","unstructured":"Wang, S., Aggarwal, C., Liu, H.: Using a random forest to inspire a neural network and improving on it. In: Proceedings of the 2017 SIAM International Conference on Data Mining, pp. 1\u20139 (2017). SIAM","DOI":"10.1137\/1.9781611974973.1"},{"key":"350_CR38","doi-asserted-by":"crossref","unstructured":"Peters, B., Niculae, V., Martins, A.F.: Sparse sequence-to-sequence models. 
arXiv preprint arXiv:1905.05702 (2019)","DOI":"10.18653\/v1\/P19-1146"},{"key":"350_CR39","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, L., Polosukhin, I.: Attention is all you need. arXiv preprint arXiv:1706.03762 (2017)"},{"key":"350_CR40","doi-asserted-by":"crossref","unstructured":"Baylor, D., Breck, E., Cheng, H.-T., Fiedel, N., Foo, C.Y., Haque, Z., Haykal, S., Ispir, M., Jain, V., Koc, L., et\u00a0al.: Tfx: a tensorflow-based production-scale machine learning platform. In: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1387\u20131395 (2017)","DOI":"10.1145\/3097983.3098021"},{"key":"350_CR41","doi-asserted-by":"crossref","unstructured":"Moosmann, F., Triggs, B., Jurie, F.: Fast discriminative visual codebooks using randomized clustering forests. In: Advances in Neural Information Processing Systems, pp. 985\u2013992 (2007)","DOI":"10.7551\/mitpress\/7503.003.0128"},{"issue":"1","key":"350_CR42","doi-asserted-by":"publisher","first-page":"3","DOI":"10.1007\/s10994-006-6226-1","volume":"63","author":"P Geurts","year":"2006","unstructured":"Geurts, P., Ernst, D., Wehenkel, L.: Extremely randomized trees. Mach. Learn. 63(1), 3\u201342 (2006)","journal-title":"Mach. Learn."},{"key":"350_CR43","doi-asserted-by":"crossref","unstructured":"Medvedev, D., D\u2019yakonov, A.: New properties of the data distillation method when working with tabular data. arXiv preprint arXiv:2010.09839 (2020)","DOI":"10.1007\/978-3-030-72610-2_29"},{"key":"350_CR44","unstructured":"Bruch, S., Pfeifer, J., Guillame-bert, M.: Learning representations for axis-aligned decision forests through input perturbation. 
arXiv preprint arXiv:2007.14761 (2020)"},{"key":"350_CR45","first-page":"3592","volume":"33","author":"T Pedapati","year":"2020","unstructured":"Pedapati, T., Balakrishnan, A., Shanmugam, K., Dhurandhar, A.: Learning global transparent models consistent with local contrastive explanations. Adv. Neural Inf. Process. Syst. 33, 3592\u20133602 (2020)","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"350_CR46","unstructured":"Ngiam, J., Khosla, A., Kim, M., Nam, J., Lee, H., Ng, A.Y.: Multimodal deep learning. In: ICML (2011)"},{"issue":"6","key":"350_CR47","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s00138-021-01249-8","volume":"32","author":"SY Boulahia","year":"2021","unstructured":"Boulahia, S.Y., Amamra, A., Madi, M.R., Daikh, S.: Early, intermediate and late fusion strategies for robust deep learning-based multimodal action recognition. Mach. Vis. Appl. 32(6), 1\u201318 (2021)","journal-title":"Mach. Vis. Appl."},{"key":"350_CR48","doi-asserted-by":"crossref","unstructured":"Ma, M., Ren, J., Zhao, L., Testuggine, D., Peng, X.: Are multimodal transformers robust to missing modality? In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 18177\u201318186 (2022)","DOI":"10.1109\/CVPR52688.2022.01764"},{"key":"350_CR49","first-page":"14200","volume":"34","author":"A Nagrani","year":"2021","unstructured":"Nagrani, A., Yang, S., Arnab, A., Jansen, A., Schmid, C., Sun, C.: Attention bottlenecks for multimodal fusion. Adv. Neural Inf. Process. Syst. 34, 14200\u201314213 (2021)","journal-title":"Adv. Neural Inf. Process. 
Syst."},{"key":"350_CR50","doi-asserted-by":"crossref","unstructured":"Fix, E.: Discriminatory analysis: nonparametric discrimination, consistency properties (1951)","DOI":"10.1037\/e471672008-001"},{"issue":"56","key":"350_CR51","first-page":"1929","volume":"15","author":"N Srivastava","year":"2014","unstructured":"Srivastava, N., Hinton, G., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.: Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res. 15(56), 1929\u20131958 (2014)","journal-title":"J. Mach. Learn. Res."},{"issue":"7","key":"350_CR52","doi-asserted-by":"publisher","first-page":"1895","DOI":"10.1162\/089976698300017197","volume":"10","author":"TG Dietterich","year":"1998","unstructured":"Dietterich, T.G.: Approximate statistical tests for comparing supervised classification learning algorithms. Neural Comput. 10(7), 1895\u20131923 (1998)","journal-title":"Neural Comput."},{"key":"350_CR53","unstructured":"Van der Maaten, L., Hinton, G.: Visualizing data using t-SNE. J. Mach. Learn. Res. 9(11), 2579\u20132605 (2008)"},{"key":"350_CR54","unstructured":"Brooks, N.: Women\u2019s E-commerce clothing reviews. Data retrieved from Kaggle, https:\/\/www.kaggle.com\/datasets\/nicapotato\/womens-ecommerce-clothing-reviews (2018)"},{"key":"350_CR55","unstructured":"PetFinder.my: PetFinder.my adoption prediction. data retrieved from Kaggle, https:\/\/www.kaggle.com\/competitions\/petfinder-adoption-prediction (2019)"},{"key":"350_CR56","unstructured":"Paszke, A., Gross, S., Chintala, S., Chanan, G., Yang, E., DeVito, Z., Lin, Z., Desmaison, A., Antiga, L., Lerer, A.: Automatic differentiation in pytorch. In: NIPS-W (2017)"},{"issue":"1","key":"350_CR57","doi-asserted-by":"publisher","first-page":"2","DOI":"10.3390\/technologies9010002","volume":"9","author":"A Jaiswal","year":"2021","unstructured":"Jaiswal, A., Babu, A.R., Zadeh, M.Z., Banerjee, D., Makedon, F.: A survey on contrastive self-supervised learning. 
Technologies 9(1), 2 (2021)","journal-title":"Technologies"},{"key":"350_CR58","doi-asserted-by":"crossref","unstructured":"Liu, F.T., Ting, K.M., Zhou, Z.-H.: Isolation forest. In: 2008 Eighth IEEE International Conference on Data Mining, pp. 413\u2013422 (2008). IEEE","DOI":"10.1109\/ICDM.2008.17"},{"key":"350_CR59","first-page":"2825","volume":"12","author":"F Pedregosa","year":"2011","unstructured":"Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., Blondel, M., Prettenhofer, P., Weiss, R., Dubourg, V., et al.: Scikit-learn: machine learning in python. J. Mach. Learn. Res. 12, 2825\u20132830 (2011)","journal-title":"J. Mach. Learn. Res."},{"key":"350_CR60","unstructured":"Zhuang, J., Tang, T., Ding, Y., Tatikonda, S., Dvornek, N., Papademetris, X., Duncan, J.S.: Adabelief optimizer: adapting stepsizes by the belief in observed gradients. arXiv preprint arXiv:2010.07468 (2020)"},{"key":"350_CR61","unstructured":"Ramachandran, P., Zoph, B., Le, Q.V.: Searching for activation functions. arXiv preprint arXiv:1710.05941 (2017)"},{"key":"350_CR62","unstructured":"Bergstra, J., Bardenet, R., Bengio, Y., K\u00e9gl, B.: Algorithms for hyper-parameter optimization. Neural Inf. Process. Syst. Found. 
24, 2546\u20132554 (2011)"}],"container-title":["International Journal of Data Science and Analytics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s41060-022-00350-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s41060-022-00350-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s41060-022-00350-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,11,26]],"date-time":"2023-11-26T02:12:59Z","timestamp":1700964779000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s41060-022-00350-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,8,23]]},"references-count":62,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2023,6]]}},"alternative-id":["350"],"URL":"https:\/\/doi.org\/10.1007\/s41060-022-00350-z","relation":{},"ISSN":["2364-415X","2364-4168"],"issn-type":[{"value":"2364-415X","type":"print"},{"value":"2364-4168","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,8,23]]},"assertion":[{"value":"8 February 2022","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"16 July 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"23 August 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"On behalf of all authors, the corresponding author states that there is no conflict of 
interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}