{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,16]],"date-time":"2026-04-16T18:08:57Z","timestamp":1776362937797,"version":"3.51.2"},"reference-count":48,"publisher":"MDPI AG","issue":"18","license":[{"start":{"date-parts":[[2022,9,9]],"date-time":"2022-09-09T00:00:00Z","timestamp":1662681600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"the National Natural Science Foundation of China","award":["62071135"],"award-info":[{"award-number":["62071135"]}]},{"name":"the National Natural Science Foundation of China","award":["GuiKe AD20159018"],"award-info":[{"award-number":["GuiKe AD20159018"]}]},{"name":"the National Natural Science Foundation of China","award":["2020GXNSFAA159004"],"award-info":[{"award-number":["2020GXNSFAA159004"]}]},{"name":"the National Natural Science Foundation of China","award":["CRKL200104"],"award-info":[{"award-number":["CRKL200104"]}]},{"name":"the National Natural Science Foundation of China","award":["WRJ2016KF01"],"award-info":[{"award-number":["WRJ2016KF01"]}]},{"name":"Guangxi Technology Base and Talent Special Project","award":["62071135"],"award-info":[{"award-number":["62071135"]}]},{"name":"Guangxi Technology Base and Talent Special Project","award":["GuiKe AD20159018"],"award-info":[{"award-number":["GuiKe AD20159018"]}]},{"name":"Guangxi Technology Base and Talent Special Project","award":["2020GXNSFAA159004"],"award-info":[{"award-number":["2020GXNSFAA159004"]}]},{"name":"Guangxi Technology Base and Talent Special Project","award":["CRKL200104"],"award-info":[{"award-number":["CRKL200104"]}]},{"name":"Guangxi Technology Base and Talent Special Project","award":["WRJ2016KF01"],"award-info":[{"award-number":["WRJ2016KF01"]}]},{"name":"Guangxi Natural Science Foundation","award":["62071135"],"award-info":[{"award-number":["62071135"]}]},{"name":"Guangxi Natural Science Foundation","award":["GuiKe AD20159018"],"award-info":[{"award-number":["GuiKe AD20159018"]}]},{"name":"Guangxi Natural Science Foundation","award":["2020GXNSFAA159004"],"award-info":[{"award-number":["2020GXNSFAA159004"]}]},{"name":"Guangxi Natural Science Foundation","award":["CRKL200104"],"award-info":[{"award-number":["CRKL200104"]}]},{"name":"Guangxi Natural Science Foundation","award":["WRJ2016KF01"],"award-info":[{"award-number":["WRJ2016KF01"]}]},{"name":"Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education","award":["62071135"],"award-info":[{"award-number":["62071135"]}]},{"name":"Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education","award":["GuiKe AD20159018"],"award-info":[{"award-number":["GuiKe AD20159018"]}]},{"name":"Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education","award":["2020GXNSFAA159004"],"award-info":[{"award-number":["2020GXNSFAA159004"]}]},{"name":"Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education","award":["CRKL200104"],"award-info":[{"award-number":["CRKL200104"]}]},{"name":"Key Laboratory of Cognitive Radio and Information Processing, Ministry of Education","award":["WRJ2016KF01"],"award-info":[{"award-number":["WRJ2016KF01"]}]},{"name":"Guangxi Key Laboratory of UAV Remote Sensing","award":["62071135"],"award-info":[{"award-number":["62071135"]}]},{"name":"Guangxi Key Laboratory of UAV Remote Sensing","award":["GuiKe AD20159018"],"award-info":[{"award-number":["GuiKe 
AD20159018"]}]},{"name":"Guangxi Key Laboratory of UAV Remote Sensing","award":["2020GXNSFAA159004"],"award-info":[{"award-number":["2020GXNSFAA159004"]}]},{"name":"Guangxi Key Laboratory of UAV Remote Sensing","award":["CRKL200104"],"award-info":[{"award-number":["CRKL200104"]}]},{"name":"Guangxi Key Laboratory of UAV Remote Sensing","award":["WRJ2016KF01"],"award-info":[{"award-number":["WRJ2016KF01"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>The complexity of polyphonic sounds imposes numerous challenges on their classification. Especially in real life, polyphonic sound events have discontinuity and unstable time-frequency variations. Traditional single acoustic features cannot characterize the key feature information of the polyphonic sound event, and this deficiency results in poor model classification performance. In this paper, we propose a convolutional recurrent neural network model based on the temporal-frequency (TF) attention mechanism and feature space (FS) attention mechanism (TFFS-CRNN). The TFFS-CRNN model aggregates Log-Mel spectrograms and MFCCs feature as inputs, which contains the TF-attention module, the convolutional recurrent neural network (CRNN) module, the FS-attention module and the bidirectional gated recurrent unit (BGRU) module. In polyphonic sound events detection (SED), the TF-attention module can capture the critical temporal\u2013frequency features more capably. The FS-attention module assigns different dynamically learnable weights to different dimensions of features. The TFFS-CRNN model improves the characterization of features for key feature information in polyphonic SED. By using two attention modules, the model can focus on semantically relevant time frames, key frequency bands, and important feature spaces. Finally, the BGRU module learns contextual information. The experiments were conducted on the DCASE 2016 Task3 dataset and the DCASE 2017 Task3 dataset. Experimental results show that the F1-score of the TFFS-CRNN model improved 12.4% and 25.2% compared with winning system models in DCASE challenge; the ER is reduced by 0.41 and 0.37 as well. 
The proposed TFFS-CRNN model algorithm has better classification performance and lower ER in polyphonic SED.<\/jats:p>","DOI":"10.3390\/s22186818","type":"journal-article","created":{"date-parts":[[2022,9,9]],"date-time":"2022-09-09T04:54:41Z","timestamp":1662699281000},"page":"6818","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":11,"title":["Polyphonic Sound Event Detection Using Temporal-Frequency Attention and Feature Space Attention"],"prefix":"10.3390","volume":"22","author":[{"given":"Ye","family":"Jin","sequence":"first","affiliation":[{"name":"Ministry of Education Key Laboratory of Cognitive Radio and Information Processing, Guilin 541006, China"},{"name":"School of Information and Communication, Guilin University of Electronic Technology, Guilin 541006, China"}]},{"given":"Mei","family":"Wang","sequence":"additional","affiliation":[{"name":"Ministry of Education Key Laboratory of Cognitive Radio and Information Processing, Guilin 541006, China"},{"name":"School of Information Science & Engineering, Guilin University of Technology, Guilin 541006, China"}]},{"given":"Liyan","family":"Luo","sequence":"additional","affiliation":[{"name":"Ministry of Education Key Laboratory of Cognitive Radio and Information Processing, Guilin 541006, China"},{"name":"School of Information and Communication, Guilin University of Electronic Technology, Guilin 541006, China"}]},{"given":"Dinghao","family":"Zhao","sequence":"additional","affiliation":[{"name":"Ministry of Education Key Laboratory of Cognitive Radio and Information Processing, Guilin 541006, China"},{"name":"School of Information and Communication, Guilin University of Electronic Technology, Guilin 541006, China"}]},{"given":"Zhanqi","family":"Liu","sequence":"additional","affiliation":[{"name":"Ministry of Education Key Laboratory of Cognitive Radio and Information Processing, Guilin 541006, China"},{"name":"School of Information and Communication, Guilin University of Electronic Technology, Guilin 541006, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,9,9]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Baumann, J., Meyer, P., Lohrenz, T., Roy, A., Papendieck, M., and Fingscheidt, T. (2021, January 6\u20139). A New DCASE 2017 Rare Sound Event Detection Benchmark under Equal Training Data: CRNN with Multi-Width Kernels. Proceedings of the 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada.","DOI":"10.1109\/ICASSP39728.2021.9414254"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"89029","DOI":"10.1109\/ACCESS.2021.3088949","article-title":"A multi-resolution CRNN-based approach for semi-supervised sound event detection in DCASE 2020 challenge","volume":"9","author":"Ramos","year":"2021","journal-title":"IEEE Access"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"147900","DOI":"10.1109\/ACCESS.2021.3123970","article-title":"A System for the Detection of Polyphonic Sound on a University Campus Based on CapsNet-RNN","volume":"9","author":"Luo","year":"2021","journal-title":"IEEE Access"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"279","DOI":"10.1109\/TITS.2015.2470216","article-title":"Audio surveillance of roads: A system for detecting anomalous sounds","volume":"17","author":"Foggia","year":"2015","journal-title":"IEEE Trans. Intell. Transp. 
Syst."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"5537","DOI":"10.1007\/s11042-021-11817-9","article-title":"Anomalous sound event detection: A survey of machine learning based methods and applications","volume":"81","author":"Mnasri","year":"2022","journal-title":"Multimed. Tools Appl."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Aljshamee, M., Mousa, A.H., Omran, A.A., and Ahmed, S. (2020). Sound Signal Control on Home Appliances Using Android Smart-Phone, AIP Publishing LLC.","DOI":"10.1063\/5.0027437"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Serizel, R., Turpault, N., Shah, A., and Salamon, J. (2020, January 4\u20138). Sound Event Detection in Synthetic Domestic Environments. Proceedings of the ICASSP 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9054478"},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Chaudhary, M., Prakash, V., and Kumari, N. (2018, January 23\u201324). Identification vehicle movement detection in forest area using MFCC and KNN. Proceedings of the 2018 International Conference on System Modeling & Advancement in Research Trends (SMART), Moradabad, India.","DOI":"10.1109\/SYSMART.2018.8746936"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"61","DOI":"10.1016\/j.ecoinf.2016.08.006","article-title":"Identification of European woodpecker species in audio recordings from their drumming rolls","volume":"35","author":"Florentin","year":"2016","journal-title":"Ecol. Inform."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"209","DOI":"10.1109\/TNN.2002.806626","article-title":"Content-based audio classification and retrieval by support vector machines","volume":"14","author":"Guo","year":"2003","journal-title":"IEEE Trans. Neural Netw."},{"key":"ref_11","unstructured":"Heittola, T., Mesaros, A., Eronen, A., and Virtanen, T. (2010, January 23\u201327). Audio context recognition using audio event histograms. Proceedings of the 2010 18th European Signal Processing Conference, Aalborg, Denmark."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"1228","DOI":"10.1109\/JSTSP.2011.2146229","article-title":"Onset event decoding exploiting the rhythmic structure of polyphonic music","volume":"5","author":"Degara","year":"2011","journal-title":"IEEE J. Sel. Top. Signal Process."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Sidiropoulos, P., Mezaris, V., Kompatsiaris, I., Meinedo, H., Bugalho, M., and Trancoso, I. (2013). On the use of audio events for improving video scene segmentation. Analysis, Retrieval and Delivery of Multimedia Content, Springer.","DOI":"10.1007\/978-1-4614-3831-1_1"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Liu, Y., Tang, J., Song, Y., and Dai, L. (2018, January 12\u201315). A capsule based approach for polyphonic sound event detection. Proceedings of the 2018 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Honolulu, HI, USA.","DOI":"10.23919\/APSIPA.2018.8659533"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Parascandolo, G., Huttunen, H., and Virtanen, T. (2016, January 20\u201325). Recurrent neural networks for polyphonic sound event detection in real life recordings. 
Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China.","DOI":"10.1109\/ICASSP.2016.7472917"},{"key":"ref_16","unstructured":"Jeong, I.-Y., Lee, S., Han, Y., and Lee, K. (2017, January 16). Audio Event Detection Using Multiple-Input Convolutional Neural Network. Proceedings of the Detection and Classification of Acoustic Scenes and Events 2017, Munich, Germany."},{"key":"ref_17","unstructured":"Adavanne, S., and Virtanen, T. (2017). A report on sound event detection with different binaural features. arXiv."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Dinkel, H., and Yu, K. (2020, January 4\u20138). Duration robust weakly supervised sound event detection. Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9053459"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Imoto, K., Mishima, S., Arai, Y., and Kondo, R. (2021, January 6\u20137). Impact of sound duration and inactive frames on sound event detection performance. Proceedings of the 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Toronto, ON, Canada.","DOI":"10.1109\/ICASSP39728.2021.9414949"},{"key":"ref_20","unstructured":"Lim, H., Park, J.-S., and Han, Y. (2017, January 16). Rare Sound Event Detection Using 1D Convolutional Recurrent Neural Networks. Proceedings of the Detection and Classification of Acoustic Scenes and Events 2017, Munich, Germany."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"279","DOI":"10.1109\/LSP.2017.2657381","article-title":"Deep convolutional neural networks and data augmentation for environmental sound classification","volume":"24","author":"Salamon","year":"2017","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Zhang, X., Zou, Y., and Shi, W. (2017, January 23\u201325). Dilated convolution neural network with LeakyReLU for environmental sound classification. Proceedings of the 2017 22nd International Conference on Digital Signal Processing (DSP), London, UK.","DOI":"10.1109\/ICDSP.2017.8096153"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Phan, H., Hertel, L., Maass, M., and Mertins, A. (2016). Robust audio event recognition with 1-max pooling convolutional neural networks. arXiv.","DOI":"10.21437\/Interspeech.2016-123"},{"key":"ref_24","first-page":"141","article-title":"Convolutional recurrent neural networks for rare sound event detection","volume":"12","author":"Virtanen","year":"2019","journal-title":"Deep Neural Netw. Sound Event Detect."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Luo, Y., Chen, Z., and Yoshioka, T. (2020, January 4\u20138). Dual-path rnn: Efficient long sequence modeling for time-domain single-channel speech separation. Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9054266"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Wang, F., Jiang, M., Qian, C., Yang, S., Li, C., Zhang, H., Wang, X., and Tang, X. (2017). Residual attention network for image classification. arXiv.","DOI":"10.1109\/CVPR.2017.683"},{"key":"ref_27","unstructured":"Bahdanau, D., Cho, K., and Bengio, Y. (2014). Neural machine translation by jointly learning to align and translate. 
arXiv."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Yang, Z., Yang, D., Dyer, C., He, X., Smola, A., and Hovy, E. (2016, January 12\u201317). Hierarchical attention networks for document classification. Proceedings of the NAACL-HLT 2016, San Diego, CA, USA.","DOI":"10.18653\/v1\/N16-1174"},{"key":"ref_29","unstructured":"Chorowski, J.K., Bahdanau, D., Serdyuk, D., Cho, K., and Bengio, Y. (2015). Attention-based models for speech recognition. Adv. Neural Inf. Process. Syst., 28."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Chiu, C.-C., Sainath, T.N., Wu, Y., Prabhavalkar, R., Nguyen, P., Chen, Z., Kannan, A., Weiss, R.J., Rao, K., and Gonina, E. (2018, January 15\u201320). State-of-the-art speech recognition with sequence-to-sequence models. Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada.","DOI":"10.1109\/ICASSP.2018.8462105"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"896","DOI":"10.1016\/j.neucom.2020.08.069","article-title":"Attention based convolutional recurrent neural network for environmental sound classification","volume":"453","author":"Zhang","year":"2021","journal-title":"Neurocomputing"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"21552","DOI":"10.1038\/s41598-021-01045-4","article-title":"Environmental sound classification using temporal-frequency attention based convolutional neural network","volume":"11","author":"Mu","year":"2021","journal-title":"Sci. Rep."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"2615","DOI":"10.1109\/TNSRE.2020.3037326","article-title":"A multi-scale fusion convolutional neural network based on attention mechanism for the visualization analysis of EEG signals decoding","volume":"28","author":"Li","year":"2020","journal-title":"IEEE Trans. Neural Syst. Rehabil. Eng."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"2430","DOI":"10.1109\/TGRS.2020.3005431","article-title":"Hyperspectral image classification based on 3-D octave convolution with spatial\u2013spectral attention network","volume":"59","author":"Tang","year":"2020","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Xia, X., Pan, J., and Wang, Y. (2020, January 4\u20138). Audio Sound Determination Using Feature Space Attention Based Convolution Recurrent Neural Network. Proceedings of the ICASSP 2020\u20142020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain.","DOI":"10.1109\/ICASSP40776.2020.9054711"},{"key":"ref_36","first-page":"1","article-title":"Channel attention-based temporal convolutional network for satellite image time series classification","volume":"19","author":"Tang","year":"2021","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Shen, Y.-H., He, K.-X., and Zhang, W.-Q. (2018). Learning how to listen: A temporal-frequential attention model for sound event detection. arXiv.","DOI":"10.21437\/Interspeech.2019-2045"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Hu, J., Shen, L., and Sun, G. (2018, January 18\u201322). Squeeze-and-excitation networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00745"},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Li, X., Chebiyyam, V., and Kirchhoff, K. (2019). 
Multi-stream network with temporal attention for environmental sound classification. arXiv.","DOI":"10.21437\/Interspeech.2019-3019"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Mesaros, A., Heittola, T., and Virtanen, T. (2016). Metrics for polyphonic sound event detection. Appl. Sci., 6.","DOI":"10.3390\/app6060162"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1155\/2007\/48317","article-title":"A discriminative model for polyphonic piano transcription","volume":"2007","author":"Poliner","year":"2006","journal-title":"EURASIP J. Adv. Signal Process."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Mesaros, A., Heittola, T., and Virtanen, T. (September, January 29). TUT database for acoustic scene classification and sound event detection. Proceedings of the 2016 24th European Signal Processing Conference (EUSIPCO), Budapest, Hungary.","DOI":"10.1109\/EUSIPCO.2016.7760424"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"1291","DOI":"10.1109\/TASLP.2017.2690575","article-title":"Convolutional recurrent neural networks for polyphonic sound event detection","volume":"25","author":"Parascandolo","year":"2017","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"ref_44","unstructured":"Maas, A.L., Hannun, A.Y., and Ng, A.Y. (2022, September 02). Rectifier Nonlinearities Improve Neural Network Acoustic Models; Atlanta, GA, USA, 2013; Volume 30, p. 3. Available online: https:\/\/citeseerx.ist.psu.edu\/viewdoc\/download?doi=10.1.1.693.1422&rep=rep1&type=pdf."},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Jin, W., Liu, J., Feng, M., and Ren, J. (2022). Polyphonic Sound Event Detection Using Capsule Neural Network on Multi-Type-Multi-Scale Time-Frequency Representation, IEEE.","DOI":"10.1109\/SEAI55746.2022.9832286"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"294","DOI":"10.1109\/TASLP.2019.2953350","article-title":"Adaptive multi-scale detection of acoustic events","volume":"28","author":"Ding","year":"2019","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"103434","DOI":"10.1016\/j.dsp.2022.103434","article-title":"A capsule network with pixel-based attention and BGRU for sound event detection","volume":"123","author":"Meng","year":"2022","journal-title":"Digit. Signal Process."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Wang, M., Yao, Y., Qiu, H., and Song, X. (2022). Adaptive Memory-Controlled Self-Attention for Polyphonic Sound Event Detection. Symmetry, 14.","DOI":"10.3390\/sym14020366"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/18\/6818\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:28:01Z","timestamp":1760142481000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/18\/6818"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,9]]},"references-count":48,"journal-issue":{"issue":"18","published-online":{"date-parts":[[2022,9]]}},"alternative-id":["s22186818"],"URL":"https:\/\/doi.org\/10.3390\/s22186818","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,9,9]]}}}
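The abstract in the record above only summarizes the TFFS-CRNN pipeline (aggregated Log-Mel/MFCC input, a TF-attention module, a CNN front-end, an FS-attention module, a BGRU, and frame-level outputs). As a rough illustration of how such a pipeline is commonly wired together, here is a minimal PyTorch sketch. The module names (TFAttention, FSAttention, TFFSCRNN), the layer sizes, the exact attention formulations, and the choice of concatenating Log-Mel and MFCC features along the frequency axis are all assumptions made for this sketch, not the authors' implementation.

```python
# Minimal, illustrative PyTorch sketch of a TFFS-CRNN-style model for polyphonic SED,
# based only on the abstract above. Module names, layer sizes and the exact attention
# formulations are assumptions for illustration, NOT the authors' implementation.
import torch
import torch.nn as nn


class TFAttention(nn.Module):
    """Temporal-frequency attention: re-weights the input feature map along time and frequency."""

    def __init__(self, n_freq: int):
        super().__init__()
        self.freq_fc = nn.Linear(n_freq, n_freq)              # frequency-band attention
        self.time_conv = nn.Conv1d(n_freq, 1, kernel_size=1)  # time-frame attention

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, time, freq)
        freq_att = torch.sigmoid(self.freq_fc(x.mean(dim=2)))                   # (batch, 1, freq)
        time_att = torch.sigmoid(self.time_conv(x.squeeze(1).transpose(1, 2)))  # (batch, 1, time)
        return x * freq_att.unsqueeze(2) * time_att.transpose(1, 2).unsqueeze(1)


class FSAttention(nn.Module):
    """Feature-space attention: learns a weight per feature dimension, shared across time."""

    def __init__(self, n_features: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(n_features, n_features // 4), nn.ReLU(),
            nn.Linear(n_features // 4, n_features), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features)
        return x * self.fc(x.mean(dim=1)).unsqueeze(1)


class TFFSCRNN(nn.Module):
    """TF-attention -> CNN -> FS-attention -> BGRU -> frame-level multi-label outputs."""

    def __init__(self, n_freq: int = 64, n_classes: int = 6):
        super().__init__()
        self.tf_att = TFAttention(n_freq)
        self.cnn = nn.Sequential(  # conv blocks pool along frequency only, preserving time resolution
            nn.Conv2d(1, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d((1, 4)),
            nn.Conv2d(64, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d((1, 4)),
        )
        feat_dim = 64 * (n_freq // 16)
        self.fs_att = FSAttention(feat_dim)
        self.bgru = nn.GRU(feat_dim, 64, batch_first=True, bidirectional=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, time, freq); Log-Mel and MFCC features are assumed to be
        # concatenated along the frequency axis (e.g. 40 Log-Mel bands + 24 MFCCs = 64 bins)
        x = self.tf_att(x)
        x = self.cnn(x)                                 # (batch, 64, time, freq // 16)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)  # (batch, time, features)
        x = self.fs_att(x)
        x, _ = self.bgru(x)                             # (batch, time, 128)
        return torch.sigmoid(self.head(x))              # frame-level event probabilities


if __name__ == "__main__":
    model = TFFSCRNN(n_freq=64, n_classes=6)  # 6 event classes, as in DCASE 2017 Task 3
    clip = torch.randn(8, 1, 500, 64)         # 8 clips, 500 frames, 64 feature bins
    print(model(clip).shape)                  # torch.Size([8, 500, 6])
```

Thresholding the per-frame probabilities (e.g. at 0.5) yields the frame-level active-event decisions from which segment-based F1-score and ER, the metrics quoted in the abstract, are typically computed.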