{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,18]],"date-time":"2026-03-18T01:10:35Z","timestamp":1773796235421,"version":"3.50.1"},"reference-count":62,"publisher":"MDPI AG","issue":"20","license":[{"start":{"date-parts":[[2022,10,11]],"date-time":"2022-10-11T00:00:00Z","timestamp":1665446400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"the Projet r\u00e9gional Recherche Formation Innovation RFI WISE"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>In environment sound classification, log Mel band energies (MBEs) are considered as the most successful and commonly used features for classification. The underlying algorithm, fast Fourier transform (FFT), is valid under certain restrictions. In this study, we address these limitations of Fourier transform and propose a new method to extract log Mel band energies using amplitude modulation and frequency modulation. We present a comparative study between traditionally used log Mel band energy features extracted by Fourier transform and log Mel band energy features extracted by our new approach. This approach is based on extracting log Mel band energies from estimation of instantaneous frequency (IF) and instantaneous amplitude (IA), which are used to construct a spectrogram. The estimation of IA and IF is made by associating empirical mode decomposition (EMD) with the Teager\u2013Kaiser energy operator (TKEO) and the discrete energy separation algorithm. Later, Mel filter bank is applied to the estimated spectrogram to generate EMD-TKEO-based MBEs, or simply, EMD-MBEs. In addition, we employ the EMD method to remove signal trends from the original signal and generate another type of MBE, called S-MBEs, using FFT and a Mel filter bank. 
Four different datasets were utilised and convolutional neural networks (CNNs) were trained using features extracted from Fourier transform-based MBEs (FFT-MBEs), EMD-MBEs, and S-MBEs. In addition, CNNs were trained with an aggregation of all three feature extraction techniques and a combination of FFT-MBEs and EMD-MBEs. Individually, FFT-MBEs achieved higher accuracy than EMD-MBEs and S-MBEs. In general, the system trained with the combination of all three features performed slightly better than the systems trained with each feature separately.<\/jats:p>","DOI":"10.3390\/s22207717","type":"journal-article","created":{"date-parts":[[2022,10,12]],"date-time":"2022-10-12T02:10:27Z","timestamp":1665540627000},"page":"7717","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":11,"title":["Empirical Mode Decomposition-Based Feature Extraction for Environmental Sound Classification"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3018-8919","authenticated-orcid":false,"given":"Ammar","family":"Ahmed","sequence":"first","affiliation":[{"name":"Laboratoire d\u2019Acoustique de l\u2019Universit\u00e9 du Mans (LAUM), UMR 6613, Institut d\u2019Acoustique-Graduate School (IA-GS), CNRS, Le Mans Universit\u00e9, 72085 Le Mans, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9748-4560","authenticated-orcid":false,"given":"Youssef","family":"Serrestou","sequence":"additional","affiliation":[{"name":"Laboratoire d\u2019Acoustique de l\u2019Universit\u00e9 du Mans (LAUM), UMR 6613, Institut d\u2019Acoustique-Graduate School (IA-GS), CNRS, Le Mans Universit\u00e9, 72085 Le Mans, France"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9775-7485","authenticated-orcid":false,"given":"Kosai","family":"Raoof","sequence":"additional","affiliation":[{"name":"Laboratoire d\u2019Acoustique de l\u2019Universit\u00e9 du Mans (LAUM), UMR 6613, Institut d\u2019Acoustique-Graduate 
School (IA-GS), CNRS, Le Mans Universit\u00e9, 72085 Le Mans, France"}]},{"given":"Jean-Fran\u00e7ois","family":"Diouris","sequence":"additional","affiliation":[{"name":"CNRS, IETR UMR 6164, Universit\u00e9 de Nantes, 85000 La Roche-sur-Yon, France"}]}],"member":"1968","published-online":{"date-parts":[[2022,10,11]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"379","DOI":"10.1109\/TASLP.2017.2778423","article-title":"Detection and classification of acoustic scenes and events: Outcome of the DCASE 2016 challenge","volume":"26","author":"Mesaros","year":"2017","journal-title":"IEEE Trans. Audio Speech Lang. Process."},{"key":"ref_2","unstructured":"Plumbley, M.D., Kroos, C., Bello, J.P., Richard, G., Ellis, D.P., and Mesaros, A. (2018). Proceedings of the Detection and Classification of Acoustic Scenes and Events 2018 Workshop (DCASE2018), Tampere University of Technology."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1291","DOI":"10.1109\/TASLP.2017.2690575","article-title":"Convolutional Recurrent Neural Networks for Polyphonic Sound Event Detection","volume":"25","author":"Parascandolo","year":"2017","journal-title":"IEEE Trans. Audio Speech Lang. Process."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Cakir, E., Heittola, T., Huttunen, H., and Virtanen, T. (2015, January 12\u201317). Polyphonic sound event detection using multi label deep neural networks. Proceedings of the 2015 International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland.","DOI":"10.1109\/IJCNN.2015.7280624"},{"key":"ref_5","unstructured":"Iandola, F.N., Han, S., Moskewicz, M.W., Ashraf, K., Dally, W.J., and Keutzer, K. (2016). SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Hershey, S., Chaudhuri, S., Ellis, D.P., Gemmeke, J.F., Jansen, A., Moore, R.C., Plakal, M., Platt, D., Saurous, R.A., and Seybold, B. 
(2017, January 5\u20139). CNN architectures for large-scale audio classification. Proceedings of the 2017 IEEE International Conference On Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA.","DOI":"10.1109\/ICASSP.2017.7952132"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Zinemanas, P., Cancela, P., and Rocamora, M. (2019, January 8\u201312). End-to-end convolutional neural networks for sound event detection in urban environments. Proceedings of the 2019 24th Conference of Open Innovations Association (FRUCT), Moscow, Russia.","DOI":"10.23919\/FRUCT.2019.8711906"},{"key":"ref_8","unstructured":"Adavanne, S., Parascandolo, G., Pertil\u00e4, P., Heittola, T., and Virtanen, T. (2017). Sound event detection in multichannel audio using spatial and harmonic features. arXiv."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"153","DOI":"10.1016\/j.dsp.2007.12.004","article-title":"Time\u2013frequency feature representation using energy concentration: An overview of recent advances","volume":"19","author":"Jiang","year":"2009","journal-title":"Digital Signal Process."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"236","DOI":"10.1109\/TASSP.1984.1164317","article-title":"Signal estimation from modified short-time Fourier transform","volume":"32","author":"Griffin","year":"1984","journal-title":"IEEE Trans. Acoust. Speech Signal Process."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"55","DOI":"10.1109\/TASSP.1980.1163359","article-title":"Time-frequency representation of digital signals and systems based on short-time Fourier analysis","volume":"28","author":"Portnoff","year":"1980","journal-title":"IEEE Trans. Acoust. Speech Signal Process."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Shor, J., Jansen, A., Maor, R., Lang, O., Tuval, O., Quitry, F.d.C., Tagliasacchi, M., Shavitt, I., Emanuel, D., and Haviv, Y. (2020). 
Towards learning a universal non-semantic representation of speech. arXiv.","DOI":"10.21437\/Interspeech.2020-1242"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Drossos, K., Mimilakis, S.I., Gharib, S., Li, Y., and Virtanen, T. (2020, January 19\u201324). Sound event detection with depthwise separable and dilated convolutions. Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK.","DOI":"10.1109\/IJCNN48605.2020.9207532"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., and Rabinovich, A. (2015, January 7\u201312). Going deeper with convolutions. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Zhang, X., Zhou, X., Lin, M., and Sun, J. (2018, January 18\u201322). Shufflenet: An extremely efficient convolutional neural network for mobile devices. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00716"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Tsalera, E., Papadakis, A., and Samarakou, M. (2021). Comparison of Pre-Trained CNNs for Audio Classification Using Transfer Learning. J. Sens. Actuator Netw., 10.","DOI":"10.3390\/jsan10040072"},{"key":"ref_17","unstructured":"Titchmarsh, E.C. (1948). Introduction to the Theory of Fourier Integrals, Clarendon Press Oxford."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"903","DOI":"10.1098\/rspa.1998.0193","article-title":"The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis","volume":"454","author":"Huang","year":"1998","journal-title":"Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. 
Sci."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"297","DOI":"10.1090\/S0025-5718-1965-0178586-1","article-title":"An algorithm for the machine calculation of complex Fourier series","volume":"19","author":"Cooley","year":"1965","journal-title":"Math. Comput."},{"key":"ref_20","unstructured":"Ono, N., Harada, N., Kawaguchi, Y., Mesaros, A., Imoto, K., Koizumi, Y., and Komatsu, T. In Proceedings of the Fifth Workshop on Detection and Classification of Acoustic Scenes and Events (DCASE 2020), Tokyo, Japan, 2\u20134 November 2020."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Mesaros, A., Heittola, T., and Virtanen, T. (2018, January 17\u201320). Acoustic scene classification: An overview of DCASE 2017 challenge entries. Proceedings of the 2018 16th International Workshop on Acoustic Signal Enhancement (IWAENC), Tokyo, Japan.","DOI":"10.1109\/IWAENC.2018.8521242"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Mesaros, A., Heittola, T., and Virtanen, T. (2017, January 15\u201318). Assessment of human and machine performance in acoustic scene classification: Dcase 2016 case study. Proceedings of the 2017 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), New Paltz, NY, USA.","DOI":"10.1109\/WASPAA.2017.8170047"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"1733","DOI":"10.1109\/TMM.2015.2428998","article-title":"Detection and classification of acoustic scenes and events","volume":"17","author":"Stowell","year":"2015","journal-title":"IEEE Trans. Multimed."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"348","DOI":"10.1190\/1.1441816","article-title":"The form and nature of seismic waves and the structure of seismograms","volume":"5","author":"Ricker","year":"1940","journal-title":"Geophysics"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Wirsing, K. (2020). Time Frequency Analysis of Wavelet and Fourier Transform. 
Wavelet Theory, IntechOpen.","DOI":"10.5772\/intechopen.94521"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"385","DOI":"10.1029\/97RG00427","article-title":"Wavelet analysis for geophysical applications","volume":"35","author":"Kumar","year":"1997","journal-title":"Rev. Geophys."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Morlet, J. (1983). Sampling theory and wave propagation. Issues in Acoustic Signal\u2014Image Processing and Recognition, Springer.","DOI":"10.1007\/978-3-642-82002-1_12"},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"7","DOI":"10.5937\/engtoday2201007D","article-title":"Observer-based fault estimation in steer-by-wire vehicle","volume":"1","author":"Morato","year":"2022","journal-title":"Eng. Today"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"101088","DOI":"10.1016\/j.nahs.2021.101088","article-title":"Exponential stability of nonlinear state-dependent delayed impulsive systems with applications","volume":"42","author":"Xu","year":"2021","journal-title":"Nonlinear Anal. Hybrid Syst."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"451","DOI":"10.1121\/1.4837835","article-title":"Speech enhancement using empirical mode decomposition and the Teager\u2013Kaiser energy operator","volume":"135","author":"Khaldi","year":"2014","journal-title":"J. Acoust. Soc. Am."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"1919","DOI":"10.1007\/s40747-021-00295-z","article-title":"Emotion classification from speech signal based on empirical mode decomposition and non-linear features","volume":"7","author":"Krishnan","year":"2021","journal-title":"Complex Intell. Syst."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"De La Cruz, C., and Santhanam, B. (2016, January 6\u20139). A joint EMD and Teager-Kaiser energy approach towards normal and nasal speech analysis. 
Proceedings of the 2016 50th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA.","DOI":"10.1109\/ACSSC.2016.7869075"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"22","DOI":"10.1016\/j.specom.2019.09.002","article-title":"Automatic speech emotion recognition using an optimal combination of features based on EMD-TKEO","volume":"114","author":"Kerkeni","year":"2019","journal-title":"Speech Commun."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"17029","DOI":"10.1007\/s00521-021-06295-x","article-title":"GTCC-based BiLSTM deep-learning framework for respiratory sound classification using empirical mode decomposition","volume":"33","author":"Jayalakshmy","year":"2021","journal-title":"Neural Comput. Appl."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"3024","DOI":"10.1109\/78.277799","article-title":"Energy separation in signal modulations with application to speech analysis","volume":"41","author":"Maragos","year":"1993","journal-title":"IEEE Trans. Signal Process."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"95","DOI":"10.1016\/0165-1684(94)90169-4","article-title":"A comparison of the energy operator and the Hilbert transform approach to signal and speech demodulation","volume":"37","author":"Potamianos","year":"1994","journal-title":"Signal Process."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"39","DOI":"10.1016\/j.specom.2016.12.004","article-title":"Empirical mode decomposition for adaptive AM-FM analysis of speech: A review","volume":"88","author":"Sharma","year":"2017","journal-title":"Speech Commun."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Sethu, V., Ambikairajah, E., and Epps, J. (2008, January 12). Empirical mode decomposition based weighted frequency feature for speech-based emotion classification. 
Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA.","DOI":"10.1109\/ICASSP.2008.4518785"},{"key":"ref_39","unstructured":"Kaiser, J. (1990, January 3\u20136). On a simple algorithm to calculate the \u2018energy\u2019 of a signal. Proceedings of the International Conference on Acoustics, Speech, and Signal Processing, Albuquerque, NM, USA."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"338","DOI":"10.1016\/j.dsp.2018.03.010","article-title":"Teager\u2013Kaiser energy methods for signal and image analysis: A review","volume":"78","author":"Boudraa","year":"2018","journal-title":"Digital Signal Process."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"1532","DOI":"10.1109\/78.212729","article-title":"On amplitude and frequency demodulation using energy operators","volume":"41","author":"Maragos","year":"1993","journal-title":"IEEE Trans. Signal Process."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Kaiser, J.F. (1993, January 27\u201330). Some useful properties of Teager\u2019s energy operators. Proceedings of the 1993 IEEE International Conference On Acoustics, Speech, and Signal Processing, Minneapolis, MN, USA.","DOI":"10.1109\/ICASSP.1993.319457"},{"key":"ref_43","unstructured":"Bouchikhi, A. (2010). AM-FM Signal Analysis by Teager Huang Transform: Application to Underwater Acoustics. [Ph.D. Thesis, Universit\u00e9 Rennes 1]."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Maragos, P., Kaiser, J.F., and Quatieri, T.F. (1992, January 23\u201326). On separating amplitude from frequency modulations using energy operators. Proceedings of the 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP \u201992), San Francisco, CA, USA.","DOI":"10.1109\/ICASSP.1992.226135"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Li, X., Li, X., Zheng, X., and Zhang, D. (2010). 
EMD-TEO based speech emotion recognition. Life System Modeling and Intelligent Computing, Springer.","DOI":"10.1007\/978-3-642-15597-0_20"},{"key":"ref_46","unstructured":"Mesaros, A., Heittola, T., and Virtanen, T. (2018, January 19\u201320). A multi-device dataset for urban acoustic scene classification. Proceedings of the Detection and Classification of Acoustic Scenes and Events 2018 Workshop (DCASE2018), Surrey, UK."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Kumari, S., Roy, D., Cartwright, M., Bello, J.P., and Arora, A. (2019, January 24). EdgeL\u00b3: Compressing L\u00b3-Net for Mote Scale Urban Noise Monitoring. Proceedings of the 2019 IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), Rio de Janeiro, Brazil.","DOI":"10.1109\/IPDPSW.2019.00145"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Salamon, J., and Bello, J.P. (2015, January 19\u201324). Unsupervised feature learning for urban sound classification. Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, QLD, Australia.","DOI":"10.1109\/ICASSP.2015.7177954"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Piczak, K.J. (2015, January 17\u201320). Environmental sound classification with convolutional neural networks. Proceedings of the 2015 IEEE 25th International Workshop on Machine Learning for Signal Processing (MLSP), Boston, MA, USA.","DOI":"10.1109\/MLSP.2015.7324337"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Font, F., Roma, G., and Serra, X. (2013, January 21). Freesound Technical Demo. Proceedings of the 21st ACM International Conference on Multimedia (MM \u201913), New York, NY, USA.","DOI":"10.1145\/2502081.2502245"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Ahmed, A., Serrestou, Y., Raoof, K., and Diouris, J.F. (2021, January 14\u201315). 
Sound event classification using neural networks and feature selection based methods. Proceedings of the 2021 IEEE International Conference on Electro Information Technology (EIT), Mt. Pleasant, MI, USA.","DOI":"10.1109\/EIT51626.2021.9491869"},{"key":"ref_52","unstructured":"Sakashita, Y., and Aono, M. (2018, January 19\u201320). Acoustic scene classification by ensemble of spectrograms based on adaptive temporal divisions. Proceedings of the Detection and Classification of Acoustic Scenes and Events, Surrey, UK."},{"key":"ref_53","unstructured":"Dorfer, M., Lehner, B., Eghbal-zadeh, H., Christop, H., Fabian, P., and Gerhard, W. (2018, January 19\u201320). Acoustic scene classification with fully convolutional neural networks and I-vectors. Proceedings of the Detection and Classification of Acoustic Scenes and Events, Surrey, UK."},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Guo, J., Li, C., Sun, Z., Li, J., and Wang, P. (2022). A Deep Attention Model for Environmental Sound Classification from Multi-Feature Data. Appl. Sci., 12.","DOI":"10.3390\/app12125988"},{"key":"ref_55","unstructured":"Kingma, D.P., and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv."},{"key":"ref_56","first-page":"2121","article-title":"Adaptive subgradient methods for online learning and stochastic optimization","volume":"12","author":"Duchi","year":"2011","journal-title":"J. Mach. Learn. Res."},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"362","DOI":"10.1016\/j.sigpro.2013.09.013","article-title":"Partly ensemble empirical mode decomposition: An improved noise-assisted method for eliminating mode mixing","volume":"96","author":"Zheng","year":"2014","journal-title":"Signal Process."},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Xu, G., Yang, Z., and Wang, S. (2016, January 4\u20135). Study on mode mixing problem of empirical mode decomposition. 
Proceedings of the Joint International Information Technology, Mechanical and Electronic Engineering Conference, Xi\u2019an, China.","DOI":"10.2991\/jimec-16.2016.69"},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Gao, Y., Ge, G., Sheng, Z., and Sang, E. (2008, January 27\u201330). Analysis and solution to the mode mixing phenomenon in EMD. Proceedings of the 2008 Congress on Image and Signal Processing, Sanya, Hainan, China.","DOI":"10.1109\/CISP.2008.193"},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1142\/S1793536909000047","article-title":"Ensemble empirical mode decomposition: A noise-assisted data analysis method","volume":"1","author":"Wu","year":"2009","journal-title":"Adv. Adapt. Data Anal."},{"key":"ref_61","doi-asserted-by":"crossref","first-page":"170","DOI":"10.1016\/j.dsp.2013.08.004","article-title":"Low-complexity sinusoidal-assisted EMD (SAEMD) algorithms for solving mode-mixing problems in HHT","volume":"24","author":"Shen","year":"2014","journal-title":"Digital Signal Process."},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"248","DOI":"10.1016\/j.sigpro.2011.07.013","article-title":"Method for eliminating mode mixing of empirical mode decomposition based on the revised blind source separation","volume":"92","author":"Tang","year":"2012","journal-title":"Signal 
Process."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/20\/7717\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:50:03Z","timestamp":1760143803000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/20\/7717"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,11]]},"references-count":62,"journal-issue":{"issue":"20","published-online":{"date-parts":[[2022,10]]}},"alternative-id":["s22207717"],"URL":"https:\/\/doi.org\/10.3390\/s22207717","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,10,11]]}}}