{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,26]],"date-time":"2026-03-26T14:25:49Z","timestamp":1774535149616,"version":"3.50.1"},"reference-count":137,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2023,11,15]],"date-time":"2023-11-15T00:00:00Z","timestamp":1700006400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"European Regional Development Fund (ERDF)","award":["COMPETE 2020"],"award-info":[{"award-number":["COMPETE 2020"]}]},{"name":"European Regional Development Fund (ERDF)","award":["2021.08660.BD"],"award-info":[{"award-number":["2021.08660.BD"]}]},{"name":"Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia (FCT)","award":["COMPETE 2020"],"award-info":[{"award-number":["COMPETE 2020"]}]},{"name":"Funda\u00e7\u00e3o para a Ci\u00eancia e Tecnologia (FCT)","award":["2021.08660.BD"],"award-info":[{"award-number":["2021.08660.BD"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Information"],"abstract":"<jats:p>In this article, a hierarchical method for action recognition based on temporal and spatial features is proposed. In current HAR methods, camera movement, sensor movement, sudden scene changes, and scene movement can increase motion feature errors and decrease accuracy. Another important aspect to take into account in a HAR method is the required computational cost. The proposed method provides a preprocessing step to address these challenges. As a preprocessing step, the method uses optical flow to detect camera movements and shots in input video image sequences. In the temporal processing block, the optical flow technique is combined with the absolute value of frame differences to obtain a time saliency map. The detection of shots, cancellation of camera movement, and the building of a time saliency map minimise movement detection errors. 
The time saliency map is then passed to the spatial processing block to segment the moving persons and\/or objects in the scene. Because the search region for spatial processing is limited based on the temporal processing results, the computations in the spatial domain are drastically reduced. In the spatial processing block, the scene foreground is extracted in three steps: silhouette extraction, active contour segmentation, and colour segmentation. Key points are selected at the borders of the segmented foreground. The final features used are the intensity and angle of the optical flow at the detected key points. Using key point features for action detection reduces the computational cost of the classification step and the required training time. Finally, the features are submitted to a Recurrent Neural Network (RNN) to recognise the involved action. The proposed method was tested using four well-known action datasets: KTH, Weizmann, HMDB51, and UCF101, and its efficiency was evaluated. Since the proposed approach segments salient objects based on motion, edges, and colour features, it can be added as a preprocessing step to most current HAR systems to improve performance.<\/jats:p>","DOI":"10.3390\/info14110616","type":"journal-article","created":{"date-parts":[[2023,11,15]],"date-time":"2023-11-15T10:57:46Z","timestamp":1700045866000},"page":"616","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":10,"title":["Deep Learning Approach for Human Action Recognition Using a Time Saliency Map Based on Motion Features Considering Camera Movement and Shot in Video Image Sequences"],"prefix":"10.3390","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0863-1977","authenticated-orcid":false,"given":"Abdorreza","family":"Alavigharahbagh","sequence":"first","affiliation":[{"name":"Faculdade de Engenharia, Universidade do Porto, Rua Dr. 
Roberto Frias, s\/n, 4200-465 Porto, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0842-8250","authenticated-orcid":false,"given":"Vahid","family":"Hajihashemi","sequence":"additional","affiliation":[{"name":"Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, s\/n, 4200-465 Porto, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1094-0114","authenticated-orcid":false,"given":"Jos\u00e9 J. M.","family":"Machado","sequence":"additional","affiliation":[{"name":"Departamento de Engenharia Mec\u00e2nica, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, s\/n, 4200-465 Porto, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7603-6526","authenticated-orcid":false,"given":"Jo\u00e3o Manuel R. S.","family":"Tavares","sequence":"additional","affiliation":[{"name":"Departamento de Engenharia Mec\u00e2nica, Faculdade de Engenharia, Universidade do Porto, Rua Dr. Roberto Frias, s\/n, 4200-465 Porto, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2023,11,15]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Caetano, C., dos Santos, J.A., and Schwartz, W.R. (2016, January 4\u20138). Optical Flow Co-occurrence Matrices: A novel spatiotemporal feature descriptor. Proceedings of the 2016 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico.","DOI":"10.1109\/ICPR.2016.7899921"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Gupta, A., and Balan, M.S. (2018, January 1). Action recognition from optical flow visualizations. Proceedings of the 2nd International Conference on Computer Vision & Image Processing, Roorkee, India.","DOI":"10.1007\/978-981-10-7895-8_31"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Kumar, S.S., and John, M. (2016, January 24\u201327). Human activity recognition using optical flow based feature set. 
Proceedings of the 2016 IEEE International Carnahan Conference on Security Technology (ICCST), Orlando, FL, USA.","DOI":"10.1109\/CCST.2016.7815694"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"34141","DOI":"10.1007\/s11042-020-09194-w","article-title":"Action representation and recognition through temporal co-occurrence of flow fields and convolutional neural networks","volume":"79","author":"Rashwan","year":"2020","journal-title":"Multimed. Tools Appl."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"139","DOI":"10.1007\/s00138-018-0982-3","article-title":"Gait representation and recognition from temporal co-occurrence of flow fields","volume":"30","author":"Rashwan","year":"2019","journal-title":"Mach. Vis. Appl."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"94","DOI":"10.53106\/199115992021083204008","article-title":"Using Improved Dense Trajectory Feature to Realize Action Recognition","volume":"32","author":"Xu","year":"2021","journal-title":"J. Comput."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"1327","DOI":"10.1007\/s00371-020-01868-8","article-title":"Improved human action recognition approach based on two-stream convolutional neural network model","volume":"37","author":"Liu","year":"2021","journal-title":"Vis. Comput."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"012031","DOI":"10.1088\/1757-899X\/1042\/1\/012031","article-title":"Human action recognition using a novel deep learning approach","volume":"1042","author":"Kumar","year":"2021","journal-title":"Proc. Iop Conf. Ser. Mater. Sci. Eng."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1821","DOI":"10.1007\/s00371-020-01940-3","article-title":"Two-stream spatiotemporal feature fusion for human action recognition","volume":"37","author":"Abdelbaky","year":"2021","journal-title":"Vis. 
Comput."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"5267","DOI":"10.1007\/s00521-020-05297-5","article-title":"CGA: A new feature selection model for visual human action recognition","volume":"33","author":"Guha","year":"2021","journal-title":"Neural Comput. Appl."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"13079","DOI":"10.1007\/s00500-021-06149-7","article-title":"Human action recognition using a hybrid deep learning heuristic","volume":"25","author":"Dash","year":"2021","journal-title":"Soft Comput."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"35827","DOI":"10.1007\/s11042-020-09408-1","article-title":"A resource conscious human action recognition framework using 26-layered deep convolutional neural network","volume":"80","author":"Khan","year":"2021","journal-title":"Multimed. Tools Appl."},{"key":"ref_13","first-page":"447","article-title":"A new hybrid deep learning model for human action recognition","volume":"32","author":"Jaouedi","year":"2020","journal-title":"J. King Saud Univ.-Comput. Inf. Sci."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"323","DOI":"10.1016\/j.sigpro.2017.10.022","article-title":"Distinctive action sketch for human action recognition","volume":"144","author":"Zheng","year":"2018","journal-title":"Signal Process."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"8147","DOI":"10.1007\/s11042-020-10140-z","article-title":"Human action recognition using distance transform and entropy based features","volume":"80","author":"Ramya","year":"2021","journal-title":"Multimed. Tools Appl."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"24303","DOI":"10.1007\/s11042-021-10721-6","article-title":"A statistical framework for few-shot action recognition","volume":"80","author":"Haddad","year":"2021","journal-title":"Multimed. 
Tools Appl."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"29675","DOI":"10.1007\/s11042-021-11188-1","article-title":"Towards a deep human activity recognition approach based on video to image transformation with skeleton data","volume":"80","author":"Snoun","year":"2021","journal-title":"Multimed. Tools Appl."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"20019","DOI":"10.1007\/s11042-021-10636-2","article-title":"Human action recognition using three orthogonal planes with unsupervised deep convolutional neural network","volume":"80","author":"Abdelbaky","year":"2021","journal-title":"Multimed. Tools Appl."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"14230","DOI":"10.1007\/s11227-021-03827-z","article-title":"Human action recognition using high-order feature of optical flows","volume":"77","author":"Xia","year":"2021","journal-title":"J. Supercomput."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"1867","DOI":"10.1007\/s11554-020-01057-9","article-title":"A compact and recursive Riemannian motion descriptor for untrimmed activity recognition","volume":"18","author":"Manzanera","year":"2021","journal-title":"J. Real-Time Image Process."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"12192","DOI":"10.1007\/s11227-021-03772-x","article-title":"Applying TS-DBN model into sports behavior recognition with deep learning approach","volume":"77","author":"Guo","year":"2021","journal-title":"J. Supercomput."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"151","DOI":"10.1007\/s42979-021-00576-x","article-title":"Sparse deep LSTMs with convolutional attention for human action recognition","volume":"2","author":"Aghaei","year":"2021","journal-title":"SN Comput. 
Sci."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"3449","DOI":"10.1007\/s13042-021-01383-9","article-title":"Human activity recognition using pre-trained network with informative templates","volume":"12","author":"Zebhi","year":"2021","journal-title":"Int. J. Mach. Learn. Cybern."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"580","DOI":"10.1134\/S105466182103024X","article-title":"Action Recognition in Videos with Spatio-Temporal Fusion 3D Convolutional Neural Networks","volume":"31","author":"Wang","year":"2021","journal-title":"Pattern Recognit. Image Anal."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Khan, S., Khan, M.A., Alhaisoni, M., Tariq, U., Yong, H.S., Armghan, A., and Alenezi, F. (2021). Human action recognition: A paradigm of best deep learning features selection and serial based extended fusion. Sensors, 21.","DOI":"10.3390\/s21237941"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"136","DOI":"10.1016\/j.patrec.2021.06.003","article-title":"Scene image and human skeleton-based dual-stream human action recognition","volume":"148","author":"Xu","year":"2021","journal-title":"Pattern Recognit. Lett."},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"116399","DOI":"10.1016\/j.image.2021.116399","article-title":"Double constrained bag of words for human action recognition","volume":"98","author":"Wu","year":"2021","journal-title":"Signal Process. Image Commun."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"El-Assal, M., Tirilly, P., and Bilasco, I.M. (2021, January 28\u201330). A Study On the Effects of Pre-processing On Spatio-temporal Action Recognition Using Spiking Neural Networks Trained with STDP. Proceedings of the 2021 International Conference on Content-Based Multimedia Indexing (CBMI), Lille, France.","DOI":"10.1109\/CBMI50038.2021.9461922"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Boualia, S.N., and Amara, N.E.B. (2021, January 22\u201325). 
3D CNN for Human Action Recognition. Proceedings of the 2021 18th International Multi-Conference on Systems, Signals & Devices (SSD), Monastir, Tunisia.","DOI":"10.1109\/SSD52085.2021.9429429"},{"key":"ref_30","first-page":"45","article-title":"Modal Frequencies Based Human Action Recognition Using Silhouettes And Simplicial Elements","volume":"35","author":"Mishra","year":"2022","journal-title":"Int. J. Eng."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Ha, J., Shin, J., Park, H., and Paik, J. (2021). Action recognition network using stacked short-term deep features and bidirectional moving average. Appl. Sci., 11.","DOI":"10.3390\/app11125563"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Gharahbagh, A.A., Hajihashemi, V., Ferreira, M.C., Machado, J.J., and Tavares, J.M.R. (2022). Best Frame Selection to Enhance Training Step Efficiency in Video-Based Human Action Recognition. Appl. Sci., 12.","DOI":"10.3390\/app12041830"},{"key":"ref_33","first-page":"3152","article-title":"Human activity recognition in videos based on a Two Levels K-means and Hierarchical Codebooks","volume":"6","author":"Hajihashemi","year":"2016","journal-title":"Int. J. Mechatron. Electr. Comput. Technol"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Deshpnande, A., and Warhade, K.K. (2021, January 5\u20137). An Improved Model for Human Activity Recognition by Integrated feature Approach and Optimized SVM. Proceedings of the 2021 International Conference on Emerging Smart Computing and Informatics (ESCI), Pune, India.","DOI":"10.1109\/ESCI50559.2021.9396914"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Ma, J., Tao, X., Ma, J., Hong, X., and Gong, Y. (2021, January 19\u201322). Class incremental learning for video action classification. 
Proceedings of the 2021 IEEE International Conference on Image Processing (ICIP), Anchorage, AK, USA.","DOI":"10.1109\/ICIP42928.2021.9506788"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Shekokar, R., and Kale, S. (2021, January 2\u20134). Deep Learning for Human Action Recognition. Proceedings of the 2021 6th International Conference for Convergence in Technology (I2CT), Maharashtra, India.","DOI":"10.1109\/I2CT51068.2021.9418080"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Sawanglok, T., and Songmuang, P. (2021, January 21\u201324). Data Preparation for Reducing Computational Time with Transpose Stack Matrix for Action Recognition. Proceedings of the 2021 13th International Conference on Knowledge and Smart Technology (KST), Bangsaen, Thailand.","DOI":"10.1109\/KST51265.2021.9415834"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Shi, S., and Jung, C. (2021, January 5\u20138). Deep Metric Learning for Human Action Recognition with SlowFast Networks. Proceedings of the 2021 International Conference on Visual Communications and Image Processing (VCIP), Munich, Germany.","DOI":"10.1109\/VCIP53242.2021.9675393"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"012006","DOI":"10.1088\/1742-6596\/2093\/1\/012006","article-title":"Human Behavior Recognition Method based on Two-layer LSTM Network with Attention Mechanism","volume":"2093","author":"Gao","year":"2021","journal-title":"J. Phys. Conf. Ser."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"89287","DOI":"10.1109\/ACCESS.2021.3088155","article-title":"Human action recognition based on motion feature and manifold learning","volume":"9","author":"Wang","year":"2021","journal-title":"IEEE Access"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Nasir, I.M., Raza, M., Shah, J.H., Khan, M.A., and Rehman, A. (2021, January 6\u20137). Human action recognition using machine learning in uncontrolled environment. 
Proceedings of the 2021 1st International Conference on Artificial Intelligence and Data Analytics (CAIDA), Riyadh, Saudi Arabia.","DOI":"10.1109\/CAIDA51941.2021.9425202"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"38051","DOI":"10.1007\/s11042-022-14056-8","article-title":"STHARNet: Spatio-temporal human action recognition network in content based video retrieval","volume":"82","author":"Sowmyayani","year":"2022","journal-title":"Multimed. Tools Appl."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"e12805","DOI":"10.1111\/exsy.12805","article-title":"A sparse coded composite descriptor for human activity recognition","volume":"39","author":"Singh","year":"2022","journal-title":"Expert Syst."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Mithsara, W. (2022, January 15\u201317). Comparative Analysis of AI-powered Approaches for Skeleton-based Child and Adult Action Recognition in Multi-person Environment. Proceedings of the 2022 International Conference on Computer Science and Software Engineering (CSASE), Duhok, Iraq.","DOI":"10.1109\/CSASE51777.2022.9759717"},{"key":"ref_45","unstructured":"Nair, S.A.L., and Megalingam, R.K. (2022, January 27\u201328). Fusion of Bag of Visual Words with Neural Network for Human Action Recognition. Proceedings of the 2022 12th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India."},{"key":"ref_46","unstructured":"Megalingam, R.K., and Nair S., A.L. (2021, January 10\u201311). Human Action Recognition: A Review. Proceedings of the 2021 10th International Conference on System Modeling & Advancement in Research Trends (SMART), Moradabad, India."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Bayoudh, K., Hamdaoui, F., and Mtibaa, A. (2022, January 25\u201327). An Attention-based Hybrid 2D\/3D CNN-LSTM for Human Action Recognition. 
Proceedings of the 2022 2nd International Conference on Computing and Information Technology (ICCIT), Tabuk, Saudi Arabia.","DOI":"10.1109\/ICCIT52419.2022.9711631"},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"116101","DOI":"10.1063\/5.0109807","article-title":"Action recognition based on discrete cosine transform by optical pixel-wise encoding","volume":"7","author":"Liang","year":"2022","journal-title":"APL Photonics"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"45","DOI":"10.1186\/s44147-022-00098-0","article-title":"A novel human activity recognition architecture: Using residual inception ConvLSTM layer","volume":"69","author":"Khater","year":"2022","journal-title":"J. Eng. Appl. Sci."},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Momin, M.S., Sufian, A., Barman, D., Dutta, P., Dong, M., and Leo, M. (2022). In-home older adults\u2019 activity pattern monitoring using depth sensors: A review. Sensors, 22.","DOI":"10.3390\/s22239067"},{"key":"ref_51","first-page":"3200","article-title":"Human action recognition from various data modalities: A review","volume":"45","author":"Sun","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_52","first-page":"4652946","article-title":"Research on Human Action Feature Detection and Recognition Algorithm Based on Deep Learning","volume":"2022","author":"Wu","year":"2022","journal-title":"Mob. Inf. Syst."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Ahn, D., Kim, S., Hong, H., and Ko, B.C. (2023, January 3\u20137). STAR-Transformer: A spatio-temporal cross attention transformer for human action recognition. Proceedings of the IEEE\/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA.","DOI":"10.1109\/WACV56688.2023.00333"},{"key":"ref_54","doi-asserted-by":"crossref","unstructured":"Vaitesswar, U., and Yeo, C.K. (2023, January 9\u201311). 
Multi-Range Mixed Graph Convolution Network for Skeleton-Based Action Recognition. Proceedings of the 2023 5th Asia Pacific Information Technology Conference, Ho Chi Minh, Vietnam.","DOI":"10.1145\/3588155.3588163"},{"key":"ref_55","doi-asserted-by":"crossref","unstructured":"Lee, J., Lee, M., Lee, D., and Lee, S. (2023, January 2\u20136). Hierarchically decomposed graph convolutional networks for skeleton-based action recognition. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Paris, France.","DOI":"10.1109\/ICCV51070.2023.00958"},{"key":"ref_56","doi-asserted-by":"crossref","unstructured":"Wu, J., Wang, L., Chong, G., and Feng, H. (2022, January 7\u201310). 2S-AGCN Human Behavior Recognition Based on New Partition Strategy. Proceedings of the 2022 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Chiang Mai, Thailand.","DOI":"10.23919\/APSIPAASC55919.2022.9980273"},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Radulescu, B.A., and Radulescu, V. (2021, January 2\u20133). Modeling 3D convolution architecture for actions recognition. Proceedings of the Information Storage and Processing Systems. American Society of Mechanical Engineers, Online.","DOI":"10.1115\/ISPS2021-65036"},{"key":"ref_58","doi-asserted-by":"crossref","unstructured":"Yan, Z., Yongfeng, Q., and Xiaoxu, P. (2022, January 15\u201317). Dangerous Action Recognition for Spatial-Temporal Graph Convolutional Networks. Proceedings of the 2022 IEEE 12th International Conference on Electronics Information and Emergency Communication (ICEIEC), Beijing, China.","DOI":"10.1109\/ICEIEC54567.2022.9835097"},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Liao, T., Zhao, J., Liu, Y., Ivanov, K., Xiong, J., and Yan, Y. (2022, January 6\u20138). Deep transfer learning with graph neural network for sensor-based human activity recognition. 
Proceedings of the 2022 IEEE International Conference on Bioinformatics and Biomedicine (BIBM), Las Vegas, NV, USA.","DOI":"10.1109\/BIBM55620.2022.9995660"},{"key":"ref_60","doi-asserted-by":"crossref","first-page":"21397","DOI":"10.1109\/ACCESS.2018.2825106","article-title":"Dynamic mode decomposition based video shot detection","volume":"6","author":"Bi","year":"2018","journal-title":"IEEE Access"},{"key":"ref_61","doi-asserted-by":"crossref","first-page":"5136","DOI":"10.1109\/TIP.2013.2282081","article-title":"Fast video shot boundary detection based on SVD and pattern matching","volume":"22","author":"Lu","year":"2013","journal-title":"IEEE Trans. Image Process."},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"28109","DOI":"10.1007\/s11042-021-11052-2","article-title":"Video shot boundary detection using hybrid dual tree complex wavelet transform with Walsh Hadamard transform","volume":"80","author":"Mishra","year":"2021","journal-title":"Multimed. Tools Appl."},{"key":"ref_63","doi-asserted-by":"crossref","first-page":"641","DOI":"10.1007\/s11042-020-09697-6","article-title":"Video shot boundary detection using block based cumulative approach","volume":"80","author":"Rashmi","year":"2021","journal-title":"Multimed. Tools Appl."},{"key":"ref_64","doi-asserted-by":"crossref","first-page":"164","DOI":"10.1016\/j.jvcir.2015.03.003","article-title":"Moving object detection and tracking from video captured by moving camera","volume":"30","author":"Hu","year":"2015","journal-title":"J. Vis. Commun. Image Represent."},{"key":"ref_65","doi-asserted-by":"crossref","first-page":"195","DOI":"10.1109\/TCI.2019.2891389","article-title":"Panoramic robust pca for foreground\u2013background separation on noisy, free-motion camera video","volume":"5","author":"Moore","year":"2019","journal-title":"IEEE Trans. Comput. Imaging"},{"key":"ref_66","doi-asserted-by":"crossref","unstructured":"Zhang, W., Sun, X., and Yu, Q. (2020). 
Moving Object Detection under a Moving Camera via Background Orientation Reconstruction. Sensors, 20.","DOI":"10.3390\/s20113103"},{"key":"ref_67","first-page":"1320","article-title":"Human Gait Detection Using Silhouette Image Recognition","volume":"12","author":"Ahammed","year":"2021","journal-title":"Turk. J. Comput. Math. Educ. (TURCOMAT)"},{"key":"ref_68","unstructured":"Lam, T.H., and Lee, R.S. (2005). Advances in Biometrics, Springer."},{"key":"ref_69","doi-asserted-by":"crossref","unstructured":"Jawed, B., Khalifa, O.O., and Bhuiyan, S.S.N. (2018, January 19\u201320). Human gait recognition system. Proceedings of the 2018 7th International Conference on Computer and Communication Engineering (ICCCE), Kuala Lumpur, Malaysia.","DOI":"10.1109\/ICCCE.2018.8539245"},{"key":"ref_70","doi-asserted-by":"crossref","first-page":"236","DOI":"10.1080\/03772063.2017.1409085","article-title":"Robust human action recognition using AREI features and trajectory analysis from silhouette image sequence","volume":"65","author":"Maity","year":"2019","journal-title":"IETE J. Res."},{"key":"ref_71","doi-asserted-by":"crossref","first-page":"1595","DOI":"10.1007\/s00371-018-1560-4","article-title":"A unified model for human activity recognition using spatial distribution of gradients and difference of Gaussian kernel","volume":"35","author":"Vishwakarma","year":"2019","journal-title":"Vis. Comput."},{"key":"ref_72","doi-asserted-by":"crossref","first-page":"470","DOI":"10.1016\/j.neucom.2022.02.079","article-title":"An overview of edge and object contour detection","volume":"488","author":"Yang","year":"2022","journal-title":"Neurocomputing"},{"key":"ref_73","doi-asserted-by":"crossref","first-page":"321","DOI":"10.1007\/BF00133570","article-title":"Snakes: Active contour models","volume":"1","author":"Kass","year":"1988","journal-title":"Int. J. Comput. 
Vis."},{"key":"ref_74","doi-asserted-by":"crossref","first-page":"211","DOI":"10.1016\/1049-9660(91)90028-N","article-title":"On active contour models and balloons","volume":"53","author":"Cohen","year":"1991","journal-title":"CVGIP: Image Underst."},{"key":"ref_75","doi-asserted-by":"crossref","first-page":"359","DOI":"10.1109\/83.661186","article-title":"Snakes, shapes, and gradient vector flow","volume":"7","author":"Xu","year":"1998","journal-title":"IEEE Trans. Image Process."},{"key":"ref_76","doi-asserted-by":"crossref","first-page":"2096","DOI":"10.1109\/TIP.2007.899601","article-title":"Active contour external force using vector field convolution for image segmentation","volume":"16","author":"Li","year":"2007","journal-title":"IEEE Trans. Image Process."},{"key":"ref_77","unstructured":"Mumford, D., and Shah, J. (1985, January 9\u201313). Boundary detection by minimizing functionals. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA."},{"key":"ref_78","doi-asserted-by":"crossref","first-page":"61","DOI":"10.1023\/A:1007979827043","article-title":"Geodesic active contours","volume":"22","author":"Caselles","year":"1997","journal-title":"Int. J. Comput. Vis."},{"key":"ref_79","doi-asserted-by":"crossref","first-page":"266","DOI":"10.1109\/83.902291","article-title":"Active contours without edges","volume":"10","author":"Chan","year":"2001","journal-title":"IEEE Trans. Image Process."},{"key":"ref_80","doi-asserted-by":"crossref","first-page":"1940","DOI":"10.1109\/TIP.2008.2002304","article-title":"Minimization of region-scalable fitting energy for image segmentation","volume":"17","author":"Li","year":"2008","journal-title":"IEEE Trans. Image Process."},{"key":"ref_81","doi-asserted-by":"crossref","first-page":"413","DOI":"10.1016\/j.asoc.2018.02.034","article-title":"Image co-segmentation using dual active contours","volume":"66","author":"Ghosh","year":"2018","journal-title":"Appl. 
Soft Comput."},{"key":"ref_82","doi-asserted-by":"crossref","first-page":"1639","DOI":"10.1109\/TIP.2017.2781424","article-title":"Robust object co-segmentation using background prior","volume":"27","author":"Han","year":"2017","journal-title":"IEEE Trans. Image Process."},{"key":"ref_83","doi-asserted-by":"crossref","first-page":"112901","DOI":"10.1016\/j.eswa.2019.112901","article-title":"A comprehensive overview of relevant methods of image cosegmentation","volume":"140","author":"Merdassi","year":"2020","journal-title":"Expert Syst. Appl."},{"key":"ref_84","doi-asserted-by":"crossref","first-page":"115003","DOI":"10.1016\/j.eswa.2021.115003","article-title":"An efficient multilevel color image thresholding based on modified whale optimization algorithm","volume":"178","author":"Anitha","year":"2021","journal-title":"Expert Syst. Appl."},{"key":"ref_85","doi-asserted-by":"crossref","unstructured":"Jing, Y., Kong, T., Wang, W., Wang, L., Li, L., and Tan, T. (2021, January 20\u201325). Locate then segment: A strong pipeline for referring image segmentation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00973"},{"key":"ref_86","doi-asserted-by":"crossref","first-page":"2481","DOI":"10.1109\/TPAMI.2016.2644615","article-title":"Segnet: A deep convolutional encoder-decoder architecture for image segmentation","volume":"39","author":"Badrinarayanan","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_87","doi-asserted-by":"crossref","first-page":"834","DOI":"10.1109\/TPAMI.2017.2699184","article-title":"Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs","volume":"40","author":"Chen","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_88","doi-asserted-by":"crossref","unstructured":"Lin, G., Milan, A., Shen, C., and Reid, I. (2017, January 21\u201326). 
Refinenet: Multi-path refinement networks for high-resolution semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.549"},{"key":"ref_89","doi-asserted-by":"crossref","unstructured":"Long, J., Shelhamer, E., and Darrell, T. (2015, January 7\u201312). Fully convolutional networks for semantic segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA.","DOI":"10.1109\/CVPR.2015.7298965"},{"key":"ref_90","doi-asserted-by":"crossref","unstructured":"Wang, X., Kong, T., Shen, C., Jiang, Y., and Li, L. (2020, January 23\u201328). Solo: Segmenting objects by locations. Proceedings of the European Conference on Computer Vision. Springer, Glasgow, UK.","DOI":"10.1007\/978-3-030-58523-5_38"},{"key":"ref_91","doi-asserted-by":"crossref","unstructured":"Kabilan, R., Devaraj, G.P., Muthuraman, U., Muthukumaran, N., Gabriel, J.Z., and Swetha, R. (2021, January 4\u20136). Efficient color image segmentation using fastmap algorithm. Proceedings of the 2021 Third International Conference on Intelligent Communication Technologies and Virtual Mobile Networks (ICICV), Tirunelveli, India.","DOI":"10.1109\/ICICV50876.2021.9388470"},{"key":"ref_92","doi-asserted-by":"crossref","first-page":"11654","DOI":"10.1007\/s10489-022-04064-4","article-title":"Multilevel thresholding image segmentation using meta-heuristic optimization algorithms: Comparative analysis, open challenges and new trends","volume":"53","author":"Abualigah","year":"2022","journal-title":"Appl. Intell."},{"key":"ref_93","doi-asserted-by":"crossref","first-page":"114636","DOI":"10.1016\/j.eswa.2021.114636","article-title":"Color image segmentation using Kapur, Otsu and minimum cross entropy functions based on exchange market algorithm","volume":"172","author":"Sathya","year":"2021","journal-title":"Expert Syst. 
Appl."},{"key":"ref_94","doi-asserted-by":"crossref","first-page":"713","DOI":"10.1007\/s11554-014-0423-0","article-title":"Massively parallel Lucas Kanade optical flow for real-time video processing applications","volume":"11","author":"Plyer","year":"2016","journal-title":"J. Real-Time Image Process."},{"key":"ref_95","doi-asserted-by":"crossref","unstructured":"Sundberg, P., Brox, T., Maire, M., Arbel\u00e1ez, P., and Malik, J. (2011, January 20\u201325). Occlusion boundary detection and figure\/ground assignment from optical flow. Proceedings of the CVPR 2011, Colorado Springs, CO, USA.","DOI":"10.1109\/CVPR.2011.5995364"},{"key":"ref_96","doi-asserted-by":"crossref","unstructured":"Galasso, F., Nagaraja, N.S., Cardenas, T.J., Brox, T., and Schiele, B. (2013, January 1\u20138). A unified video segmentation benchmark: Annotation, metrics and analysis. Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia.","DOI":"10.1109\/ICCV.2013.438"},{"key":"ref_97","doi-asserted-by":"crossref","first-page":"4334","DOI":"10.1109\/TCYB.2022.3167711","article-title":"Evolutionary Robust Clustering Over Time for Temporal Data","volume":"53","author":"Zhao","year":"2022","journal-title":"IEEE Trans. Cybern."},{"key":"ref_98","doi-asserted-by":"crossref","unstructured":"Han, D., Xiao, Y., Zhan, P., Li, T., and Fan, M. (2022, January 25\u201327). A Semi-Supervised Video Object Segmentation Method Based on ConvNext and Unet. Proceedings of the 2022 41st Chinese Control Conference (CCC), Hefei, China.","DOI":"10.23919\/CCC55666.2022.9902558"},{"key":"ref_99","doi-asserted-by":"crossref","unstructured":"Hu, Y.T., Huang, J.B., and Schwing, A.G. (2018, January 2\u201314). Unsupervised video object segmentation using motion saliency-guided spatio-temporal propagation. 
Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01246-5_48"},{"key":"ref_100","doi-asserted-by":"crossref","unstructured":"Schuldt, C., Laptev, I., and Caputo, B. (2004, January 23\u201326). Recognizing human actions: A local SVM approach. Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK.","DOI":"10.1109\/ICPR.2004.1334462"},{"key":"ref_101","unstructured":"Laptev, I. (2004). Local Spatio-Temporal Image Features for Motion Interpretation. [Ph.D. Thesis, KTH Numerisk Analys Och Datalogi]."},{"key":"ref_102","unstructured":"Laptev, I., and Lindeberg, T. (2004, January 15). Local descriptors for spatio-temporal recognition. Proceedings of the International Workshop on Spatial Coherence for Visual Motion Analysis, Prague, Czech Republic."},{"key":"ref_103","doi-asserted-by":"crossref","unstructured":"Laptev, I., and Lindeberg, T. (2004, January 23\u201326). Velocity adaptation of space-time interest points. Proceedings of the 17th International Conference on Pattern Recognition, Cambridge, UK.","DOI":"10.1109\/ICPR.2004.1334003"},{"key":"ref_104","doi-asserted-by":"crossref","first-page":"107","DOI":"10.1007\/s11263-005-1838-7","article-title":"On space-time interest points","volume":"64","author":"Laptev","year":"2005","journal-title":"Int. J. Comput. Vis."},{"key":"ref_105","doi-asserted-by":"crossref","unstructured":"Blank, M., Gorelick, L., Shechtman, E., Irani, M., and Basri, R. (2005, January 17\u201321). Actions as space-time shapes. Proceedings of the Tenth IEEE International Conference on Computer Vision (ICCV\u201905), Beijing, China.","DOI":"10.1109\/ICCV.2005.28"},{"key":"ref_106","doi-asserted-by":"crossref","unstructured":"Nadeem, A., Jalal, A., and Kim, K. (2020, January 17\u201319). Human actions tracking and recognition based on body parts detection via Artificial neural network. 
Proceedings of the 2020 3rd International Conference on Advancements in Computational Sciences (ICACS), Lahore, Pakistan.","DOI":"10.1109\/ICACS47775.2020.9055951"},{"key":"ref_107","doi-asserted-by":"crossref","first-page":"17303","DOI":"10.1007\/s11042-015-3000-z","article-title":"Integration of moment invariants and uniform local binary patterns for human activity recognition in video sequences","volume":"75","author":"Nigam","year":"2016","journal-title":"Multimed. Tools Appl."},{"key":"ref_108","first-page":"13","article-title":"Robust feature extraction and classification based automated human action recognition system for multiple datasets","volume":"13","author":"Basavaiah","year":"2020","journal-title":"Int. J. Intell. Eng. Syst."},{"key":"ref_109","doi-asserted-by":"crossref","unstructured":"Kuehne, H., Jhuang, H., Garrote, E., Poggio, T., and Serre, T. (2011, January 6\u201313). HMDB: A large video database for human motion recognition. Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain.","DOI":"10.1109\/ICCV.2011.6126543"},{"key":"ref_110","unstructured":"Soomro, K., Zamir, A.R., and Shah, M. (2012). A dataset of 101 human action classes from videos in the wild. arXiv."},{"key":"ref_111","doi-asserted-by":"crossref","unstructured":"Liu, H., Ju, Z., Ji, X., Chan, C.S., and Khoury, M. (2017). Human Motion Sensing and Recognition, Springer.","DOI":"10.1007\/978-3-662-53692-6"},{"key":"ref_112","doi-asserted-by":"crossref","unstructured":"Dasari, R., and Chen, C.W. (2018, January 10\u201312). Mpeg cdvs feature trajectories for action recognition in videos. Proceedings of the 2018 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR), Miami, FL, USA.","DOI":"10.1109\/MIPR.2018.00069"},{"key":"ref_113","doi-asserted-by":"crossref","unstructured":"Sargano, A.B., Wang, X., Angelov, P., and Habib, Z. (2017, January 14\u201319). Human action recognition using transfer learning with deep representations. 
Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA.","DOI":"10.1109\/IJCNN.2017.7965890"},{"key":"ref_114","doi-asserted-by":"crossref","first-page":"896","DOI":"10.1049\/iet-ipr.2016.0627","article-title":"Action recognition using fast HOG3D of integral videos and Smith\u2013Waterman partial matching","volume":"12","author":"Ahmed","year":"2018","journal-title":"IET Image Process."},{"key":"ref_115","doi-asserted-by":"crossref","unstructured":"Jain, S.B., and Sreeraj, M. (2015, January 2\u20134). Multi-posture human detection based on hybrid HOG-BO feature. Proceedings of the 2015 Fifth International Conference on Advances in Computing and Communications (ICACC), Kochi, India.","DOI":"10.1109\/ICACC.2015.99"},{"key":"ref_116","doi-asserted-by":"crossref","first-page":"817","DOI":"10.1109\/TCYB.2013.2273174","article-title":"Spatio-temporal Laplacian pyramid coding for action recognition","volume":"44","author":"Shao","year":"2013","journal-title":"IEEE Trans. Cybern."},{"key":"ref_117","first-page":"241","article-title":"Action recognition based on multi-scale oriented neighborhood features","volume":"8","author":"Yang","year":"2015","journal-title":"Int. J. Signal Process. Image Process. Pattern Recognit."},{"key":"ref_118","first-page":"95","article-title":"Action recognition based on spatio-temporal log-Euclidean covariance matrix","volume":"9","author":"Cheng","year":"2016","journal-title":"Int. J. Signal Process. Image Process. Pattern Recognit."},{"key":"ref_119","doi-asserted-by":"crossref","first-page":"89","DOI":"10.1186\/s13640-017-0236-8","article-title":"A framework of human detection and action recognition based on uniform segmentation and combination of Euclidean distance and joint entropy-based features selection","volume":"2017","author":"Sharif","year":"2017","journal-title":"EURASIP J. 
Image Video Process."},{"key":"ref_120","doi-asserted-by":"crossref","first-page":"690","DOI":"10.1007\/s10489-020-01823-z","article-title":"A combined multiple action recognition and summarization for surveillance video sequences","volume":"51","author":"Elharrouss","year":"2021","journal-title":"Appl. Intell."},{"key":"ref_121","doi-asserted-by":"crossref","first-page":"115","DOI":"10.1007\/s11263-015-0861-6","article-title":"Kernelized multiview projection for robust action recognition","volume":"118","author":"Shao","year":"2016","journal-title":"Int. J. Comput. Vis."},{"key":"ref_122","doi-asserted-by":"crossref","first-page":"1510","DOI":"10.1109\/TMM.2017.2666540","article-title":"Sequential deep trajectory descriptor for action recognition with three-stream CNN","volume":"19","author":"Shi","year":"2017","journal-title":"IEEE Trans. Multimed."},{"key":"ref_123","doi-asserted-by":"crossref","first-page":"8585","DOI":"10.1007\/s00521-019-04365-9","article-title":"Human action recognition with bag of visual words using different machine learning methods and hyperparameter optimization","volume":"32","author":"Aslan","year":"2020","journal-title":"Neural Comput. Appl."},{"key":"ref_124","doi-asserted-by":"crossref","first-page":"104090","DOI":"10.1016\/j.imavis.2020.104090","article-title":"A framework of human action recognition using length control features fusion and weighted entropy-variances based feature selection","volume":"106","author":"Afza","year":"2021","journal-title":"Image Vis. Comput."},{"key":"ref_125","doi-asserted-by":"crossref","first-page":"882","DOI":"10.1016\/j.ijleo.2015.02.053","article-title":"Human action recognition via compressive-sensing-based dimensionality reduction","volume":"126","author":"Jiang","year":"2015","journal-title":"Optik"},{"key":"ref_126","doi-asserted-by":"crossref","unstructured":"Zhang, S., Zhang, W., and Li, Y. (2016, January 22\u201323). Human action recognition based on multifeature fusion. 
Proceedings of the Chinese Intelligent Systems Conference, Xiamen, China.","DOI":"10.1007\/978-981-10-2335-4_18"},{"key":"ref_127","doi-asserted-by":"crossref","unstructured":"Kami\u0144ski, \u0141., Ma\u0107kowiak, S., and Doma\u0144ski, M. (2017, January 10\u201314). Human activity recognition using standard descriptors of MPEG CDVS. Proceedings of the 2017 IEEE International Conference on Multimedia & Expo Workshops (ICMEW), Hong Kong, China.","DOI":"10.1109\/ICMEW.2017.8026248"},{"key":"ref_128","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv."},{"key":"ref_129","unstructured":"Tran, D., Wang, H., Torresani, L., and Feiszli, M. (November, January 27). Video classification with channel-separated convolutional networks. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."},{"key":"ref_130","doi-asserted-by":"crossref","unstructured":"Tran, D., Wang, H., Torresani, L., Ray, J., LeCun, Y., and Paluri, M. (2018, January 18\u201323). A closer look at spatiotemporal convolutions for action recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00675"},{"key":"ref_131","doi-asserted-by":"crossref","unstructured":"Karpathy, A., Toderici, G., Shetty, S., Leung, T., Sukthankar, R., and Fei-Fei, L. (2014, January 23\u201328). Large-scale video classification with convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA.","DOI":"10.1109\/CVPR.2014.223"},{"key":"ref_132","doi-asserted-by":"crossref","unstructured":"Li, Y., Ji, B., Shi, X., Zhang, J., Kang, B., and Wang, L. (2020, January 13\u201319). Tea: Temporal excitation and aggregation for action recognition. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00099"},{"key":"ref_133","doi-asserted-by":"crossref","first-page":"2589","DOI":"10.1007\/s10489-020-01905-y","article-title":"Video sketch: A middle-level representation for action recognition","volume":"51","author":"Zhang","year":"2021","journal-title":"Appl. Intell."},{"key":"ref_134","doi-asserted-by":"crossref","unstructured":"Carreira, J., and Zisserman, A. (2017, January 21\u201326). Quo vadis, action recognition? a new model and the kinetics dataset. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.502"},{"key":"ref_135","doi-asserted-by":"crossref","unstructured":"Wang, L., Xiong, Y., Wang, Z., Qiao, Y., Lin, D., Tang, X., and Van Gool, L. (2016, January 11\u201314). Temporal segment networks: Towards good practices for deep action recognition. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46484-8_2"},{"key":"ref_136","unstructured":"He, D., Zhou, Z., Gan, C., Li, F., Liu, X., Li, Y., Wang, L., and Wen, S. (February, January 27). Stnet: Local and global spatial-temporal modeling for action recognition. Proceedings of the AAAI Conference on Artificial Intelligence, Honolulu, HI, USA."},{"key":"ref_137","unstructured":"Jiang, B., Wang, M., Gan, W., Wu, W., and Yan, J. (November, January 27). Stm: Spatiotemporal and motion encoding for action recognition. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Republic of Korea."}],"container-title":["Information"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2078-2489\/14\/11\/616\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T21:23:42Z","timestamp":1760131422000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2078-2489\/14\/11\/616"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,11,15]]},"references-count":137,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2023,11]]}},"alternative-id":["info14110616"],"URL":"https:\/\/doi.org\/10.3390\/info14110616","relation":{},"ISSN":["2078-2489"],"issn-type":[{"value":"2078-2489","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,11,15]]}}}