{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,3]],"date-time":"2026-03-03T10:10:24Z","timestamp":1772532624015,"version":"3.50.1"},"reference-count":63,"publisher":"PeerJ","license":[{"start":{"date-parts":[[2026,3,3]],"date-time":"2026-03-03T00:00:00Z","timestamp":1772496000000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia","award":["PNURSP2026R513"],"award-info":[{"award-number":["PNURSP2026R513"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"abstract":"<jats:p>Emotion recognition plays an important role in a wide range of application domains. Although previous studies have made progress in this area, they often fall short of a deeper understanding of emotions and of inferring their underlying causes. To address these limitations, we propose an emotion recognition framework that integrates visual, audio, and textual modalities within a unified architecture. The framework incorporates an adaptive cross-modal attention module that captures inter-modal interactions, dynamically adjusting the contribution of each modality based on contextual relevance to enhance recognition accuracy. Additionally, an emotion causality inference module uses a fine-tuned, trainable LLaMA2-Chat (7B) model to jointly process image and text data, identifying word-level clues associated with the expressed emotions. Furthermore, a real-time emotion feedback module delivers instantaneous assessments of emotional states during conversations, supporting timely and context-aware interventions. 
The experimental results on four datasets, SEMAINE, AESI, ECF, and MER-2024, demonstrate that our method achieves higher F1-scores than baseline approaches.<\/jats:p>","DOI":"10.7717\/peerj-cs.3629","type":"journal-article","created":{"date-parts":[[2026,3,3]],"date-time":"2026-03-03T08:28:24Z","timestamp":1772526504000},"page":"e3629","source":"Crossref","is-referenced-by-count":0,"title":["Cross-modal emotion recognition with causality inference in human conversations"],"prefix":"10.7717","volume":"12","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0067-692X","authenticated-orcid":true,"given":"Tahani Jaser","family":"Alahmadi","sequence":"first","affiliation":[{"name":"Department of Information Systems, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-4243-0928","authenticated-orcid":true,"given":"Galiya","family":"Ybytayeva","sequence":"additional","affiliation":[{"name":"School of Engineering, International Educational Corporation, Kazakh Leading Academy of Architecture and Civil Engineering, Almaty, Kazakhstan"}]},{"given":"Akbayan","family":"Bekarystankyzy","sequence":"additional","affiliation":[{"name":"School of Digital Technologies, Narxoz University, Almaty, Kazakhstan"}]},{"given":"Khalid J.","family":"Alzahrani","sequence":"additional","affiliation":[{"name":"Department of Clinical Laboratories Sciences, College of Applied Medical Sciences, Taif University, Taif, Saudi Arabia"}]},{"given":"Rizwan","family":"Abbas","sequence":"additional","affiliation":[{"name":"College of Computer Science and Technology, Zhejiang University, Hangzhou, Zhejiang, China"}]},{"given":"Hala","family":"Abdelhameed","sequence":"additional","affiliation":[{"name":"Khaybar Applied College, Taibah University, Medina, Saudi Arabia"},{"name":"Faculty of Computer and Artificial Intelligence, Fayoum University, Fayoum, 
Egypt"}]}],"member":"4443","published-online":{"date-parts":[[2026,3,3]]},"reference":[{"issue":"2","key":"10.7717\/peerj-cs.3629\/ref-1","doi-asserted-by":"publisher","first-page":"129073","DOI":"10.1016\/j.neucom.2024.129073","article-title":"Context-based emotion recognition: a survey","volume":"618","author":"Abbas","year":"2025a","journal-title":"Neurocomputing"},{"issue":"3","key":"10.7717\/peerj-cs.3629\/ref-2","doi-asserted-by":"publisher","first-page":"253","DOI":"10.1007\/s00530-025-01780-y","article-title":"Emotion recognition in live broadcasting: a multimodal deep learning framework","volume":"31","author":"Abbas","year":"2025b","journal-title":"Multimedia Systems"},{"issue":"5","key":"10.7717\/peerj-cs.3629\/ref-3","doi-asserted-by":"publisher","first-page":"127713","DOI":"10.1016\/j.neucom.2024.127713","article-title":"Deep operational audio-visual emotion recognition","volume":"588","author":"Akt\u00fcrk","year":"2024","journal-title":"Neurocomputing"},{"issue":"1","key":"10.7717\/peerj-cs.3629\/ref-4","doi-asserted-by":"publisher","first-page":"139","DOI":"10.1080\/0144929x.2022.2156387","article-title":"Machine learning techniques for emotion detection and sentiment analysis: current state, challenges, and future directions","volume":"43","author":"Alslaity","year":"2024","journal-title":"Behaviour & Information Technology"},{"issue":"4","key":"10.7717\/peerj-cs.3629\/ref-5","doi-asserted-by":"publisher","first-page":"100770","DOI":"10.1016\/j.entcom.2024.100770","article-title":"Advancing personalized human-robot interaction in the smart world through emotional AI in entertainment robots","volume":"52","author":"Aoudni","year":"2025","journal-title":"Entertainment Computing"},{"key":"10.7717\/peerj-cs.3629\/ref-6","doi-asserted-by":"crossref","DOI":"10.1109\/ICIT63637.2025.10965274","article-title":"Multimodal emotion recognition based on multi-scale facial features and cross-modal 
attention","author":"Bao","year":"2025"},{"key":"10.7717\/peerj-cs.3629\/ref-7","first-page":"599","article-title":"Research on the detection of causality for textual emotion-cause pair based on BERT","author":"Cao","year":"2022"},{"issue":"4","key":"10.7717\/peerj-cs.3629\/ref-8","doi-asserted-by":"publisher","first-page":"1731","DOI":"10.1007\/s11280-022-01111-5","article-title":"Graph attention network based detection of causality for textual emotion-cause pair","volume":"26","author":"Cao","year":"2023","journal-title":"World Wide Web (WWW)"},{"issue":"6","key":"10.7717\/peerj-cs.3629\/ref-9","doi-asserted-by":"publisher","first-page":"e804","DOI":"10.7717\/peerj-cs.804","article-title":"Comparing supervised and unsupervised approaches to multimodal emotion recognition","volume":"7","author":"Carbonell","year":"2021","journal-title":"PeerJ Computer Science"},{"issue":"5","key":"10.7717\/peerj-cs.3629\/ref-10","doi-asserted-by":"publisher","first-page":"312","DOI":"10.3109\/15622975.2015.1012228","article-title":"The development of the Athens Emotional States Inventory (AESI): collection, validation and automatic processing of emotionally loaded sentences","volume":"16","author":"Chaspari","year":"2015","journal-title":"The World Journal of Biological Psychiatry"},{"key":"10.7717\/peerj-cs.3629\/ref-11","first-page":"213","article-title":"EmoChat: bringing multimodal emotion detection to mobile conversation","author":"Chong","year":"2019"},{"issue":"32","key":"10.7717\/peerj-cs.3629\/ref-12","doi-asserted-by":"publisher","first-page":"23311","DOI":"10.1007\/s00521-021-06012-8","article-title":"Deep learning-based facial emotion recognition for human-computer interaction applications","volume":"35","author":"Chowdary","year":"2023","journal-title":"Neural Computing and Applications"},{"key":"10.7717\/peerj-cs.3629\/ref-13","doi-asserted-by":"publisher","first-page":"2349","DOI":"10.1109\/access.2023.3348518","article-title":"Residual relation-aware attention deep 
graph-recurrent model for emotion recognition in conversation","volume":"12","author":"Duong","year":"2024","journal-title":"IEEE Access"},{"key":"10.7717\/peerj-cs.3629\/ref-14","doi-asserted-by":"crossref","DOI":"10.1109\/WF-IoT54382.2022.10152117","article-title":"Speech emotion recognition using supervised deep recurrent system for mental health monitoring","author":"Elsayed","year":"2022"},{"key":"10.7717\/peerj-cs.3629\/ref-15","doi-asserted-by":"crossref","DOI":"10.1109\/ISC257844.2023.10293353","article-title":"Facial emotion recognition in smart education systems: a review","author":"Farman","year":"2023"},{"issue":"1","key":"10.7717\/peerj-cs.3629\/ref-16","doi-asserted-by":"publisher","first-page":"1","DOI":"10.4018\/ijswis.339187","article-title":"Affective prompt-tuning-based language model for semantic-based emotional text generation","volume":"20","author":"Gu","year":"2024","journal-title":"International Journal on Semantic Web and Information Systems"},{"key":"10.7717\/peerj-cs.3629\/ref-17","first-page":"109","article-title":"Recognition and visualization of facial expression and emotion in healthcare","author":"Hadjar","year":"2020"},{"issue":"14","key":"10.7717\/peerj-cs.3629\/ref-18","doi-asserted-by":"publisher","first-page":"103092","DOI":"10.1016\/j.specom.2024.103092","article-title":"Emotions recognition in audio signals using an extension of the latent block model","volume":"161","author":"Haj","year":"2024","journal-title":"Speech Communication"},{"key":"10.7717\/peerj-cs.3629\/ref-19","first-page":"2594","article-title":"ICON: interactive conversational memory network for multimodal emotion detection","volume-title":"Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, Brussels, Belgium, October 31\u2013November 4, 
2018","author":"Hazarika","year":"2018"},{"issue":"3","key":"10.7717\/peerj-cs.3629\/ref-20","doi-asserted-by":"publisher","first-page":"43","DOI":"10.17083\/ijsg.v10i3.603","article-title":"ReWIND: a CBT-based serious game to improve cognitive emotion regulation and anxiety disorder","volume":"10","author":"Heng","year":"2023","journal-title":"International Journal of Serious Games"},{"key":"10.7717\/peerj-cs.3629\/ref-21","doi-asserted-by":"publisher","first-page":"14324","DOI":"10.1109\/access.2024.3356185","article-title":"Cross-modal dynamic transfer learning for multimodal emotion recognition","volume":"12","author":"Hong","year":"2024","journal-title":"IEEE Access"},{"key":"10.7717\/peerj-cs.3629\/ref-22","first-page":"7837","article-title":"UniMSE: towards unified multimodal sentiment analysis and emotion recognition","volume-title":"Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7\u201311, 2022","author":"Hu","year":"2022"},{"key":"10.7717\/peerj-cs.3629\/ref-23","doi-asserted-by":"crossref","DOI":"10.21437\/Interspeech.2024-1733","article-title":"Cross-modal features interaction-and-aggregation network with self-consistency training for speech emotion recognition","volume-title":"25th Annual Conference of the International Speech Communication Association, Interspeech 2024, Kos, Greece, September 1\u20135, 2024","author":"Hu","year":"2024"},{"key":"10.7717\/peerj-cs.3629\/ref-24","first-page":"8139","article-title":"Causal discovery inspired unsupervised domain adaptation for emotion-cause pair extraction","volume-title":"Findings of the Association for Computational Linguistics: EMNLP 2024, Miami, Florida, USA, November 12\u201316, 2024","author":"Hua","year":"2024"},{"key":"10.7717\/peerj-cs.3629\/ref-25","first-page":"4148","article-title":"COGMEN: contextualized GNN based multimodal emotion recognition","volume-title":"Proceedings of the 2022 Conference of 
the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL 2022, Seattle, WA, United States, July 10\u201315, 2022","author":"Joshi","year":"2022"},{"issue":"12","key":"10.7717\/peerj-cs.3629\/ref-26","doi-asserted-by":"publisher","first-page":"e1091","DOI":"10.7717\/peerj-cs.1091","article-title":"Feature selection enhancement and feature space visualization for speech-based emotion recognition","volume":"8","author":"Kanwal","year":"2022","journal-title":"PeerJ Computer Science"},{"issue":"8","key":"10.7717\/peerj-cs.3629\/ref-27","doi-asserted-by":"publisher","first-page":"23129","DOI":"10.1007\/s11042-023-16342-5","article-title":"CNN-transformer based emotion classification from facial expressions and body gestures","volume":"83","author":"Karatay","year":"2024","journal-title":"Multimedia Tools and Applications"},{"key":"10.7717\/peerj-cs.3629\/ref-28","first-page":"34:1","article-title":"Building a llama2-finetuned LLM for Odia language utilizing domain knowledge instruction set","author":"Kohli","year":"2023"},{"key":"10.7717\/peerj-cs.3629\/ref-29","first-page":"41","article-title":"Emotions in latam: a new dataset and benchmark for emotion recognition in Latin America","author":"Kumar","year":"2025"},{"key":"10.7717\/peerj-cs.3629\/ref-30","doi-asserted-by":"publisher","first-page":"917","DOI":"10.1109\/taslp.2023.3340603","article-title":"Selective acoustic feature enhancement for speech emotion recognition with noisy speech","volume":"32","author":"Leem","year":"2024","journal-title":"IEEE\/ACM Transactions on Audio, Speech, and Language Processing"},{"key":"10.7717\/peerj-cs.3629\/ref-31","doi-asserted-by":"publisher","first-page":"114121","DOI":"10.1016\/j.dss.2023.114121","article-title":"An explanation framework and method for AI-based text emotion analysis and visualisation","volume":"178","author":"Li","year":"2024","journal-title":"Decision Support 
Systems"},{"key":"10.7717\/peerj-cs.3629\/ref-32","doi-asserted-by":"publisher","first-page":"6766","DOI":"10.1109\/tmm.2025.3590929","article-title":"ROSA: a robust self-adaptive model for multimodal emotion recognition with uncertain missing modalities","volume":"27","author":"Li","year":"2025","journal-title":"IEEE Transactions on Multimedia"},{"key":"10.7717\/peerj-cs.3629\/ref-33","first-page":"41","article-title":"MER 2024: semi-supervised learning, noise robustness, and open-vocabulary multimodal emotion recognition","volume-title":"Proceedings of the 2nd International Workshop on Multimodal and Responsible Affective Computing, MRAC 2024, Melbourne VIC, Australia, 28 October 2024\u20131 November 2024","author":"Lian","year":"2024"},{"issue":"7","key":"10.7717\/peerj-cs.3629\/ref-34","doi-asserted-by":"publisher","first-page":"18943","DOI":"10.1007\/s11042-023-16062-w","article-title":"Emotion prediction for textual data using GloVe based HeBi-CuDNNLSTM model","volume":"83","author":"Mahto","year":"2024","journal-title":"Multimedia Tools and Applications"},{"key":"10.7717\/peerj-cs.3629\/ref-35","first-page":"6818","article-title":"DialogueRNN: an attentive RNN for emotion detection in conversations","author":"Majumder","year":"2019"},{"key":"10.7717\/peerj-cs.3629\/ref-36","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2205.00763","article-title":"Data-driven emotional body language generation for social robotics","author":"Marmpena","year":"2022"},{"issue":"1","key":"10.7717\/peerj-cs.3629\/ref-37","doi-asserted-by":"publisher","first-page":"5","DOI":"10.1109\/t-affc.2011.20","article-title":"The SEMAINE database: annotated multimodal records of emotionally colored conversations between a person and a limited agent","volume":"3","author":"McKeown","year":"2012","journal-title":"IEEE Transactions on Affective Computing"},{"key":"10.7717\/peerj-cs.3629\/ref-38","first-page":"5661","article-title":"Affect2MM: affective analysis of multimedia content 
using emotion causality","author":"Mittal","year":"2021"},{"key":"10.7717\/peerj-cs.3629\/ref-39","article-title":"Mirroring facial expressions and emotions in dyadic conversations","volume-title":"Proceedings of the Tenth International Conference on Language Resources and Evaluation LREC 2016, Portoro\u017e, Slovenia, May 23\u201328, 2016","author":"Navarretta","year":"2016"},{"key":"10.7717\/peerj-cs.3629\/ref-40","first-page":"1264","article-title":"Speech emotion recognition using hybrid textual features, MFCC and deep learning technique","author":"Padman","year":"2023"},{"issue":"3","key":"10.7717\/peerj-cs.3629\/ref-41","doi-asserted-by":"publisher","first-page":"8663","DOI":"10.1007\/s11042-023-16036-y","article-title":"Machine learning approach of speech emotions recognition using feature fusion technique","volume":"83","author":"Paul","year":"2024","journal-title":"Multimedia Tools and Applications"},{"key":"10.7717\/peerj-cs.3629\/ref-42","first-page":"4752","article-title":"Zero-shot audio-visual compound expression recognition method based on emotion probability fusion","author":"Ryumina","year":"2024"},{"key":"10.7717\/peerj-cs.3629\/ref-43","first-page":"626","article-title":"An ML model for mental health monitoring using facial emotion detection and analyzing social media posts","volume-title":"Proceedings of the 38th ACM\/SIGAPP Symposium on Applied Computing, SAC 2023, Tallinn, Estonia, March 27\u201331, 2023","author":"Shafna","year":"2023"},{"issue":"8","key":"10.7717\/peerj-cs.3629\/ref-44","doi-asserted-by":"publisher","first-page":"e1992","DOI":"10.7717\/peerj-cs.1992","article-title":"Fine grain emotion analysis in Spanish using linguistic features and transformers","volume":"10","author":"Salmer\u00f3n-R\u00edos","year":"2024","journal-title":"PeerJ Computer Science"},{"issue":"31","key":"10.7717\/peerj-cs.3629\/ref-45","doi-asserted-by":"publisher","first-page":"22935","DOI":"10.1007\/s00521-022-06913-2","article-title":"Real-time emotional 
health detection using fine-tuned transfer networks with multimodal fusion","volume":"35","author":"Sharma","year":"2023","journal-title":"Neural Computing and Applications"},{"issue":"10","key":"10.7717\/peerj-cs.3629\/ref-46","doi-asserted-by":"publisher","first-page":"e2104","DOI":"10.7717\/peerj-cs.2104","article-title":"Deep learning-based dimensional emotion recognition for conversational agent-based cognitive behavioral therapy","volume":"10","author":"Striegl","year":"2024","journal-title":"PeerJ Computer Science"},{"issue":"1","key":"10.7717\/peerj-cs.3629\/ref-47","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/taffc.2024.3406710","article-title":"Dynamic causal disentanglement model for dialogue emotion detection","volume":"16","author":"Su","year":"2025","journal-title":"IEEE Transactions on Affective Computing"},{"key":"10.7717\/peerj-cs.3629\/ref-48","doi-asserted-by":"crossref","DOI":"10.21437\/Interspeech.2024-427","article-title":"MFSN: multi-perspective fusion search network for pre-training knowledge in speech emotion recognition","volume-title":"25th Annual Conference of the International Speech Communication Association, Interspeech 2024, Kos, Greece, September 1\u20135, 2024","author":"Sun","year":"2024"},{"key":"10.7717\/peerj-cs.3629\/ref-49","doi-asserted-by":"publisher","first-page":"443","DOI":"10.1007\/978-981-15-5285-4_44","volume-title":"Textual Feature Ensemble-Based Sarcasm Detection in Twitter Data","author":"Sundararajan","year":"2021"},{"issue":"4","key":"10.7717\/peerj-cs.3629\/ref-50","doi-asserted-by":"publisher","first-page":"5949","DOI":"10.1007\/s11042-022-13593-6","article-title":"Stress emotion recognition with discrepancy reduction using transfer learning","volume":"82","author":"Theerthagiri","year":"2023","journal-title":"Multimedia Tools and 
Applications"},{"issue":"3","key":"10.7717\/peerj-cs.3629\/ref-51","doi-asserted-by":"publisher","first-page":"28:1","DOI":"10.1145\/3714410","article-title":"DialoguePFM: prompt-based fusion model for emotion recognition in conversation","volume":"24","author":"Tian","year":"2025","journal-title":"ACM Transactions on Asian and Low-resource Language Information Processing"},{"key":"10.7717\/peerj-cs.3629\/ref-52","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/smap56125.2022.9942096","article-title":"Facially expressed emotions and hedonic liking on social media food marketing campaigns: comparing different types of products and media posts","volume-title":"2022 17th International Workshop on Semantic and Social Media Adaptation & Personalization (SMAP)","author":"Tzafilkou","year":"2022"},{"issue":"3","key":"10.7717\/peerj-cs.3629\/ref-53","doi-asserted-by":"publisher","first-page":"1832","DOI":"10.1109\/taffc.2022.3226559","article-title":"Multimodal emotion-cause pair extraction in conversations","volume":"14","author":"Wang","year":"2023","journal-title":"IEEE Transactions on Affective Computing"},{"issue":"3","key":"10.7717\/peerj-cs.3629\/ref-54","doi-asserted-by":"publisher","first-page":"107547","DOI":"10.1016\/j.knosys.2021.107547","article-title":"Empathetic response generation through graph-based multi-hop reasoning on emotional causality","volume":"233","author":"Wang","year":"2021","journal-title":"Knowledge-Based Systems"},{"issue":"Part A","key":"10.7717\/peerj-cs.3629\/ref-55","doi-asserted-by":"publisher","first-page":"121419","DOI":"10.1016\/j.eswa.2023.121419","article-title":"Learning facial expression and body gesture visual information for video emotion recognition","volume":"237","author":"Wei","year":"2024","journal-title":"Expert Systems with Applications"},{"key":"10.7717\/peerj-cs.3629\/ref-56","doi-asserted-by":"publisher","first-page":"141251\u2013141260","DOI":"10.1109\/access.2023.3342456","article-title":"DialoguePCN: 
perception and cognition network for emotion recognition in conversations","volume":"11","author":"Wu","year":"2023","journal-title":"IEEE Access"},{"issue":"9","key":"10.7717\/peerj-cs.3629\/ref-57","doi-asserted-by":"publisher","first-page":"104970","DOI":"10.1016\/j.bspc.2023.104970","article-title":"Depression recognition base on acoustic speech model of multi-task emotional stimulus","volume":"85","author":"Xing","year":"2023","journal-title":"Biomedical Signal Processing and Control"},{"issue":"2","key":"10.7717\/peerj-cs.3629\/ref-58","doi-asserted-by":"publisher","first-page":"21","DOI":"10.1016\/j.specom.2022.07.005","article-title":"GM-TCNet: gated multi-scale temporal convolutional network using emotion causality for speech emotion recognition","volume":"145","author":"Ye","year":"2022","journal-title":"Speech Communication"},{"issue":"8","key":"10.7717\/peerj-cs.3629\/ref-59","doi-asserted-by":"publisher","first-page":"5063","DOI":"10.1007\/s00500-023-07924-4","article-title":"Textual emotion recognition method based on ALBERT-BiLSTM model and SVM-NB classification","volume":"27","author":"Ye","year":"2023","journal-title":"Soft Computing"},{"key":"10.7717\/peerj-cs.3629\/ref-60","first-page":"5699","article-title":"Interactive multimodal framework with temporal modeling for emotion recognition","author":"Yu","year":"2025"},{"key":"10.7717\/peerj-cs.3629\/ref-61","doi-asserted-by":"publisher","DOI":"10.48550\/arXiv.2501.00778","article-title":"Decoding the flow: causemotion for emotional causality analysis in long-form conversations","author":"Zhang","year":"2025"},{"issue":"3","key":"10.7717\/peerj-cs.3629\/ref-62","doi-asserted-by":"publisher","first-page":"1359","DOI":"10.1007\/s00371-023-02854-6","article-title":"Emotion-wise feature interaction analysis-based visual emotion distribution learning","volume":"40","author":"Zhang","year":"2024","journal-title":"The Visual 
Computer"},{"key":"10.7717\/peerj-cs.3629\/ref-63","doi-asserted-by":"crossref","DOI":"10.21437\/Interspeech.2024-1735","article-title":"MFDR: multiple-stage fusion and dynamically refined network for multimodal emotion recognition","volume-title":"25th Annual Conference of the International Speech Communication Association, Interspeech 2024, Kos, Greece, September 1\u20135, 2024","author":"Zhao","year":"2024"}],"container-title":["PeerJ Computer Science"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/peerj.com\/articles\/cs-3629.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/peerj.com\/articles\/cs-3629.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/peerj.com\/articles\/cs-3629.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/peerj.com\/articles\/cs-3629.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,3]],"date-time":"2026-03-03T08:28:29Z","timestamp":1772526509000},"score":1,"resource":{"primary":{"URL":"https:\/\/peerj.com\/articles\/cs-3629"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,3]]},"references-count":63,"alternative-id":["10.7717\/peerj-cs.3629"],"URL":"https:\/\/doi.org\/10.7717\/peerj-cs.3629","archive":["CLOCKSS","LOCKSS","Portico"],"relation":{},"ISSN":["2376-5992"],"issn-type":[{"value":"2376-5992","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,3,3]]},"article-number":"e3629"}}