{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,20]],"date-time":"2026-03-20T15:50:41Z","timestamp":1774021841174,"version":"3.50.1"},"reference-count":71,"publisher":"MDPI AG","issue":"8","license":[{"start":{"date-parts":[[2024,8,2]],"date-time":"2024-08-02T00:00:00Z","timestamp":1722556800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["BDCC"],"abstract":"<jats:p>Predicting the directions of financial markets has been performed using a variety of approaches, and the large volume of unstructured data generated by traders and other stakeholders on social media microblog platforms provides unique opportunities for analyzing financial markets using additional perspectives. Pretrained large language models (LLMs) have demonstrated very good performance on a variety of sentiment analysis tasks in different domains. However, it is known that sentiment analysis is a very domain-dependent NLP task that requires knowledge of the domain ontology, and this is particularly the case with the financial domain, which uses its own unique vocabulary. Recent developments in NLP and deep learning including LLMs have made it possible to generate actionable financial sentiments using multiple sources including financial news, company fundamentals, technical indicators, as well social media microblogs posted on platforms such as StockTwits and X (formerly Twitter). We developed a financial social media sentiment analyzer (FinSoSent), which is a domain-specific large language model for the financial domain that was pretrained on financial news articles and fine-tuned and tested using several financial social media corpora. We conducted a large number of experiments using different learning rates, epochs, and batch sizes to yield the best performing model. 
Our model outperforms current state-of-the-art FSA models based on over 860 experiments, demonstrating the efficacy and effectiveness of FinSoSent. We also conducted experiments using ensemble models comprising FinSoSent and the other current state-of-the-art FSA models used in this research, and a slight performance improvement was obtained based on majority voting. Based on the results obtained across all models in these experiments, the significance of this study is that it highlights the fact that, despite the recent advances of LLMs, sentiment analysis even in domain-specific contexts remains a difficult research problem.<\/jats:p>","DOI":"10.3390\/bdcc8080087","type":"journal-article","created":{"date-parts":[[2024,8,2]],"date-time":"2024-08-02T16:14:58Z","timestamp":1722615298000},"page":"87","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":19,"title":["FinSoSent: Advancing Financial Market Sentiment Analysis through Pretrained Large Language Models"],"prefix":"10.3390","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6160-4391","authenticated-orcid":false,"given":"Josiel","family":"Delgadillo","sequence":"first","affiliation":[{"name":"School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA 19104, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3618-2101","authenticated-orcid":false,"given":"Johnson","family":"Kinyua","sequence":"additional","affiliation":[{"name":"College of Information Sciences and Technology, Pennsylvania State University, Philadelphia, PA 19104, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8960-4844","authenticated-orcid":false,"given":"Charles","family":"Mutigwe","sequence":"additional","affiliation":[{"name":"College of Business, Western New England University, Springfield, MA 01119, USA"}]}],"member":"1968","published-online":{"date-parts":[[2024,8,2]]},"reference":[{"key":"ref_1","unstructured":"Financial Terms 
Dictionary (2021, November 30). Investopedia. Available online: https:\/\/www.investopedia.com\/financial-term-dictionary-4769738."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"55","DOI":"10.2469\/faj.v21.n5.55","article-title":"Random Walks in Stock Market Prices","volume":"21","author":"Fama","year":"1965","journal-title":"Financ. Anal. J."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"383","DOI":"10.2307\/2325486","article-title":"Efficient Capital Markets: A Review of Theory and Empirical Work","volume":"25","author":"Fama","year":"1970","journal-title":"J. Financ."},{"key":"ref_4","unstructured":"(2021, November 30). Twitter, Inc. Available online: https:\/\/twitter.com\/."},{"key":"ref_5","unstructured":"(2021, November 30). StockTwits, Inc. Available online: https:\/\/stocktwits.com\/."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Wang, G., Wang, T., Wang, B., Sambasivan, D., Zhang, Z., Zheng, H., and Zhao, B.Y. (2015, January 14\u201318). Crowds on Wall Street: Extracting value from collaborative investing platforms. Proceedings of the 18th ACM Conference on Computer Supported Cooperative Work & Social Computing, Vancouver, BC, Canada.","DOI":"10.1145\/2675133.2675144"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Sohangir, S., Petty, N., and Wang, D. (2018, January 12). Financial sentiment lexicon analysis. Proceedings of the 12th IEEE International Conference on Semantic Computing (ICSC), Laguna Hills, CA, USA.","DOI":"10.1109\/ICSC.2018.00052"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1186\/s40537-017-0111-6","article-title":"Big data: Deep learning for financial sentiment analysis","volume":"5","author":"Sohangir","year":"2018","journal-title":"J. 
Big Data"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"e1253","DOI":"10.1002\/widm.1253","article-title":"Deep learning for sentiment analysis: A survey","volume":"8","author":"Zhang","year":"2018","journal-title":"Wiley Interdiscip. Rev. Data Min. Knowl. Discov."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Zhao, L., Li, L., and Zheng, X. (2020). A BERT Based Sentiment Analysis and Key Entity Detection Approach for Online Financial Texts. arXiv.","DOI":"10.1109\/CSCWD49262.2021.9437616"},{"key":"ref_11","unstructured":"Cui, X., Lam, D., and Verma, A. (2021, November 30). Embedded Value in Bloomberg News and Social Sentiment Data; Bloomberg, Technical Report. Available online: https:\/\/developer.twitter.com\/content\/dam\/developer-twitter\/pdfs-and-files\/Bloomberg-Twitter-Data-Research-Report.pdf."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"1139","DOI":"10.1111\/j.1540-6261.2007.01232.x","article-title":"Giving Content to Investor Sentiment: The Role of Media in the Stock Market","volume":"62","author":"Tetlock","year":"2007","journal-title":"J. Financ."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"1437","DOI":"10.1111\/j.1540-6261.2008.01362.x","article-title":"More Than Words: Quantifying Language to Measure Firms\u2019 Fundamentals","volume":"63","author":"Tetlock","year":"2008","journal-title":"J. Financ."},{"key":"ref_14","unstructured":"Delgadillo, J., Kinyua, J.D., and Mutigwe, C. (2022, January 15\u201316). A BERT-based Model for Financial Social Media Sentiment Analysis. Proceedings of the International Conference on Applications of Sentiment Analysis (ICASA 2022), Cairo, Egypt."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/3185045","article-title":"The State-of-the-Art in Twitter Sentiment Analysis: A Review and Benchmark Evaluation","volume":"9","author":"Zimbra","year":"2018","journal-title":"ACM Trans. Manag. Inf. 
Syst."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Sun, C., Shrivastava, A., Singh, S., and Gupta, A. (2017, January 22\u201329). Revisiting Unreasonable Effectiveness of Data in Deep Learning Era. Proceedings of the 2017 IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.97"},{"key":"ref_17","unstructured":"Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019, January 2\u20137). BERT: Pretraining of Deep Bidirectional Transformers for Language Understanding. Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Minneapolis, MN, USA."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Howard, J., and Ruder, S. (2018, January 15\u201320). Universal Language Model Fine-tuning for Text Classification. Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, Melbourne, Australia.","DOI":"10.18653\/v1\/P18-1031"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Peters, M.E., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. (2018, January 1\u20136). Deep Contextualized Word Representations. Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, New Orleans, LA, USA.","DOI":"10.18653\/v1\/N18-1202"},{"key":"ref_20","first-page":"8","article-title":"Language models are unsupervised multitask learners","volume":"1","author":"Radford","year":"2019","journal-title":"OpenAI Blog"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Beltagy, I., Lo, K., and Cohan, A. (2019, January 3\u20137). SciBERT: A Pretrained Language Model for Scientific Text. 
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong Kong, China.","DOI":"10.18653\/v1\/D19-1371"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"1234","DOI":"10.1093\/bioinformatics\/btz682","article-title":"BioBERT: A pretrained biomedical language representation model for biomedical text mining","volume":"36","author":"Lee","year":"2019","journal-title":"Bioinformatics"},{"key":"ref_23","unstructured":"Huang, K., Altosaar, J., and Ranganath, R. (2019). ClinicalBERT: Modeling clinical notes and predicting hospital readmission. arXiv."},{"key":"ref_24","first-page":"1","article-title":"Financial sentiment analysis using machine learning techniques","volume":"3","author":"Agaian","year":"2017","journal-title":"Int. J. Invest. Manag. Financ. Innov."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Man, X., Luo, T., and Lin, J. (2019, January 6\u20139). Financial Sentiment Analysis (FSA): A Survey. Proceedings of the IEEE International Conference on Industrial Cyber Physical Systems (ICPS), Taipei, Taiwan.","DOI":"10.1109\/ICPHYS.2019.8780312"},{"key":"ref_26","unstructured":"Yang, S., Rosenfeld, J., and Makutonin, J. (2018). Financial aspect-based sentiment analysis using deep representations. arXiv, Available online: http:\/\/arxiv.org\/abs\/1808.07931."},{"key":"ref_27","unstructured":"Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). RoBERTa: A Robustly Optimized BERT Pretraining Approach. arXiv, Available online: http:\/\/arxiv.org\/abs\/1907.11692."},{"key":"ref_28","unstructured":"Araci, D. (2019). FinBERT: Financial Sentiment Analysis with Pretrained Language Models. 
arXiv, Available online: http:\/\/arxiv.org\/abs\/1908.10063."},{"key":"ref_29","unstructured":"Araci, D.T., and Genc, Z. (2022, July 01). FinBERT: Financial Sentiment Analysis with BERT. Prosus AI Tech Blog. Available online: https:\/\/medium.com\/prosus-ai-tech-blog\/finbert-financial-sentiment-analysis-with-bert-b277a3607101."},{"key":"ref_30","unstructured":"Reuters Corpora (RCV1, RCV2, TRC2) (2023, April 06). National Institute of Standards and Technology, Available online: https:\/\/trec.nist.gov\/data\/reuters\/reuters.html."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"782","DOI":"10.1002\/asi.23062","article-title":"Good debt or bad debt: Detecting semantic orientations in economic texts","volume":"65","author":"Malo","year":"2014","journal-title":"J. Assoc. Inf. Sci. Technol."},{"key":"ref_32","unstructured":"Desola, V., Hanna, K., and Nonis, P. (2019). FinBERT: Pretrained Model on SEC Filings for Financial Natural Language Tasks, University of California. Technical Report."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Liu, Z., Huang, D., Huang, K., Li, Z., and Zhao, J. (2021, January 7\u201315). FinBERT: A Pretrained Financial Language Representation Model for Financial Text Mining. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence (IJCAI-20), Virtual.","DOI":"10.24963\/ijcai.2020\/622"},{"key":"ref_34","unstructured":"(2021, November 30). Common Crawl. Available online: https:\/\/commoncrawl.org\/."},{"key":"ref_35","unstructured":"(2021, November 30). FinancialWeb. Available online: https:\/\/www.finweb.com\/."},{"key":"ref_36","unstructured":"(2021, November 30). Yahoo! Finance. Available online: https:\/\/finance.yahoo.com\/."},{"key":"ref_37","unstructured":"(2021, November 30). Reddit. Available online: https:\/\/www.reddit.com\/."},{"key":"ref_38","unstructured":"(2021, November 30). Financial Opinion Mining and Question Answering. 
Available online: https:\/\/sites.google.com\/view\/fiqa\/."},{"key":"ref_39","unstructured":"(2021, November 30). The First Workshop on Financial Technology and Natural Language Processing (FinNLP) with a Shared Task for Sentence Boundary Detection in PDF Noisy Text in the Financial Domain (FinSBD). [n. d.]. Available online: https:\/\/sites.google.com\/nlg.csie.ntu.edu.tw\/finnlp\/."},{"key":"ref_40","unstructured":"Yang, Y., Uy, M.C.S., and Huang, A. (2020). FinBERT: A Pretrained Language Model for Financial Communications. arXiv, Available online: https:\/\/arxiv.org\/abs\/2006.08097."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"2151","DOI":"10.2308\/accr-50833","article-title":"Evidence on the Information Content of Text in Analyst Reports","volume":"89","author":"Huang","year":"2014","journal-title":"Account. Rev."},{"key":"ref_42","first-page":"100171","article-title":"PyFin-sentiment: Towards a machine-learning-based model for deriving sentiment from financial tweets","volume":"3","author":"Wilksch","year":"2023","journal-title":"Int. J. Inf. Manag. Data Insights"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Hutto, C., and Gilbert, E. (2014, January 1\u20134). VADER: A Parsimonious Rule Based Model for Sentiment Analysis of Social Media Text. Proceedings of the International AAAI Conference on Web and Social Media, Ann Arbor, MI, USA.","DOI":"10.1609\/icwsm.v8i1.14550"},{"key":"ref_44","unstructured":"Chen, C.-C., Huang, H.-H., and Chen, H.-H. (2018, January 7\u201312). NTUSD-Fin: A market sentiment dictionary for financial social media data applications. Proceedings of the 1st Financial Narrative Processing Workshop (FNP 2018), Miyazaki, Japan."},{"key":"ref_45","unstructured":"Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R., and Le, Q.V. (2019, January 8\u201314). XLNet: Generalized Autoregressive Pretraining for Language Understanding. 
Proceedings of the 33rd International Conference on Neural Information Processing Systems (NeurIPS 2019), Vancouver, BC, Canada."},{"key":"ref_46","unstructured":"Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P., and Soricut, R. (2019). ALBERT: A Lite BERT for Self-Supervised Learning of Language Representations. arXiv, Available online: http:\/\/arxiv.org\/abs\/1909.11942."},{"key":"ref_47","unstructured":"Sanh, V., Debut, L., Chaumond, J., and Wolf, T. (2019). DistilBERT, a Distilled Version of BERT: Smaller, Faster, Cheaper and Lighter. arXiv, Available online: http:\/\/arxiv.org\/abs\/1910.01108."},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V., and Zettlemoyer, L. (2019). BART: Denoising Sequence-to-Sequence Pretraining for Natural Language Generation, Translation, and Comprehension. arXiv, Available online: http:\/\/arxiv.org\/abs\/1910.13461.","DOI":"10.18653\/v1\/2020.acl-main.703"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"131662","DOI":"10.1109\/ACCESS.2020.3009626","article-title":"Evaluation of Sentiment Analysis in Finance: From Lexicons to Transformers","volume":"8","author":"Mishev","year":"2020","journal-title":"IEEE Access"},{"key":"ref_50","unstructured":"Bartunov, O., and Sigaev, T. (2007). Full-Text Search in PostgreSQL\u2014Gentle Introduction, Moscow University. Technical Report."},{"key":"ref_51","unstructured":"Gaillat, T., Zarrouk, M., Freitas, A., and Davis, B. (2018, January 7\u201312). The SSIX Corpora: Three Gold Standard Corpora for Sentiment Analysis in English, Spanish and German Financial Microblogs. Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Miyazaki, Japan."},{"key":"ref_52","unstructured":"Chen, C.-C., Huang, H.-H., and Chen, H.-H. (2020, January 11\u201316). Issues and Perspectives from 10,000 Annotated Financial Social Media Data. 
Proceedings of the 12th Language Resources and Evaluation Conference, Marseille, France."},{"key":"ref_53","unstructured":"(2021, November 30). SemEval-2017 Task 5: Fine-Grained Sentiment Analysis on Financial Microblogs and News. Available online: https:\/\/alt.qcri.org\/semeval2017\/task5\/."},{"key":"ref_54","unstructured":"Daudert, T. (2020). A Multi-Source Entity-Level Sentiment Corpus for the Financial Domain: The Fin-Lin Corpus. arXiv, Available online: http:\/\/arxiv.org\/abs\/2003.04073."},{"key":"ref_55","unstructured":"Saif, H., Fern\u00e1ndez, M., He, Y., and Alani, H. (2013, January 3). Evaluation datasets for Twitter sentiment analysis: A survey and a new dataset, the STS-Gold. Proceedings of the 1st International Workshop on Emotion and Sentiment in Social and Expressive Media: Approaches and Perspectives from AI (ESSEM 2013), Turin, Italy."},{"key":"ref_56","unstructured":"Taborda, B., de Almeida, A., Dias, J.C., Batista, F., and Ribeiro, R. (2021). Stock Market Tweets Data. IEEE Dataport."},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Balaji, P., Nagaraju, O., and Haritha, D. (2017, January 23\u201325). Levels of Sentiment Analysis and its Challenges: A Literature Review. Proceedings of the International Conference of Big Data Analytics and Computational Intelligence (ICBDAC), Chirala, India.","DOI":"10.1109\/ICBDACI.2017.8070879"},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"321","DOI":"10.1613\/jair.953","article-title":"SMOTE: Synthetic Minority Over-sampling Technique","volume":"16","author":"Chawla","year":"2002","journal-title":"J. Artif. Intell. Res."},{"key":"ref_59","unstructured":"He, H., Bai, Y., Garcia, E.A., and Li, S. (2008, January 1\u20138). ADASYN: Adaptive synthetic sampling approach for imbalance learning. Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IJCNN 2008), Hong Kong, China."},{"key":"ref_60","doi-asserted-by":"crossref","unstructured":"Li, X., Wang, X., and Liu, H. 
(2021, January 14\u201316). Research on fine-tuning strategy of sentiment analysis model based on BERT. Proceedings of the 2021 IEEE 3rd International Conference on Communications, Information System and Computer Engineering (CISCE), Beijing, China.","DOI":"10.1109\/CISCE52179.2021.9445882"},{"key":"ref_61","unstructured":"Popel, M., and Bojar, O. (2018). Training Tips for the Transformer Model. arXiv, Available online: https:\/\/arxiv.org\/pdf\/1804.00247.pdf."},{"key":"ref_62","unstructured":"Amazon Web Services (2022, June 25). Amazon Comprehend: Features. Available online: https:\/\/aws.amazon.com\/comprehend\/features."},{"key":"ref_63","unstructured":"Amazon Web Services (2022, June 25). Amazon Comprehend Developer Guide. Available online: https:\/\/docs.aws.amazon.com\/comprehend\/latest\/dg\/comprehend-dg.pdf.how-sentiment."},{"key":"ref_64","unstructured":"(2024, March 15). OpenAI, GPT-3.5 Turbo. Available online: https:\/\/platform.openai.com\/docs\/models\/gpt-3-5-turbo."},{"key":"ref_65","unstructured":"(2022, June 25). IBM Cloud API Docs: Natural Language Understanding. Available online: https:\/\/cloud.ibm.com\/apidocs\/natural-language-understanding?code=python."},{"key":"ref_66","unstructured":"IBM (2022, June 25). Watson Natural Language Understanding: Features. Available online: https:\/\/www.ibm.com\/cloud\/watson-natural-language-understanding\/details."},{"key":"ref_67","unstructured":"(2022, June 25). SentiStrength. Available online: http:\/\/sentistrength.wlv.ac.uk\/."},{"key":"ref_68","unstructured":"Hoang, M., Bihorac, O.A., and Rouces, J. (October, January 30). Aspect-Based Sentiment Analysis using BERT. Proceedings of the 22nd Nordic Conference on Computational Linguistics, Turku, Finland. Available online: https:\/\/aclanthology.org\/W19-6120\/."},{"key":"ref_69","unstructured":"Goertzel, B. (2022, June 25). Generative AI vs. AGI: The Cognitive Strengths and Weaknesses of Modern LLMs. 
Available online: https:\/\/arxiv.org\/pdf\/2309.10371.pdf."},{"key":"ref_70","unstructured":"Rahutomo, F., Kitasuka, T., and Aritsugi, M. (2012, January 29\u201330). Semantic Cosine Similarity. Proceedings of the 7th International Student Conference on Advanced Science and Technology, Seoul, Republic of Korea. Available online: https:\/\/www.researchgate.net\/publication\/262525676_Semantic_Cosine_Similarity."},{"key":"ref_71","doi-asserted-by":"crossref","unstructured":"Nora Raju, T., Rahana, P.A., Moncy, R., Ajay, S., and Nambiar, S.K. (2022, January 23\u201325). Sentence Similarity\u2014A State of Art Approaches. Proceedings of the International Conference on Computing, Communication, Security and Intelligent Systems (IC3SIS), Kochi, India.","DOI":"10.1109\/IC3SIS54991.2022.9885721"}],"container-title":["Big Data and Cognitive Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-2289\/8\/8\/87\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T15:29:16Z","timestamp":1760110156000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-2289\/8\/8\/87"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,8,2]]},"references-count":71,"journal-issue":{"issue":"8","published-online":{"date-parts":[[2024,8]]}},"alternative-id":["bdcc8080087"],"URL":"https:\/\/doi.org\/10.3390\/bdcc8080087","relation":{},"ISSN":["2504-2289"],"issn-type":[{"value":"2504-2289","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,8,2]]}}}