{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,12]],"date-time":"2026-03-12T20:03:34Z","timestamp":1773345814510,"version":"3.50.1"},"reference-count":24,"publisher":"Sociedade Brasileira de Computa\u00e7\u00e3o - SBC","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"abstract":"<jats:p>A depress\u00e3o tem sido alvo de preocupa\u00e7\u00e3o na sociedade moderna e, conforme a OMS, pode se tornar a doen\u00e7a mais comum at\u00e9 2030. Antes restritos aos consult\u00f3rios, sentimentos com teor depressivo t\u00eam sido compartilhados em redes como a Reddit. Neste cen\u00e1rio, este trabalho prop\u00f5e uma abordagem para classifica\u00e7\u00e3o de postagens de redes sociais com sinais de depress\u00e3o, que se apoia na constru\u00e7\u00e3o de um corpus e de um modelo de linguagem pr\u00e9-treinado chamado DepreBERTBR, considerando o idioma portugu\u00eas brasileiro. O DepreBERTBR foi ajustado para a tarefa citada conforme tr\u00eas graus de depress\u00e3o: ausente, moderada ou grave. Os resultados demonstram que o DepreBERTBR \u00e9 competitivo com respeito a outros modelos de linguagem em portugu\u00eas.<\/jats:p>","DOI":"10.5753\/sbbd.2024.240807","type":"proceedings-article","created":{"date-parts":[[2024,10,28]],"date-time":"2024-10-28T19:31:33Z","timestamp":1730143893000},"page":"181-194","source":"Crossref","is-referenced-by-count":4,"title":["DepreBERTBR: Um Modelo de Linguagem Pr\u00e9-treinado para o Dom\u00ednio da Depress\u00e3o no Idioma Portugu\u00eas Brasileiro"],"prefix":"10.5753","author":[{"given":"Ayrton Douglas Rodrigues","family":"Herculano","sequence":"first","affiliation":[]},{"given":"Damires Yluska de Souza","family":"Souza","sequence":"additional","affiliation":[]},{"given":"Alex Sandro da Cunha","family":"Rego","sequence":"additional","affiliation":[]}],"member":"3742","published-online":{"date-parts":[[2024,10,14]]},"reference":[
{"key":"1","doi-asserted-by":"crossref","unstructured":"American Psychiatric Association (2013). Diagnostic and statistical manual of mental disorders: DSM-5, volume 5. American psychiatric association Washington, DC.","DOI":"10.1176\/appi.books.9780890425596"},
{"key":"2","doi-asserted-by":"crossref","unstructured":"Azam, F., Agro, M., Sami, M., Abro, M. H., and Dewani, A. (2021). Identifying depression among twitter users using sentiment analysis. In 2021 international conference on artificial intelligence (ICAI), pages 44\u201349. IEEE.","DOI":"10.1109\/ICAI52203.2021.9445271"},
{"key":"3","doi-asserted-by":"crossref","unstructured":"Cacheda, F., Fernandez, D., Novoa, F. J., Carneiro, V., et al. (2019). Early detection of depression: social network analysis and random forest techniques. Journal of medical Internet research, 21(6):e12554.","DOI":"10.2196\/12554"},
{"key":"4","unstructured":"Caseli, H. d. M. and Nunes, M. d. G. V. (2023). Processamento de linguagem natural: conceitos, t\u00e9cnicas e aplica\u00e7\u00f5es em portugu\u00eas. BPLN, 2a edition."},
{"key":"5","doi-asserted-by":"crossref","unstructured":"Costa, P. B., Pavan, M. C., Santos, W. R., Silva, S. C., and Paraboni, I. (2023). Bertabaporu: assessing a genre-specific language model for portuguese nlp. In Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, pages 217\u2013223.","DOI":"10.26615\/978-954-452-092-2_024"},
{"key":"6","doi-asserted-by":"crossref","unstructured":"da Silva Nascimento, R., Parreira, P., dos Santos, G. N., and Guedes, G. P. (2018). Identificando sinais de comportamento depressivo em redes sociais. In Anais do VII Brazilian Workshop on Social Network Analysis and Mining. SBC.","DOI":"10.5753\/brasnam.2018.3597"},
{"key":"7","unstructured":"de Psiquiatria, A. A. (2022). Manual Diagn\u00f3stico e Estat\u00edstico de Transtornos Mentais - DSM-5-TR. Artmed."},
{"key":"8","unstructured":"Devlin, J., Chang, M.-W., Lee, K., and Toutanova, K. (2019). BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4171\u20134186. NAACL."},
{"key":"9","unstructured":"Ji, S., Zhang, T., Ansari, L., Fu, J., Tiwari, P., and Cambria, E. (2022). MentalBERT: Publicly available pretrained language models for mental healthcare. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 7184\u20137190. European Language Resources Association."},
{"key":"10","doi-asserted-by":"crossref","unstructured":"Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H., and Kang, J. (2020). Biobert: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics, 36(4):1234\u20131240.","DOI":"10.1093\/bioinformatics\/btz682"},
{"key":"11","doi-asserted-by":"crossref","unstructured":"Liu, P., Yuan, W., Fu, J., Jiang, Z., Hayashi, H., and Neubig, G. (2023). Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Comput. Surv., 55(9).","DOI":"10.1145\/3560815"},
{"key":"12","unstructured":"Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L., and Stoyanov, V. (2019). Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692."},
{"key":"13","doi-asserted-by":"crossref","unstructured":"Low, D. M., Rumker, L., Talkar, T., Torous, J., Cecchi, G., and Ghosh, S. S. (2020). Natural language processing reveals vulnerable mental health support groups and heightened health anxiety on reddit during covid-19: Observational study. Journal of medical Internet research, 22(10):e22635.","DOI":"10.2196\/22635"},
{"key":"14","unstructured":"Oliveira, B. S. N., do R\u00eago, L. G. C., Peres, L., da Silva, T. L. C., and de Mac\u00eado, J. A. F. (2022). Processamento de linguagem natural via aprendizagem profunda. Sociedade Brasileira de Computa\u00e7\u00e3o."},
{"key":"15","unstructured":"OMS (2023). Organiza\u00e7\u00e3o mundial de sa\u00fade (oms): Desordem depressiva (depress\u00e3o). <a href=\"https:\/\/www.who.int\/news-room\/fact-sheets\/detail\/depression\" target=\"_blank\">[link]<\/a>. \u00daltimo Acesso 28 de Mai 2024."},
{"key":"16","doi-asserted-by":"crossref","unstructured":"Pan, S. J. and Yang, Q. (2009). A survey on transfer learning. IEEE Transactions on knowledge and data engineering, 22(10):1345\u20131359.","DOI":"10.1109\/TKDE.2009.191"},
{"key":"17","doi-asserted-by":"crossref","unstructured":"Pos\u0301wiata, R. and Pere\u0142kiewicz, M. (2022). Opi@ lt-edi-acl2022: Detecting signs of depression from social media text using roberta pre-trained language models. In Proceedings of the Second Workshop on Language Technology for Equality, Diversity and Inclusion, pages 276\u2013282.","DOI":"10.18653\/v1\/2022.ltedi-1.40"},
{"key":"18","doi-asserted-by":"crossref","unstructured":"Sampath, K. and Durairaj, T. (2022). Data set creation and empirical analysis for detecting signs of depression from social media postings. In International Conference on Computational Intelligence in Data Science, pages 136\u2013151. Springer.","DOI":"10.1007\/978-3-031-16364-7_11"},
{"key":"19","doi-asserted-by":"crossref","unstructured":"Santos, W. R. d., de Oliveira, R. L., and Paraboni, I. (2023). Setembrobr: a social media corpus for depression and anxiety disorder prediction. Language Resources and Evaluation, pages 1\u201328.","DOI":"10.1007\/s10579-022-09633-0"},
{"key":"20","doi-asserted-by":"crossref","unstructured":"Souza, F., Nogueira, R., and Lotufo, R. (2020). Bertimbau: pretrained bert models for brazilian portuguese. In Intelligent Systems: 9th Brazilian Conference, BRACIS 2020, Rio Grande, Brazil, October 20\u201323, 2020, Proceedings, Part I 9, pages 403\u2013417. Springer.","DOI":"10.1007\/978-3-030-61377-8_28"},
{"key":"21","doi-asserted-by":"crossref","unstructured":"Uban, A.-S., Chulvi, B., and Rosso, P. (2021). An emotion and cognitive based analysis of mental health disorders from social media data. Future Generation Computer Systems, 124:480\u2013494.","DOI":"10.1016\/j.future.2021.05.032"},
{"key":"22","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, \u0141., and Polosukhin, I. (2017). Attention is all you need. Advances in neural information processing systems, 30."},
{"key":"23","unstructured":"Wagner Filho, J. A., Wilkens, R., Idiart, M., and Villavicencio, A. (2018). The brwac corpus: a new open resource for brazilian portuguese. In Proceedings of the eleventh international conference on language resources and evaluation (LREC 2018)."},
{"key":"24","unstructured":"Wu, Y., Schuster, M., Chen, Z., Le, Q. V., Norouzi, M., Macherey, W., Krikun, M., Cao, Y., Gao, Q., Macherey, K., et al. (2016). Google\u2019s neural machine translation system: Bridging the gap between human and machine translation. arXiv preprint arXiv:1609.08144."}
],"event":{"name":"Simp\u00f3sio Brasileiro de Banco de Dados","location":"Brasil","acronym":"SBBD 2024","number":"39"},"container-title":["Anais do XXXIX Simp\u00f3sio Brasileiro de Banco de Dados (SBBD 2024)"],"original-title":[],"link":[{"URL":"https:\/\/sol.sbc.org.br\/index.php\/sbbd\/article\/download\/30692\/30495","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/sol.sbc.org.br\/index.php\/sbbd\/article\/download\/30692\/30495","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,28]],"date-time":"2024-10-28T19:38:34Z","timestamp":1730144314000},"score":1,"resource":{"primary":{"URL":"https:\/\/sol.sbc.org.br\/index.php\/sbbd\/article\/view\/30692"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,10,14]]},"references-count":24,"URL":"https:\/\/doi.org\/10.5753\/sbbd.2024.240807","relation":{},"subject":[],"published":{"date-parts":[[2024,10,14]]}}}