{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,3]],"date-time":"2026-02-03T21:39:11Z","timestamp":1770154751766,"version":"3.49.0"},"publisher-location":"New York, NY, USA","reference-count":21,"publisher":"ACM","license":[{"start":{"date-parts":[[2023,7,14]],"date-time":"2023-07-14T00:00:00Z","timestamp":1689292800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,7,14]]},"DOI":"10.1145\/3614008.3614055","type":"proceedings-article","created":{"date-parts":[[2023,10,17]],"date-time":"2023-10-17T18:19:52Z","timestamp":1697566792000},"page":"308-314","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":4,"title":["Multimodal Sentiment Analysis Method Based on Multi-task Learning"],"prefix":"10.1145","author":[{"ORCID":"https:\/\/orcid.org\/0009-0009-6782-0103","authenticated-orcid":false,"given":"Jie","family":"Li","sequence":"first","affiliation":[{"name":"GNN Team, China Unicom Research Institute, China"}]},{"ORCID":"https:\/\/orcid.org\/0009-0005-1368-8101","authenticated-orcid":false,"given":"Hongwei","family":"Zhao","sequence":"additional","affiliation":[{"name":"Tianjin University of Science and Technology, China"}]}],"member":"320","published-online":{"date-parts":[[2023,10,17]]},"reference":[{"key":"e_1_3_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1002\/widm.1253"},{"issue":"2","key":"e_1_3_2_1_2_1","first-page":"38","article-title":"Multimodal sentiment analysis: a survey and comparison","volume":"10","author":"Kaur","year":"2019","unstructured":"Kaur , Ramandeep, and Sandeep Kautish . \" Multimodal sentiment analysis: a survey and comparison . \" International Journal of Service Science. 
Management, Engineering, and Technology (IJSSMET) 10.2 (2019): 38-58.","journal-title":"International Journal of Service Science. Management, Engineering, and Technology (IJSSMET)"},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"crossref","unstructured":"Williams, Jennifer. \"Recognizing emotions in video using multimodal dnn feature fusion.\" Proceedings of Grand Challenge and Workshop on Human Multimodal Language (Challenge-HML). 2018.","DOI":"10.18653\/v1\/W18-3302"},{"key":"e_1_3_2_1_4_1","volume-title":"multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos.\" arXiv preprint arXiv. 1606.06259","author":"Zadeh","year":"2016","unstructured":"Zadeh, Amir. \"Mosi: multimodal corpus of sentiment intensity and subjectivity analysis in online opinion videos.\" arXiv preprint arXiv:1606.06259 (2016)."},{"key":"e_1_3_2_1_5_1","doi-asserted-by":"crossref","unstructured":"Zadeh, Amir. \"Tensor fusion network for multimodal sentiment analysis.\" arXiv preprint arXiv:1707.07250 (2017).","DOI":"10.18653\/v1\/D17-1115"},{"key":"e_1_3_2_1_6_1","volume-title":"IEEE","author":"Poria","year":"2016","unstructured":"Poria, Soujanya. \"Convolutional MKL based multimodal emotion recognition and sentiment analysis.\" 2016 IEEE 16th international conference on data mining (ICDM).
IEEE, 2016."},{"key":"e_1_3_2_1_7_1","volume-title":"learning robust joint representations by cyclic translations between modalities.\" Proceedings of the AAAI Conference on Artificial Intelligence","author":"Pham","unstructured":"Pham, Hai. \"Found in translation: learning robust joint representations by cyclic translations between modalities.\" Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01. 2019."},{"key":"e_1_3_2_1_8_1","volume-title":"an adversarial representation learning and graph fusion network for multimodal fusion.\" Proceedings of the AAAI Conference on Artificial Intelligence","author":"Mai","unstructured":"Mai, Sijie, Haifeng Hu, and Songlong Xing. \"Modality to modality translation: an adversarial representation learning and graph fusion network for multimodal fusion.\" Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, no. 01. 2020."},{"key":"e_1_3_2_1_9_1","volume-title":"Dynamically adjusting word representations using nonverbal behaviors.\" Proceedings of the AAAI Conference on Artificial Intelligence","author":"Wang","unstructured":"Wang, Yansen. \"Words can shift: Dynamically adjusting word representations using nonverbal behaviors.\" Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, no. 01. 2019.
"},{"key":"e_1_3_2_1_10_1","unstructured":"Chung, Junyoung. \"Empirical evaluation of gated recurrent neural networks on sequence modeling.\" arXiv preprint arXiv:1412.3555 (2014)."},{"key":"e_1_3_2_1_11_1","volume-title":"A unified architecture for natural language processing: deep neural networks with multitask learning[C]\/\/Proceedings of the 25th international conference on Machine learning. 2008: 160-167","author":"Collobert R","unstructured":"Collobert R, Weston J. A unified architecture for natural language processing: deep neural networks with multitask learning[C]\/\/Proceedings of the 25th international conference on Machine learning. 2008: 160-167."},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"crossref","unstructured":"Ma J, Zhao Z, Yi X. Modeling task relationships in multi-task learning with multi-gate mixture-of-experts[C]\/\/Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery & data mining. 2018: 1930-1939.","DOI":"10.1145\/3219819.3220007"},{"key":"e_1_3_2_1_13_1","doi-asserted-by":"crossref","unstructured":"Misra I, Shrivastava A, Gupta A. Cross-stitch networks for multi-task learning[C]\/\/Proceedings of the IEEE conference on computer vision and pattern recognition.
2016: 3994-4003.","DOI":"10.1109\/CVPR.2016.433"},{"key":"e_1_3_2_1_14_1","first-page":"234","article-title":"Nrc emotion lexicon","volume":"2","author":"Mohammad M.","year":"2013","unstructured":"Mohammad, Saif M., and Peter D. Turney. \"Nrc emotion lexicon.\" National Research Council, Canada 2 (2013): 234.","journal-title":"National Research Council, Canada"},{"key":"e_1_3_2_1_15_1","volume-title":"Cmu-mosei dataset and interpretable dynamic fusion graph.\" Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers)","author":"Zadeh","year":"2018","unstructured":"Zadeh, Amir, and Paul Pu. \"Multimodal language analysis in the wild: Cmu-mosei dataset and interpretable dynamic fusion graph.\" Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Long Papers). 2018."},{"key":"e_1_3_2_1_16_1","doi-asserted-by":"crossref","unstructured":"Zadeh, Amir. \"Tensor fusion network for multimodal sentiment analysis.\" arXiv preprint arXiv:1707.07250 (2017).","DOI":"10.18653\/v1\/D17-1115"},{"key":"e_1_3_2_1_17_1","volume-title":"Association for Meeting.","volume":"2020","author":"Rahman","year":"2020","unstructured":"Rahman, Wasifur. \"Integrating multimodal information in large pretrained transformers.\" Proceedings of the conference.
Association for Meeting. vol. 2020. NIH Public Access, 2020."},{"key":"e_1_3_2_1_18_1","volume-title":"Efficient low-rank multimodal fusion with modality-specific factors[J]. arXiv preprint arXiv:1806.00064","author":"Liu Z","year":"2018","unstructured":"Liu Z, Shen Y, Lakshminarasimhan V B. Efficient low-rank multimodal fusion with modality-specific factors[J]. arXiv preprint arXiv:1806.00064, 2018."},{"key":"e_1_3_2_1_19_1","volume-title":"Memory fusion network for multi-view sequential learning[C]\/\/Proceedings of the AAAI conference on artificial intelligence","author":"Zadeh A","year":"2018","unstructured":"Zadeh A, Liang P P, Mazumder N. Memory fusion network for multi-view sequential learning[C]\/\/Proceedings of the AAAI conference on artificial intelligence. 2018, 32(1)."},{"key":"e_1_3_2_1_20_1","volume-title":"Association Meeting. NIH Public Access, 2019","author":"Tsai Y H H","year":"2019","unstructured":"Tsai Y H H, Bai S, Liang P P. Multimodal transformer for unaligned multimodal language sequences[C]\/\/Proceedings of the conference. Association Meeting. NIH Public Access, 2019: 6558."},{"key":"e_1_3_2_1_21_1","volume-title":"Poria S.
Misa: Modality-invariant and-specific representations for multimodal sentiment analysis[C]\/\/Proceedings of the 28th ACM International Conference on Multimedia. 2020: 1122-1131","author":"Hazarika D","unstructured":"Hazarika D, Zimmermann R, Poria S. Misa: Modality-invariant and-specific representations for multimodal sentiment analysis[C]\/\/Proceedings of the 28th ACM International Conference on Multimedia. 2020: 1122-1131."}],"event":{"name":"SPML 2023: 2023 6th International Conference on Signal Processing and Machine Learning","location":"Tianjin China","acronym":"SPML 2023"},"container-title":["2023 6th International Conference on Signal Processing and Machine Learning (SPML)"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3614008.3614055","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3614008.3614055","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:37:27Z","timestamp":1750178247000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3614008.3614055"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,14]]},"references-count":21,"alternative-id":["10.1145\/3614008.3614055","10.1145\/3614008"],"URL":"https:\/\/doi.org\/10.1145\/3614008.3614055","relation":{},"subject":[],"published":{"date-parts":[[2023,7,14]]},"assertion":[{"value":"2023-10-17","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}