{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,12]],"date-time":"2025-12-12T13:08:11Z","timestamp":1765544891236,"version":"3.41.0"},"reference-count":23,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2024,1,15]],"date-time":"2024-01-15T00:00:00Z","timestamp":1705276800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Asian Low-Resour. Lang. Inf. Process."],"published-print":{"date-parts":[[2024,1,31]]},"abstract":"<jats:p>Social media platforms have seen increasing use of irony in recent years. Users can express ironic thoughts through audio, video, and images attached to text content. Irony may mock a situation, make a point, express frustration, or highlight a situation's absurdity; whatever the reason, its use on social media is likely to keep increasing. We show that attention networks can be enhanced by using syntactic information in conjunction with semantic exploration. Using learned embeddings, unsupervised learning encodes word order into a joint space. The active learning method uses the shared representation as a query to retrieve semantically similar sentences from a knowledge base, evaluating the entropy of an example's class prediction before adding instances. In this way, the algorithm can identify the instance with the maximum uncertainty and extract the most informative example from the training set. An irony network trained for each labelled record is used to train a classifier (model). The partially trained model and the original labelled data generate pseudo-labels for the unlabeled data. 
A classifier (attention network) then updates the pseudo-labels for the remaining data so that labels are predicted correctly. In an experimental evaluation on 1,021 annotated texts, the proposed model outperformed the baseline models, achieving an F1 score of 0.63 on the ironic class and 0.59 on the non-ironic class. We also found that the proposed model generalized well to new dataset instances.<\/jats:p>","DOI":"10.1145\/3580496","type":"journal-article","created":{"date-parts":[[2023,1,19]],"date-time":"2023-01-19T13:24:07Z","timestamp":1674134647000},"page":"1-19","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["Emotional Intelligence Attention Unsupervised Learning Using Lexicon Analysis for Irony-based Advertising"],"prefix":"10.1145","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3933-4273","authenticated-orcid":false,"given":"Usman","family":"Ahmed","sequence":"first","affiliation":[{"name":"Western Norway University of Applied Sciences, Norway"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8768-9709","authenticated-orcid":false,"given":"Jerry Chun-Wei","family":"Lin","sequence":"additional","affiliation":[{"name":"Western Norway University of Applied Sciences, Norway"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9851-4103","authenticated-orcid":false,"given":"Gautam","family":"Srivastava","sequence":"additional","affiliation":[{"name":"Brandon University, Canada and China Medical University, Taiwan and Lebanese American University, 
Lebanon"}]}],"member":"320","published-online":{"date-parts":[[2024,1,15]]},"reference":[{"doi-asserted-by":"publisher","key":"e_1_3_2_2_2","DOI":"10.18653\/v1\/S19-2218"},{"doi-asserted-by":"publisher","key":"e_1_3_2_3_2","DOI":"10.1109\/JBHI.2022.3204633"},{"doi-asserted-by":"publisher","key":"e_1_3_2_4_2","DOI":"10.3389\/fpsyg.2021.642347"},{"doi-asserted-by":"publisher","key":"e_1_3_2_5_2","DOI":"10.18653\/v1\/S18-1095"},{"doi-asserted-by":"publisher","key":"e_1_3_2_6_2","DOI":"10.1016\/j.tele.2021.101752"},{"doi-asserted-by":"publisher","key":"e_1_3_2_7_2","DOI":"10.4000\/books.aaccademia.1992"},{"key":"e_1_3_2_8_2","first-page":"1","article-title":"State of the art: A review of sentiment analysis based on sequential transfer learning","author":"Chan Jireh Yi-Le","year":"2022","unstructured":"Jireh Yi-Le Chan, Khean Thye Bea, Steven Mun Hong Leow, Seuk Wai Phoong, and Wai Khuen Cheng. 2022. State of the art: A review of sentiment analysis based on sequential transfer learning. Artif. Intell. Rev. (2022), 1\u201332.","journal-title":"Artif. Intell. Rev."},{"doi-asserted-by":"publisher","key":"e_1_3_2_9_2","DOI":"10.1017\/S0142716400004057"},{"doi-asserted-by":"publisher","key":"e_1_3_2_10_2","DOI":"10.18653\/v1\/P17-1038"},{"doi-asserted-by":"publisher","key":"e_1_3_2_11_2","DOI":"10.1016\/j.foodqual.2022.104530"},{"doi-asserted-by":"publisher","key":"e_1_3_2_12_2","DOI":"10.1145\/3336191.3371796"},{"doi-asserted-by":"publisher","key":"e_1_3_2_13_2","DOI":"10.1016\/j.ipm.2020.102262"},{"doi-asserted-by":"publisher","key":"e_1_3_2_14_2","DOI":"10.1515\/9781400841424"},{"doi-asserted-by":"publisher","key":"e_1_3_2_15_2","DOI":"10.1145\/3501401"},{"issue":"1","key":"e_1_3_2_16_2","first-page":"1","article-title":"Sentiment analysis in Hindi\u2013A survey on the state-of-the-art techniques","volume":"21","author":"Kulkarni Dhanashree S.","year":"2021","unstructured":"Dhanashree S. Kulkarni and Sunil S. Rodd. 2021. 
Sentiment analysis in Hindi\u2013A survey on the state-of-the-art techniques. Trans. Asian Low-Resour. Lang. Inf. Process. 21, 1 (2021), 1\u201346.","journal-title":"Trans. Asian Low-Resour. Lang. Inf. Process."},{"doi-asserted-by":"publisher","key":"e_1_3_2_17_2","DOI":"10.1016\/j.neucom.2021.09.057"},{"doi-asserted-by":"publisher","key":"e_1_3_2_18_2","DOI":"10.1007\/978-981-19-0475-2_13"},{"doi-asserted-by":"publisher","key":"e_1_3_2_19_2","DOI":"10.1145\/1571941.1572085"},{"doi-asserted-by":"publisher","key":"e_1_3_2_20_2","DOI":"10.1109\/TAI.2021.3054609"},{"doi-asserted-by":"publisher","key":"e_1_3_2_21_2","DOI":"10.3115\/v1\/D14-1162"},{"doi-asserted-by":"publisher","key":"e_1_3_2_22_2","DOI":"10.1016\/j.knosys.2016.05.035"},{"doi-asserted-by":"publisher","key":"e_1_3_2_23_2","DOI":"10.1016\/j.knosys.2022.108586"},{"doi-asserted-by":"publisher","key":"e_1_3_2_24_2","DOI":"10.1016\/j.ipm.2019.04.006"}],"container-title":["ACM Transactions on Asian and Low-Resource Language Information 
Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3580496","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3580496","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:37:42Z","timestamp":1750178262000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3580496"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,15]]},"references-count":23,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2024,1,31]]}},"alternative-id":["10.1145\/3580496"],"URL":"https:\/\/doi.org\/10.1145\/3580496","relation":{},"ISSN":["2375-4699","2375-4702"],"issn-type":[{"type":"print","value":"2375-4699"},{"type":"electronic","value":"2375-4702"}],"subject":[],"published":{"date-parts":[[2024,1,15]]},"assertion":[{"value":"2022-08-04","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-01-14","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2024-01-15","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}