{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,12]],"date-time":"2025-11-12T21:06:48Z","timestamp":1762981608324,"version":"3.41.0"},"reference-count":30,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2023,9,28]],"date-time":"2023-09-28T00:00:00Z","timestamp":1695859200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["J. Data and Information Quality"],"published-print":{"date-parts":[[2023,9,30]]},"abstract":"<jats:p>As mobile networks and APPs are developed, user-generated content (UGC), which includes multi-source heterogeneous data like user reviews, tags, scores, images, and videos, has become an essential basis for improving the quality of personalized services. Due to the multi-source heterogeneous nature of the data, big data fusion offers both promise and drawbacks. With the rise of mobile networks and applications, UGC, which includes multi-source heterogeneous data including ratings, marks, scores, images, and videos, has gained importance. This information is very important for improving the calibre of customized services. The key to the application's success is representational learning of fusing and vectorization on the multi-source heterogeneous UGC. Multi-source text fusion and representation learning have become the key to its application. In this regard, a fusion representation learning for multi-source text and image is proposed. The convolutional fusion technique, in contrast to splicing and fusion, may take into consideration the varied data characteristics in each size. This research proposes a new data feature fusion strategy based on the convolution operation, which was inspired by the convolutional neural network. 
The Doc2vec and LDA models are used to obtain vectorized representations of multi-source text, which a deep convolutional network then fuses. Finally, the proposed algorithm is applied to an Amazon product dataset containing UGC, and the classification accuracy of the vectorized item representations demonstrates its feasibility and effectiveness.<\/jats:p>","DOI":"10.1145\/3603712","type":"journal-article","created":{"date-parts":[[2023,6,14]],"date-time":"2023-06-14T11:26:05Z","timestamp":1686741965000},"page":"1-21","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":1,"title":["Fusion-based Representation Learning Model for Multimode User-generated Social Network Content"],"prefix":"10.1145","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4997-0203","authenticated-orcid":false,"given":"R. John","family":"Martin","sequence":"first","affiliation":[{"name":"Faculty of Computer Science and Information Technology, Jazan University, KSA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-1928-099X","authenticated-orcid":false,"given":"Rajvardhan","family":"Oak","sequence":"additional","affiliation":[{"name":"Department of Computer Science, University of California Davis, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9228-6071","authenticated-orcid":false,"given":"Mukesh","family":"Soni","sequence":"additional","affiliation":[{"name":"Department of CSE, University Centre for Research &amp; Development, Chandigarh University, Mohali, Punjab, India"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-5628-0977","authenticated-orcid":false,"given":"V.","family":"Mahalakshmi","sequence":"additional","affiliation":[{"name":"Department of Computer Science, College of Computer Science &amp; Information Technology, Jazan University, Jazan, Saudi Arabia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0868-3139","authenticated-orcid":false,"given":"Arsalan 
Muhammad","family":"Soomar","sequence":"additional","affiliation":[{"name":"Department of Automation, Electronics and electrical engineering, Gdansk University of Technology, Poland"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8743-6944","authenticated-orcid":false,"given":"Anjali","family":"Joshi","sequence":"additional","affiliation":[{"name":"Marathwada Mitra Mandal's Institute of Technology, Lohagaon Pune, India"}]}],"member":"320","published-online":{"date-parts":[[2023,9,28]]},"reference":[{"key":"e_1_3_1_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/TEM.2020.3021698"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2018.2808349"},{"key":"e_1_3_1_4_2","doi-asserted-by":"publisher","DOI":"10.1109\/ojsp.2021.3090333"},{"key":"e_1_3_1_5_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.3020560"},{"key":"e_1_3_1_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/JBHI.2022.3196631"},{"key":"e_1_3_1_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/TCSS.2021.3120138"},{"key":"e_1_3_1_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNSM.2019.2961560"},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2021.3072221"},{"key":"e_1_3_1_10_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2938560"},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2019.2935203"},{"key":"e_1_3_1_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.3027845"},{"key":"e_1_3_1_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2018.2842190"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/TEVC.2021.3109576"},{"key":"e_1_3_1_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/TWC.2018.2874229"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2019.2946184"},{"key":"e_1_3_1_17_2","doi-asserted-by":"publisher","DOI":"10.1109\/TAFFC.2016.2622690"},{"key":"e_1_3_1_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2012.2217119"},{"key
":"e_1_3_1_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2020.3034540"},{"key":"e_1_3_1_20_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2018.2842190"},{"key":"e_1_3_1_21_2","doi-asserted-by":"publisher","DOI":"10.1109\/ACCESS.2020.2975004"},{"issue":"10","key":"e_1_3_1_22_2","first-page":"51","article-title":"Fusion vectorized representation learning of multi","volume":"48","author":"Nan-xun Ji","year":"2021","unstructured":"Ji Nan-xun, Sun Xiao-yan, and Li Zhen-qi. 2021. Fusion vectorized representation learning of multi-source heterogeneous user-generated contents. Jisuanji Kexue 48, 10 (2021), 51\u201358.","journal-title":"Jisuanji Kexue"},{"key":"e_1_3_1_23_2","doi-asserted-by":"publisher","DOI":"10.1007\/s00530-010-0182-0"},{"key":"e_1_3_1_24_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-95729-6_10"},{"key":"e_1_3_1_25_2","volume-title":"Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP\u201903)","author":"Iyengar G.","year":"2003","unstructured":"G. Iyengar, H. J. Nock, and C. Neti. 2003. Audio-visual synchrony for detection of monologues in video archives. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP\u201903)."},{"key":"e_1_3_1_26_2","doi-asserted-by":"publisher","DOI":"10.4018\/978-1-6684-6303-1.ch098"},{"key":"e_1_3_1_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/INFOP.2015.7489424"},{"key":"e_1_3_1_28_2","doi-asserted-by":"publisher","DOI":"10.3390\/s23052679"},{"key":"e_1_3_1_29_2","doi-asserted-by":"publisher","DOI":"10.3390\/info13040171"},{"key":"e_1_3_1_30_2","doi-asserted-by":"crossref","unstructured":"L. Huang D. Ma S. Li X. Zhang and H. Wang. 2019. Text level graph neural network for text classification. arXiv:1910.02356. Retrieved from https:\/\/arxiv.org\/abs\/1910.02356.","DOI":"10.18653\/v1\/D19-1345"},{"key":"e_1_3_1_31_2","doi-asserted-by":"crossref","unstructured":"Z. Li B. Xu C. Zhu and T. 
Zhao. 2022. CLMLF: A contrastive learning and multi-layer fusion method for multimodal sentiment detection. arXiv:2204.05515. Retrieved from https:\/\/arxiv.org\/abs\/2204.05515.","DOI":"10.18653\/v1\/2022.findings-naacl.175"}],"container-title":["Journal of Data and Information Quality"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3603712","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3603712","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T16:37:21Z","timestamp":1750178241000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3603712"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,28]]},"references-count":30,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2023,9,30]]}},"alternative-id":["10.1145\/3603712"],"URL":"https:\/\/doi.org\/10.1145\/3603712","relation":{},"ISSN":["1936-1955","1936-1963"],"issn-type":[{"type":"print","value":"1936-1955"},{"type":"electronic","value":"1936-1963"}],"subject":[],"published":{"date-parts":[[2023,9,28]]},"assertion":[{"value":"2022-12-10","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-05-19","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-09-28","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}