{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,21]],"date-time":"2026-01-21T22:15:35Z","timestamp":1769033735030,"version":"3.49.0"},"reference-count":44,"publisher":"Association for Computing Machinery (ACM)","issue":"1","funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["No. 62232004"],"award-info":[{"award-number":["No. 62232004"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Jiangsu Provincial Frontier Technology Research and Development Program","award":["BF2024070"],"award-info":[{"award-number":["BF2024070"]}]},{"name":"Shenzhen Science and Technology Program","award":["KJZD20240903100814018"],"award-info":[{"award-number":["KJZD20240903100814018"]}]},{"name":"Jiangsu Provincial Key Laboratory of Network and Information Security","award":["No. BM2003201"],"award-info":[{"award-number":["No. BM2003201"]}]},{"name":"Key Laboratory of Computer Network and Information Integration of Ministry of Education of China","award":["No. 93K-9"],"award-info":[{"award-number":["No. 93K-9"]}]},{"DOI":"10.13039\/501100002858","name":"China Postdoctoral Science Foundation","doi-asserted-by":"crossref","award":["2025M781474"],"award-info":[{"award-number":["2025M781474"]}],"id":[{"id":"10.13039\/501100002858","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Collaborative Innovation Center of Novel Software Technology and Industrialization"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Sen. Netw."],"published-print":{"date-parts":[[2026,1,31]]},"abstract":"<jats:p>\n                    Multi-view inference (MVI), which accepts images from multiple viewpoints as input of deep neural networks, is proposed to improve the inference accuracy of conventional single-view models. However, existing mechanisms face challenges in feature fusion and computation efficiency: (1) features from inter-view and intra-view contribute differently to inference, and uniform feature fusion limits MVI accuracy; (2) the sophisticated process and tremendous computational workload of MVI cause a considerable increase in inference latency. This article addresses the above challenges and enables high-accuracy and low-latency MVI for edge intelligence by proposing an end-to-edge synergistic multi-view inference (SMVI) framework. SMVI integrates the\n                    <jats:underline>f<\/jats:underline>\n                    eature f\n                    <jats:underline>u<\/jats:underline>\n                    sion module based on pairwise\n                    <jats:underline>m<\/jats:underline>\n                    utual-\n                    <jats:underline>a<\/jats:underline>\n                    ttention (FUMA), which incorporates the differences between features, enhancing MVI accuracy. To optimize the computation of FUMA-based SMVI, we present a joint optimization algorithm of\n                    <jats:underline>r<\/jats:underline>\n                    esource\n                    <jats:underline>a<\/jats:underline>\n                    llocation and\n                    <jats:underline>m<\/jats:underline>\n                    odel\n                    <jats:underline>p<\/jats:underline>\n                    artition (RAMP) to reduce MVI latency, considering device heterogeneity, dynamic network connection, and resource limitation in heterogeneous edge environments. 
We developed an SMVI prototype system with heterogeneous embedded GPUs and evaluated its performance in real-world MVI scenarios. Extensive experiments demonstrate that the proposed mechanism achieves a notable MVI accuracy improvement of approximately 4% and accelerates the process by 4.08 \u00d7 compared to state-of-the-art approaches.\n                  <\/jats:p>","DOI":"10.1145\/3785358","type":"journal-article","created":{"date-parts":[[2025,12,17]],"date-time":"2025-12-17T11:29:38Z","timestamp":1765970978000},"page":"1-31","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":0,"title":["Enabling Efficient Synergistic Multi-view Inference Across Heterogeneous Edge Devices"],"prefix":"10.1145","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6770-326X","authenticated-orcid":false,"given":"Fang","family":"Dong","sequence":"first","affiliation":[{"name":"School of Computer Science and Engineering, Southeast University","place":["Nanjing, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8654-2940","authenticated-orcid":false,"given":"Runze","family":"Chen","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering, Southeast University","place":["Nanjing, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7566-9460","authenticated-orcid":false,"given":"Shucun","family":"Fu","sequence":"additional","affiliation":[{"name":"School of Software, Nanjing University of Information Science and Technology","place":["Nanjing, China"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0007-5266-6022","authenticated-orcid":false,"given":"Wangbing","family":"Cheng","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering, Southeast University","place":["Nanjing, China"]}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-9681-6482","authenticated-orcid":false,"given":"Ruiting","family":"Zhou","sequence":"additional","affiliation":[{"name":"School of Computer Science and Engineering, Southeast University","place":["Nanjing, China"]}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-6763-0437","authenticated-orcid":false,"given":"Xu","family":"Zhang","sequence":"additional","affiliation":[{"name":"The Science and Technology Information Department, Jiangsu Coastal Development Group Co., Ltd","place":["Nanjing, 
China"]}]}],"member":"320","published-online":{"date-parts":[[2026,1,20]]},"reference":[{"key":"e_1_3_4_2_2","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2019.2921977"},{"key":"e_1_3_4_3_2","doi-asserted-by":"publisher","DOI":"10.1109\/RCAR47638.2019.9044007"},{"key":"e_1_3_4_4_2","doi-asserted-by":"publisher","DOI":"10.1145\/3373376.3378473"},{"key":"e_1_3_4_5_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2022.3172402"},{"key":"e_1_3_4_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00035"},{"key":"e_1_3_4_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2025.3540566"},{"key":"e_1_3_4_8_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSC.2023.3342435"},{"key":"e_1_3_4_9_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2019.2904460"},{"key":"e_1_3_4_10_2","doi-asserted-by":"publisher","DOI":"10.1145\/3673038.3673133"},{"key":"e_1_3_4_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_3_4_12_2","doi-asserted-by":"publisher","DOI":"10.1145\/3474085.3475310"},{"key":"e_1_3_4_13_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-58571-6_1"},{"key":"e_1_3_4_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM.2019.8737614"},{"key":"e_1_3_4_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00745"},{"key":"e_1_3_4_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMC.2020.3043051"},{"key":"e_1_3_4_17_2","doi-asserted-by":"publisher","DOI":"10.1145\/3093337.3037698"},{"key":"e_1_3_4_18_2","doi-asserted-by":"publisher","DOI":"10.1109\/TMECH.2020.3048433"},{"key":"e_1_3_4_19_2","doi-asserted-by":"publisher","DOI":"10.1145\/3065386"},{"key":"e_1_3_4_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/3372224.3419194"},{"key":"e_1_3_4_21_2","doi-asserted-by":"publisher","DOI":"10.1038\/nature14539"},{"key":"e_1_3_4_22_2","doi-asserted-by":"publisher","DOI":"10.1109\/TWC.2019.2946140"},{"key":"e_1_3_4_23_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.future.2022.10.033"},{"key":"e_1_3_4_24_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2021.108104"},{"key":"e_1_3_4_25_2","doi-asserted-by":"publisher","DOI":"10.1006\/game.1996.0044"},{"key":"e_1_3_4_26_2","doi-asserted-by":"publisher","DOI":"10.1145\/3636534.3690661"},{"key":"e_1_3_4_27_2","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM55648.2025.11044457"},{"key":"e_1_3_4_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2018.00474"},{"key":"e_1_3_4_29_2","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0245230"},{"key":"e_1_3_4_30_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-67537-0_15"},{"key":"e_1_3_4_31_2","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2016.2579198"},{"key":"e_1_3_4_32_2","unstructured":"Emerson Sie Bill Tao Aganze Mihigo Parithimaal Karmehan Max Zhang Arun N. Sivakumar Girish Chowdhary and Deepak Vasisht. 2025. BYON: Bring Your Own Networks for Digital Agriculture Applications. arxiv:2502.01478. Retrieved from https:\/\/arxiv.org\/abs\/2502.01478"},{"key":"e_1_3_4_33_2","unstructured":"Karen Simonyan and Andrew Zisserman. 2014. Very deep convolutional networks for large-scale image recognition. arXiv:1409.1556. 
Retrieved from https:\/\/arxiv.org\/abs\/1409.1556"},{"key":"e_1_3_4_34_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2015.114"},{"key":"e_1_3_4_35_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"e_1_3_4_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2020.3010258"},{"key":"e_1_3_4_37_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICDCS.2017.226"},{"key":"e_1_3_4_38_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jare.2021.03.015"},{"key":"e_1_3_4_39_2","doi-asserted-by":"publisher","DOI":"10.1109\/TIM.2021.3127648"},{"key":"e_1_3_4_40_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2013.109"},{"key":"e_1_3_4_41_2","first-page":"1912","volume-title":"Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition","author":"Wu Zhirong","year":"2015","unstructured":"Zhirong Wu, Shuran Song, Aditya Khosla, Fisher Yu, Linguang Zhang, Xiaoou Tang, and Jianxiong Xiao. 2015. 3d shapenets: A deep representation for volumetric shapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 1912\u20131920."},{"key":"e_1_3_4_42_2","first-page":"3202","volume-title":"Proceedings of the 29th International Conference on International Joint Conferences on Artificial Intelligence","author":"Xu Jinglin","year":"2021","unstructured":"Jinglin Xu, Xiangsen Zhang, Wenbin Li, Xinwang Liu, and Junwei Han. 2021. Joint multi-view 2D convolutional neural networks for 3D object classification. In Proceedings of the 29th International Conference on International Joint Conferences on Artificial Intelligence. 3202\u20133208."},{"key":"e_1_3_4_43_2","doi-asserted-by":"publisher","DOI":"10.1109\/COMST.2016.2610963"},{"key":"e_1_3_4_44_2","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2019.2925910"},{"key":"e_1_3_4_45_2","doi-asserted-by":"publisher","DOI":"10.1109\/JPROC.2019.2918951"}],"container-title":["ACM Transactions on Sensor Networks"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3785358","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,1,21]],"date-time":"2026-01-21T07:20:29Z","timestamp":1768980029000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3785358"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,1,20]]},"references-count":44,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2026,1,31]]}},"alternative-id":["10.1145\/3785358"],"URL":"https:\/\/doi.org\/10.1145\/3785358","relation":{},"ISSN":["1550-4859","1550-4867"],"issn-type":[{"value":"1550-4859","type":"print"},{"value":"1550-4867","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,1,20]]},"assertion":[{"value":"2025-01-22","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-12-07","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2026-01-20","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}
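
Note: the record above has the shape of a response from the public Crossref REST API (GET https://api.crossref.org/works/{DOI}). As a minimal sketch, assuming that endpoint is how this record was obtained (field names are taken from the record itself, and any of them may be absent on other works), the following Python re-fetches the same work and prints a few of the fields shown above:

import json
import urllib.request

DOI = "10.1145/3785358"  # DOI taken from the record above
url = "https://api.crossref.org/works/" + DOI

# Crossref wraps the work record as {"status": ..., "message": {...}},
# exactly as in the text above; the work itself lives under "message".
with urllib.request.urlopen(url) as resp:
    work = json.load(resp)["message"]

print(work["title"][0])
# Assumes each author entry carries "given"/"family", as all six do here.
print(", ".join(a["given"] + " " + a["family"] for a in work.get("author", [])))
print(work.get("abstract", "(no abstract deposited)")[:200])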