{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,7]],"date-time":"2026-03-07T18:22:08Z","timestamp":1772907728958,"version":"3.50.1"},"reference-count":61,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2019,3,29]],"date-time":"2019-03-29T00:00:00Z","timestamp":1553817600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Key Research Program of Frontier Sciences, CAS","award":["No. QYZDY-SSW- JSC002"],"award-info":[{"award-number":["No. QYZDY-SSW- JSC002"]}]},{"DOI":"10.13039\/501100011002","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["No.61625205, 61872447, 61632010, 61772546, 61751211, 61772488, 61520106007, 61672038, 61602067"],"award-info":[{"award-number":["No.61625205, 61872447, 61632010, 61772546, 61751211, 61772488, 61520106007, 61672038, 61602067"]}],"id":[{"id":"10.13039\/501100011002","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100005230","name":"Natural Science Foundation of Chongqing","doi-asserted-by":"publisher","award":["No.CSTC2018JCYJA1879"],"award-info":[{"award-number":["No.CSTC2018JCYJA1879"]}],"id":[{"id":"10.13039\/501100005230","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100004608","name":"Natural Science Foundation of Jiangsu Province","doi-asserted-by":"publisher","award":["BK20150030"],"award-info":[{"award-number":["BK20150030"]}],"id":[{"id":"10.13039\/501100004608","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Interact. Mob. 
Wearable Ubiquitous Technol."],"published-print":{"date-parts":[[2019,3,29]]},"abstract":"<jats:p>This paper explores the possibility of extending input and interaction beyond the small screen of a mobile device onto ad hoc adjacent surfaces, e.g., a wooden tabletop, using acoustic signals. While existing finger-tracking approaches employ an active acoustic signal at a fixed frequency, our proposed system Ipanel employs the acoustic signals generated by fingers sliding on the table for tracking. Different from active-signal tracking, the frequency of the acoustic signals generated at the finger-table interface keeps changing, making accurate tracking much more challenging than in traditional approaches that use a fixed-frequency signal from the speaker. Unique features are extracted by exploiting the spatio-temporal and frequency-domain properties of the generated acoustic signals. The features are transformed into images, and a convolutional neural network (CNN) is then employed to recognize the finger movement on the table. Ipanel is able to support not only recognition of commonly used gestures (click, flip, scroll, zoom, etc.), but also handwriting recognition (10 digits and 26 letters) at high accuracy. We implement Ipanel on smartphones, and conduct extensive real-environment experiments to evaluate its performance. The results validate the robustness of Ipanel, and show that it maintains high accuracy across different users with varying input behaviours (e.g., input strength, speed, and region). 
Further, Ipanel's performance is robust against different levels of ambient noise and varying surface materials.<\/jats:p>","DOI":"10.1145\/3314390","type":"journal-article","created":{"date-parts":[[2019,4,2]],"date-time":"2019-04-02T11:57:40Z","timestamp":1554206260000},"page":"1-21","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":61,"title":["Your Table Can Be an Input Panel"],"prefix":"10.1145","volume":"3","author":[{"given":"Mingshi","family":"Chen","sequence":"first","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"given":"Panlong","family":"Yang","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"given":"Jie","family":"Xiong","sequence":"additional","affiliation":[{"name":"University of Massachusetts Amherst, Amherst, MA, USA"}]},{"given":"Maotian","family":"Zhang","sequence":"additional","affiliation":[{"name":"University of Science and Technology of China, Hefei, China"}]},{"given":"Youngki","family":"Lee","sequence":"additional","affiliation":[{"name":"Seoul National University, Seoul, South Korea"}]},{"given":"Chaocan","family":"Xiang","sequence":"additional","affiliation":[{"name":"Chongqing University, Chongqing, China"}]},{"given":"Chang","family":"Tian","sequence":"additional","affiliation":[{"name":"Army Engineering University of PLA, Nanjing, China"}]}],"member":"320","published-online":{"date-parts":[[2019,3,29]]},"reference":[{"key":"e_1_2_2_1_1","unstructured":"2015. AN580: Infrared gesture recognition by Silicon Labs. https:\/\/www.silabs.com\/Support%20Documents\/TechnicalDocs\/AN580.pdf"},{"key":"e_1_2_2_2_1","unstructured":"2016. Google Project Soli. https:\/\/atap.google.com\/soli\/"},{"key":"e_1_2_2_3_1","unstructured":"2017. Kinect for Xbox One. https:\/\/www.xbox.com\/en-US\/xbox-one\/accessories\/kinect-for-xbox-one"},{"key":"e_1_2_2_4_1","unstructured":"2017. Leap Motion. https:\/\/www.leapmotion.com\/"},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10032-010-0117-5"},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/2486001.2486039"},{"key":"e_1_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/1999995.1999998"},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1121\/1.1456514"},{"key":"e_1_2_2_9_1","volume-title":"Word Spotting and Recognition with Embedded Attributes","author":"Almaz\u00e1n J","unstructured":"J Almaz\u00e1n, A Gordo, A Forn\u00e9s, and E Valveny. 2014. Word Spotting and Recognition with Embedded Attributes. IEEE Transactions on Pattern Analysis and Machine Intelligence. 2552--2566."},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10032-013-0201-8"},{"key":"e_1_2_2_11_1","unstructured":"Anne Laure Biannebernard. 2012. The A2iA French handwriting recognition system at the Rimes-ICDAR2011 competition. In Document Recognition and Retrieval XIX. 51."},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-11397-5_15"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/2790044.2790052"},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/2501988.2502016"},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/2501988.2502035"},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/2858036.2858125"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/2809695.2809711"},{"key":"e_1_2_2_18_1","volume-title":"Fast and Robust Training of Recurrent Neural Networks for Offline Handwriting Recognition. In International Conference on Frontiers in Handwriting Recognition. 279--284","author":"Doetsch Patrick","year":"2014","unstructured":"Patrick Doetsch, Michal Kozielski, and Hermann Ney. 2014. Fast and Robust Training of Recurrent Neural Networks for Offline Handwriting Recognition. In International Conference on Frontiers in Handwriting Recognition. 279--284."},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/ASRU.1997.659110"},{"key":"e_1_2_2_20_1","doi-asserted-by":"crossref","unstructured":"Luo Gan, Chen Mingshi, Yang Panlong, and Li Ping. 2017. SoundWrite II: Ambient Acoustic Sensing for Noise Tolerant Device-Free Gesture Recognition. In ICPADS.","DOI":"10.1109\/ICPADS.2017.00027"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/2556288.2557120"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/2594368.2594389"},{"key":"e_1_2_2_23_1","volume-title":"Infocom","author":"Haishi Du","year":"2017","unstructured":"Du Haishi, Yang Panlong, Luo Gan, and Li Ping. 2017. WordRecorder: Accurate Acoustic-based Handwriting Recognition Using Deep Learning. In Infocom."},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/1449715.1449747"},{"key":"e_1_2_2_25_1","volume-title":"Deep Residual Learning for Image Recognition. In IEEE Conference on Computer Vision and Pattern Recognition. 770--778","author":"He Kaiming","year":"2016","unstructured":"Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep Residual Learning for Image Recognition. In IEEE Conference on Computer Vision and Pattern Recognition. 770--778."},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/INFOCOM.2014.6847959"},{"key":"e_1_2_2_27_1","unstructured":"CEI IEC. 1985. Integrating-averaging sound level meters. (1985)."},{"key":"e_1_2_2_28_1","volume-title":"Usenix NSDI","volume":"14","author":"Kellogg Bryce","year":"2014","unstructured":"Bryce Kellogg, Vamsi Talla, and Shyamnath Gollakota. 2014. Bringing gesture recognition to all devices. In Usenix NSDI, Vol. 14."},{"key":"e_1_2_2_29_1","volume-title":"International Conference on Neural Information Processing Systems. 1097--1105","author":"Krizhevsky Alex","unstructured":"Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In International Conference on Neural Information Processing Systems. 1097--1105."},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/2426656.2426667"},{"key":"e_1_2_2_31_1","volume-title":"Proceedings of the IEEE. 2278--2324","author":"Lecun Y.","unstructured":"Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner. 1998. Gradient-based learning applied to document recognition. In Proceedings of the IEEE. 2278--2324."},{"key":"e_1_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/2462456.2465426"},{"key":"e_1_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/2789168.2790110"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/2789168.2790122"},{"key":"e_1_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/2076354.2076364"},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1145\/2973750.2973755"},{"key":"e_1_2_2_37_1","article-title":"The Limitation of Pre-processing Techniques to Enhance the Face Recognition System Based on LBP","volume":"58","author":"Muhammad Raafat Salih","year":"2017","unstructured":"Raafat Salih Muhammad and Mohammed Issam Younis. 2017. The Limitation of Pre-processing Techniques to Enhance the Face Recognition System Based on LBP. Iraqi Journal of Science 58, 581B (2017), 355--363.","journal-title":"Iraqi Journal of Science"},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/2858036.2858580"},{"key":"e_1_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/2426656.2426662"},{"key":"e_1_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/2742647.2742665"},{"key":"e_1_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICSENS.2002.1037150"},{"key":"e_1_2_2_42_1","doi-asserted-by":"publisher","DOI":"10.1145\/2070942.2070969"},{"key":"e_1_2_2_43_1","doi-asserted-by":"crossref","unstructured":"Arik Poznanski and Lior Wolf. 2016. CNN-N-Gram for Handwriting Word Recognition. In Computer Vision and Pattern Recognition. 2305--2314.","DOI":"10.1109\/CVPR.2016.253"},{"key":"e_1_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/2500423.2500436"},{"key":"e_1_2_2_45_1","volume-title":"Computer Vision - ECCV'94","author":"Rehg James M","unstructured":"James M Rehg and Takeo Kanade. 1994. Visual tracking of high DOF articulated structures: an application to human hand tracking. In Computer Vision - ECCV'94. Springer, 35--46."},{"key":"e_1_2_2_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/2971648.2971736"},{"key":"e_1_2_2_47_1","doi-asserted-by":"publisher","DOI":"10.1145\/2820398"},{"key":"e_1_2_2_48_1","unstructured":"Karen Simonyan and Andrew Zisserman. 2014. Very Deep Convolutional Networks for Large-Scale Image Recognition. In Computer Science."},{"key":"e_1_2_2_49_1","doi-asserted-by":"publisher","DOI":"10.1145\/2462456.2464437"},{"key":"e_1_2_2_50_1","doi-asserted-by":"publisher","DOI":"10.1145\/2030112.2030169"},{"key":"e_1_2_2_51_1","doi-asserted-by":"publisher","DOI":"10.1145\/2594368.2594380"},{"key":"e_1_2_2_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/2789168.2790102"},{"key":"e_1_2_2_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/2619239.2626330"},{"key":"e_1_2_2_54_1","doi-asserted-by":"publisher","DOI":"10.1145\/2594368.2594384"},{"key":"e_1_2_2_55_1","doi-asserted-by":"publisher","DOI":"10.1145\/2973750.2973764"},{"key":"e_1_2_2_56_1","doi-asserted-by":"publisher","DOI":"10.1145\/2628363.2628383"},{"key":"e_1_2_2_57_1","unstructured":"Jie Xiong and Kyle Jamieson. 2013. ArrayTrack: a fine-grained indoor location system. In USENIX NSDI."},{"key":"e_1_2_2_58_1","doi-asserted-by":"publisher","DOI":"10.1145\/2742647.2742662"},{"key":"e_1_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1145\/2954003"},{"key":"e_1_2_2_60_1","doi-asserted-by":"publisher","DOI":"10.1145\/2307636.2307638"},{"key":"e_1_2_2_61_1","doi-asserted-by":"publisher","DOI":"10.1145\/2660267.2660296"}],"container-title":["Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3314390","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3314390","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T23:53:29Z","timestamp":1750204409000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3314390"}},"subtitle":["Acoustic-based Device-Free Interaction 
Recognition"],"short-title":[],"issued":{"date-parts":[[2019,3,29]]},"references-count":61,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2019,3,29]]}},"alternative-id":["10.1145\/3314390"],"URL":"https:\/\/doi.org\/10.1145\/3314390","relation":{},"ISSN":["2474-9567"],"issn-type":[{"value":"2474-9567","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,3,29]]},"assertion":[{"value":"2018-05-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2019-01-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2019-03-29","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}