{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,7]],"date-time":"2026-03-07T18:25:12Z","timestamp":1772907912666,"version":"3.50.1"},"reference-count":44,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2019,9,9]],"date-time":"2019-09-09T00:00:00Z","timestamp":1567987200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korea govern-ment","award":["No.B0717-16-0034"],"award-info":[{"award-number":["No.B0717-16-0034"]}]},{"name":"Ministry of Education of the Republic of Korea and the National Research Foundation of Korea","award":["NRF-2018S1A5A2A03037308"],"award-info":[{"award-number":["NRF-2018S1A5A2A03037308"]}]},{"name":"Next-Generation Information Computing Development Program through theNational Research Foundation of Korea (NRF) funded by the Ministry of Science and ICT","award":["NRF-2017M3C4A7083534"],"award-info":[{"award-number":["NRF-2017M3C4A7083534"]}]},{"name":"Industrial Technology Innovation Program funded by the Ministry of Trade, Industry & Energy","award":["0073154"],"award-info":[{"award-number":["0073154"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. ACM Interact. Mob. Wearable Ubiquitous Technol."],"published-print":{"date-parts":[[2019,9,9]]},"abstract":"<jats:p>While smartphones have enriched our lives with diverse applications and functionalities, the user experience still often involves manual cumbersome inputs. To purchase a bottle of water for instance, a user must locate an e-commerce app, type the keyword for a search, select the right item from the list, and finally place an order. 
This process could be greatly simplified if the smartphone identified the object of interest and automatically executed the user-preferred actions for the object. We present Knocker, which identifies an object when a user simply knocks on it with a smartphone. The basic principle of Knocker is leveraging the unique set of responses generated by the knock. Knocker takes a multimodal sensing approach that utilizes microphones, accelerometers, and gyroscopes to capture the knock responses, and exploits machine learning to accurately identify objects. We also present 15 applications enabled by Knocker that showcase the novel interaction method between users and objects. Knocker uses only the built-in smartphone sensors and thus is fully deployable without specialized hardware or tags on either the objects or the smartphone. Our experiments with 23 objects show that Knocker achieves an accuracy of 98% in a controlled lab and 83% in the wild.<\/jats:p>","DOI":"10.1145\/3351240","type":"journal-article","created":{"date-parts":[[2019,9,10]],"date-time":"2019-09-10T15:58:26Z","timestamp":1568131106000},"page":"1-21","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":40,"title":["Knocker"],"prefix":"10.1145","volume":"3","author":[{"given":"Taesik","family":"Gong","sequence":"first","affiliation":[{"name":"School of Computing, KAIST, Republic of Korea"}]},{"given":"Hyunsung","family":"Cho","sequence":"additional","affiliation":[{"name":"School of Computing, KAIST, Republic of Korea"}]},{"given":"Bowon","family":"Lee","sequence":"additional","affiliation":[{"name":"Department of Electronic Engineering, Inha University, Republic of Korea"}]},{"given":"Sung-Ju","family":"Lee","sequence":"additional","affiliation":[{"name":"School of Computing, KAIST, Republic of 
Korea"}]}],"member":"320","published-online":{"date-parts":[[2019,9,9]]},"reference":[{"key":"e_1_2_2_1_1","volume-title":"Retrieved","year":"2018","unstructured":"Android. 2018 . Audio Latency Measurements . Retrieved September 20, 2018 from https:\/\/source.android.com\/devices\/audio\/latency_measurements. Android. 2018. Audio Latency Measurements. Retrieved September 20, 2018 from https:\/\/source.android.com\/devices\/audio\/latency_measurements."},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/1866029.1866080"},{"key":"e_1_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3026044"},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3161173"},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025991"},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/2858036.2858177"},{"key":"e_1_2_2_7_1","volume-title":"Retrieved","author":"Developers Android","year":"2018","unstructured":"Android Developers . 2018 . Guides for Android audio latency . Retrieved September 20, 2018 from https:\/\/developer.android.com\/ndk\/guides\/audio\/audio-latency.html. Android Developers. 2018. Guides for Android audio latency. Retrieved September 20, 2018 from https:\/\/developer.android.com\/ndk\/guides\/audio\/audio-latency.html."},{"key":"e_1_2_2_8_1","volume-title":"Retrieved","author":"Dunn Jeff","year":"2017","unstructured":"Jeff Dunn . 2017 . It looks like Apple has some work to do if it wants Siri to be as smart as Google Assistant . Retrieved September 20, 2018 from https:\/\/goo.gl\/4spfhy. Jeff Dunn. 2017. It looks like Apple has some work to do if it wants Siri to be as smart as Google Assistant. 
Retrieved September 20, 2018 from https:\/\/goo.gl\/4spfhy."},{"key":"e_1_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/2634317.2634320"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCCN.2017.8038410"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/3170427.3188514"},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3161165"},{"key":"e_1_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/2984511.2984518"},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/1656274.1656278"},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/2047196.2047279"},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/2380116.2380187"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025899"},{"key":"e_1_2_2_18_1","unstructured":"Jacob Kastrenakes. 2017. Burger King's new ad forces Google Home to advertise the Whopper. https:\/\/www.theverge.com\/2017\/4\/12\/15259400\/burger-king-google-home-ad-wikipedia  Jacob Kastrenakes. 2017. Burger King's new ad forces Google Home to advertise the Whopper. 
https:\/\/www.theverge.com\/2017\/4\/12\/15259400\/burger-king-google-home-ad-wikipedia"},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1145\/274644.274718"},{"key":"e_1_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/2702123.2702416"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/2984511.2984582"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/2807442.2807481"},{"key":"e_1_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025773"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/2702123.2702178"},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/2076354.2076364"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2017.03.014"},{"key":"e_1_2_2_27_1","first-page":"2579","article-title":"Visualizing data using t-SNE","author":"van der Maaten Laurens","year":"2008","unstructured":"Laurens van der Maaten and Geoffrey Hinton . 2008 . Visualizing data using t-SNE . Journal of machine learning research 9 , Nov (2008), 2579 -- 2605 . Laurens van der Maaten and Geoffrey Hinton. 2008. Visualizing data using t-SNE. Journal of machine learning research 9, Nov (2008), 2579--2605.","journal-title":"Journal of machine learning research 9"},{"key":"e_1_2_2_28_1","volume-title":"Recognizing the Use of Portable Electrical Devices with Hand-Worn Magnetic Sensors","author":"Maekawa Takuya","unstructured":"Takuya Maekawa , Yasue Kishino , Yasushi Sakurai , and Takayuki Suyama . 2011. Recognizing the Use of Portable Electrical Devices with Hand-Worn Magnetic Sensors . In Pervasive Computing, Kent Lyons, Jeffrey Hightower, and Elaine M. Huang (Eds.). Springer Berlin Heidelberg , Berlin, Heidelberg , 276--293. Takuya Maekawa, Yasue Kishino, Yasushi Sakurai, and Takayuki Suyama. 2011. Recognizing the Use of Portable Electrical Devices with Hand-Worn Magnetic Sensors. In Pervasive Computing, Kent Lyons, Jeffrey Hightower, and Elaine M. 
Huang (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 276--293."},{"key":"e_1_2_2_29_1","volume-title":"2012 IEEE International Conference on Pervasive Computing and Communications Workshops. 510--512","author":"Maekawa T.","unstructured":"T. Maekawa , Y. Kishino , Y. Yanagisawa , and Y. Sakurai . 2012. WristSense: Wrist-worn sensor device with camera for daily activity recognition . In 2012 IEEE International Conference on Pervasive Computing and Communications Workshops. 510--512 . T. Maekawa, Y. Kishino, Y. Yanagisawa, and Y. Sakurai. 2012. WristSense: Wrist-worn sensor device with camera for daily activity recognition. In 2012 IEEE International Conference on Pervasive Computing and Communications Workshops. 510--512."},{"key":"e_1_2_2_30_1","volume-title":"Retrieved","author":"Olmstead Kenneth","year":"2017","unstructured":"Kenneth Olmstead . 2017 . Nearly half of Americans use digital voice assistants, mostly on their smartphones . Retrieved September 20, 2018 from https:\/\/goo.gl\/gRyF4R. Kenneth Olmstead. 2017. Nearly half of Americans use digital voice assistants, mostly on their smartphones. Retrieved September 20, 2018 from https:\/\/goo.gl\/gRyF4R."},{"key":"e_1_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/354666.354667"},{"key":"e_1_2_2_32_1","volume-title":"2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. 1--8.","author":"Ren X.","unstructured":"X. Ren and M. Philipose . 2009. Egocentric recognition of handled objects: Benchmark and analysis . In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. 1--8. X. Ren and M. Philipose. 2009. Egocentric recognition of handled objects: Benchmark and analysis. In 2009 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops. 1--8."},{"key":"e_1_2_2_33_1","volume-title":"Inaudible Voice Commands: The Long-Range Attack and Defense. 
In 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI 18)","author":"Roy Nirupam","year":"2018","unstructured":"Nirupam Roy , Sheng Shen , Haitham Hassanieh , and Romit Roy Choudhury . 2018 . Inaudible Voice Commands: The Long-Range Attack and Defense. In 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI 18) . USENIX Association, 547--560. Nirupam Roy, Sheng Shen, Haitham Hassanieh, and Romit Roy Choudhury. 2018. Inaudible Voice Commands: The Long-Range Attack and Defense. In 15th USENIX Symposium on Networked Systems Design and Implementation (NSDI 18). USENIX Association, 547--560."},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/3229434.3229453"},{"key":"e_1_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/TOH.2016.2625787"},{"key":"e_1_2_2_36_1","volume-title":"Retrieved","author":"Studio Android","year":"2019","unstructured":"Android Studio . 2019 . Profile battery usage with Batterystats and Battery Historian . Retrieved May 10, 2019 from https:\/\/developer.android.com\/studio\/profile\/battery-historian. Android Studio. 2019. Profile battery usage with Batterystats and Battery Historian. Retrieved May 10, 2019 from https:\/\/developer.android.com\/studio\/profile\/battery-historian."},{"key":"e_1_2_2_37_1","volume-title":"Retrieved","author":"van der Velde Naomi","year":"2018","unstructured":"Naomi van der Velde . 2018 . A Complete Speech Recognition Technology Overview . Retrieved September 20, 2018 from https:\/\/www.globalme.net\/blog\/the-present-future-of-speech-recognition. Naomi van der Velde. 2018. A Complete Speech Recognition Technology Overview. 
Retrieved September 20, 2018 from https:\/\/www.globalme.net\/blog\/the-present-future-of-speech-recognition."},{"key":"e_1_2_2_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/2750858.2804271"},{"key":"e_1_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/302979.303111"},{"key":"e_1_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025828"},{"key":"e_1_2_2_41_1","volume-title":"2016 IEEE International Conference on RFID (RFID). 1--8.","author":"Yang C.","unstructured":"C. Yang and A. P. Sample . 2016. EM-ID: Tag-less identification of electrical devices via electromagnetic emissions . In 2016 IEEE International Conference on RFID (RFID). 1--8. C. Yang and A. P. Sample. 2016. EM-ID: Tag-less identification of electrical devices via electromagnetic emissions. In 2016 IEEE International Conference on RFID (RFID). 1--8."},{"key":"e_1_2_2_42_1","volume-title":"CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition. arXiv preprint arXiv:1801.08535","author":"Yuan Xuejing","year":"2018","unstructured":"Xuejing Yuan , Yuxuan Chen , Yue Zhao , Yunhui Long , Xiaokang Liu , Kai Chen , Shengzhi Zhang , Heqing Huang , Xiaofeng Wang , and Carl A Gunter . 2018. CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition. arXiv preprint arXiv:1801.08535 ( 2018 ). Xuejing Yuan, Yuxuan Chen, Yue Zhao, Yunhui Long, Xiaokang Liu, Kai Chen, Shengzhi Zhang, Heqing Huang, Xiaofeng Wang, and Carl A Gunter. 2018. CommanderSong: A Systematic Approach for Practical Adversarial Voice Recognition. 
arXiv preprint arXiv:1801.08535 (2018)."},{"key":"e_1_2_2_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3134052"},{"key":"e_1_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1007\/BF02943243"}],"container-title":["Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3351240","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3351240","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T00:25:50Z","timestamp":1750206350000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3351240"}},"subtitle":["Vibroacoustic-based Object Recognition with Smartphones"],"short-title":[],"issued":{"date-parts":[[2019,9,9]]},"references-count":44,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2019,9,9]]}},"alternative-id":["10.1145\/3351240"],"URL":"https:\/\/doi.org\/10.1145\/3351240","relation":{},"ISSN":["2474-9567"],"issn-type":[{"value":"2474-9567","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,9,9]]},"assertion":[{"value":"2019-09-09","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}