{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,11]],"date-time":"2025-09-11T22:50:21Z","timestamp":1757631021309,"version":"3.44.0"},"reference-count":9,"publisher":"Association for Computing Machinery (ACM)","issue":"12","content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["Proc. VLDB Endow."],"published-print":{"date-parts":[[2023,8]]},"abstract":"<jats:p>\n            Recent works explore several attacks against Machine-Learning-as-a-Service (MLaaS) platforms (e.g., the model stealing attack), which allegedly pose real-world threats beyond laboratory viability. However, hampered by <jats:italic toggle=\"yes\">model-type sensitivity<\/jats:italic>, most of these attacks can hardly break mainstream real-world MLaaS platforms. That is, many MLaaS attacks are designed against only one specific type of model, such as tree models or neural networks. Because the black-box MLaaS interface hides model type information, the attacker cannot confidently choose a suitable attack method, which limits attack performance. In this paper, we demonstrate a system, named Sniffer, that is capable of making model-type-sensitive attacks \"great again\" in real-world applications. Specifically, Sniffer consists of four components: Generator, Querier, Probe, and Arsenal. The first two components prepare attack samples. Probe, the most distinctive component of Sniffer, implements a series of purpose-built algorithms to determine the type of model hidden behind a black-box MLaaS interface. With the model type revealed, an optimal method can be selected from Arsenal (which contains multiple attack methods) to carry out the attack. Our demonstration shows how the audience can interact with Sniffer in a web-based interface against five mainstream MLaaS platforms.\n          <\/jats:p>","DOI":"10.14778\/3611540.3611591","type":"journal-article","created":{"date-parts":[[2023,9,15]],"date-time":"2023-09-15T11:32:37Z","timestamp":1694777557000},"page":"3942-3945","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":2,"title":["Sniffer: A Novel Model Type Detection System against Machine-Learning-as-a-Service Platforms"],"prefix":"10.14778","volume":"16","author":[{"given":"Zhuo","family":"Ma","sequence":"first","affiliation":[{"name":"Xidian University"}]},{"given":"Yilong","family":"Yang","sequence":"additional","affiliation":[{"name":"Xidian University"}]},{"given":"Bin","family":"Xiao","sequence":"additional","affiliation":[{"name":"Chongqing University of Posts and Telecommunications"}]},{"given":"Yang","family":"Liu","sequence":"additional","affiliation":[{"name":"Xidian University"}]},{"given":"Xinjing","family":"Liu","sequence":"additional","affiliation":[{"name":"Xidian University"}]},{"given":"Zhuoran","family":"Ma","sequence":"additional","affiliation":[{"name":"Xidian University"}]},{"given":"Tong","family":"Yang","sequence":"additional","affiliation":[{"name":"Peking University"}]}],"member":"320","published-online":{"date-parts":[[2023,8]]},"reference":[{"doi-asserted-by":"publisher","key":"e_1_2_1_1_1","DOI":"10.1109\/SP46214.2022.9833649"},{"doi-asserted-by":"publisher","key":"e_1_2_1_2_1","DOI":"10.5555\/3489212.3489286"},{"doi-asserted-by":"publisher","key":"e_1_2_1_3_1","DOI":"10.1145\/3433210.3453090"},{"doi-asserted-by":"publisher","key":"e_1_2_1_4_1","DOI":"10.14778\/3415478.3415487"},{"doi-asserted-by":"publisher","key":"e_1_2_1_5_1","DOI":"10.1145\/3514221.3526141"},{"doi-asserted-by":"publisher","key":"e_1_2_1_6_1","DOI":"10.1109\/CVPR52688.2022.01485"},{"key":"e_1_2_1_7_1","volume-title":"25th USENIX Security Symposium (USENIX Security 16)","author":"Tram\u00e8r Florian","year":"2016","unstructured":"Florian Tram\u00e8r, Fan Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. 2016. Stealing Machine Learning Models via Prediction APIs. In 25th USENIX Security Symposium (USENIX Security 16). USENIX Association, 601--618."},{"unstructured":"Honggang Yu, Kaichen Yang, Teng Zhang, Yun-Yun Tsai, Tsung-Yi Ho, and Yier Jin. 2020. CloudLeak: Large-Scale Deep Learning Models Stealing Through Adversarial Examples. In NDSS.","key":"e_1_2_1_8_1"},{"key":"e_1_2_1_9_1","volume-title":"31st USENIX Security Symposium (USENIX Security 22)","author":"Yuan Xiaoyong","year":"2022","unstructured":"Xiaoyong Yuan and Lan Zhang. 2022. Membership Inference Attacks and Defenses in Neural Network Pruning. In 31st USENIX Security Symposium (USENIX Security 22). USENIX Association, 4561--4578."}],"container-title":["Proceedings of the VLDB Endowment"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.14778\/3611540.3611591","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,9,10]],"date-time":"2025-09-10T22:36:45Z","timestamp":1757543805000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.14778\/3611540.3611591"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8]]},"references-count":9,"journal-issue":{"issue":"12","published-print":{"date-parts":[[2023,8]]}},"alternative-id":["10.14778\/3611540.3611591"],"URL":"https:\/\/doi.org\/10.14778\/3611540.3611591","relation":{},"ISSN":["2150-8097"],"issn-type":[{"type":"print","value":"2150-8097"}],"subject":[],"published":{"date-parts":[[2023,8]]},"assertion":[{"value":"2023-08-01","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}