{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,13]],"date-time":"2026-03-13T11:46:27Z","timestamp":1773402387035,"version":"3.50.1"},"reference-count":44,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2023,8,4]],"date-time":"2023-08-04T00:00:00Z","timestamp":1691107200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,8,4]],"date-time":"2023-08-04T00:00:00Z","timestamp":1691107200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"Joint Funds of the National Natural Science Foundation of China","doi-asserted-by":"crossref","award":["Grant No. U22A2036"],"award-info":[{"award-number":["Grant No. U22A2036"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100012226","name":"Fundamental Research Funds for the Central Universities","doi-asserted-by":"publisher","award":["HIT.OCEF.2021007"],"award-info":[{"award-number":["HIT.OCEF.2021007"]}],"id":[{"id":"10.13039\/501100012226","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100012166","name":"National Key Research and Development Program of China","doi-asserted-by":"publisher","award":["2020YFB1406902"],"award-info":[{"award-number":["2020YFB1406902"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Key-Area Research and Development Program of Guangdong Province","award":["2020B0101360001"],"award-info":[{"award-number":["2020B0101360001"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Cybersecurity"],"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Intrusion detection systems are increasingly using machine learning. 
While machine learning has shown excellent performance in identifying malicious traffic, it may increase the risk of privacy leakage. This paper focuses on implementing a model stealing attack against intrusion detection systems. Existing model stealing attacks are hard to mount in practical network environments, as they require either private data from the victim's dataset or frequent access to the victim model. In this paper, we propose a novel solution called Fast Model Stealing Attack (FMSA) to address these limitations, and we highlight the risks of using ML-NIDS in network security. First, a meta-learning framework is introduced into the model stealing algorithm to clone the victim model in a black-box setting. Then, the number of queries to the target model is used as an optimization term, so that model stealing succeeds with minimal queries. Finally, adversarial training is used to simulate the data distribution of the target model and recover its private data. In experiments on multiple public datasets, compared with existing state-of-the-art algorithms, FMSA reduces the number of queries to the target model while raising the clone model's accuracy on the test dataset to 88.9% and its similarity to the target model to 90.1%. 
We can demonstrate the successful execution of model stealing attacks on the ML-NIDS system even with protective measures in place to limit the number of anomalous queries.<\/jats:p>","DOI":"10.1186\/s42400-023-00171-y","type":"journal-article","created":{"date-parts":[[2023,8,4]],"date-time":"2023-08-04T02:01:34Z","timestamp":1691114494000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":6,"title":["FMSA: a meta-learning framework-based fast model stealing attack technique against intelligent network intrusion detection systems"],"prefix":"10.1186","volume":"6","author":[{"given":"Kaisheng","family":"Fan","sequence":"first","affiliation":[]},{"given":"Weizhe","family":"Zhang","sequence":"additional","affiliation":[]},{"given":"Guangrui","family":"Liu","sequence":"additional","affiliation":[]},{"given":"Hui","family":"He","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,8,4]]},"reference":[{"issue":"6","key":"171_CR1","doi-asserted-by":"publisher","first-page":"4403","DOI":"10.1007\/s10462-021-10125-w","volume":"55","author":"A Aldahdooh","year":"2022","unstructured":"Aldahdooh A, Hamidouche W, Fezza SA, D\u00e9forges O (2022) Adversarial example detection for dnn models: a review and experimental comparison. Artif Intell Rev 55(6):4403\u20134462","journal-title":"Artif Intell Rev"},{"key":"171_CR2","doi-asserted-by":"crossref","unstructured":"Bao J, Chen D, Wen F, Li H, Hua G (2017) Cvae-gan: fine-grained image generation through asymmetric training. In: Proceedings of the IEEE international conference on computer vision, pp 2745\u20132754","DOI":"10.1109\/ICCV.2017.299"},{"key":"171_CR3","first-page":"1877","volume":"33","author":"T Brown","year":"2020","unstructured":"Brown T, Mann B, Ryder N, Subbiah M, Kaplan JD, Dhariwal P, Neelakantan A, Shyam P, Sastry G, Askell A et al (2020) Language models are few-shot learners. 
Adv Neural Inf Process Syst 33:1877\u20131901","journal-title":"Adv Neural Inf Process Syst"},{"key":"171_CR4","doi-asserted-by":"crossref","unstructured":"Chen J, Wang J, Peng T, Sun Y, Cheng P, Ji S, Ma X, Li B, Song D (2022a) Copy, right? A testing framework for copyright protection of deep learning models. In: 2022 IEEE symposium on security and privacy (SP), pp 824\u2013841","DOI":"10.1109\/SP46214.2022.9833747"},{"key":"171_CR5","doi-asserted-by":"crossref","unstructured":"Chen Y, Yang X-H, Wei Z, Heidari AA, Zheng N, Li Z, Chen H, Hu H, Zhou Q, Guan Q (2022b) Generative adversarial networks in medical image augmentation: a review. Comput Biol Med 144:105382","DOI":"10.1016\/j.compbiomed.2022.105382"},{"key":"171_CR6","unstructured":"Finn C, Abbeel P, Levine S (2017) Model-agnostic meta-learning for fast adaptation of deep networks. In: International conference on machine learning, pp 1126\u20131135"},{"issue":"11","key":"171_CR7","doi-asserted-by":"publisher","first-page":"139","DOI":"10.1145\/3422622","volume":"63","author":"I Goodfellow","year":"2020","unstructured":"Goodfellow I, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, Courville A, Bengio Y (2020) Generative adversarial networks. Commun ACM 63(11):139\u2013144","journal-title":"Commun ACM"},{"issue":"5","key":"171_CR8","doi-asserted-by":"publisher","first-page":"81","DOI":"10.15514\/ISPRAS-2020-32(5)-6","volume":"32","author":"MN Goryunov","year":"2020","unstructured":"Goryunov MN, Matskevich AG, Rybolovlev DA (2020) Synthesis of a machine learning model for detecting computer attacks based on the cicids2017 dataset. Proc Inst Syst Program RAS 32(5):81\u201394","journal-title":"Proc Inst Syst Program RAS"},{"key":"171_CR9","unstructured":"Gulrajani I, Ahmed F, Arjovsky M, Dumoulin V, Courville AC (2017) Improved training of Wasserstein Gans. 
Adv Neural Inf Process Syst 30:5767\u20135777"},{"key":"171_CR10","unstructured":"Heusel M, Ramsauer H, Unterthiner T, Nessler B, Hochreiter S (2017) Gans trained by a two time-scale update rule converge to a local nash equilibrium. Adv Neural Inf Processing Syst 30:6629\u20136640"},{"issue":"11s","key":"171_CR11","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3523273","volume":"54","author":"H Hu","year":"2022","unstructured":"Hu H, Salcic Z, Sun L, Dobbie G, Yu PS, Zhang X (2022) Membership inference attacks on machine learning: a survey. ACM Comput Surv (CSUR) 54(11s):1\u201337","journal-title":"ACM Comput Surv (CSUR)"},{"key":"171_CR12","doi-asserted-by":"crossref","unstructured":"Juuti M, Szyller S, Marchal S, Asokan N (2019) Prada: protecting against DNN model stealing attacks. In: 2019 IEEE European symposium on security and privacy (EuroS &P), pp 512\u2013527","DOI":"10.1109\/EuroSP.2019.00044"},{"key":"171_CR13","doi-asserted-by":"crossref","unstructured":"Kariyappa S, Prakash A, Qureshi MK (2021) Maze: data-free model stealing attack using zeroth-order gradient estimation. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 13814\u201313823","DOI":"10.1109\/CVPR46437.2021.01360"},{"key":"171_CR14","doi-asserted-by":"crossref","unstructured":"Kesarwani M, Mukhoty B, Arya V, Mehta S (2018) Model extraction warning in mlaas paradigm. In: Proceedings of the 34th annual computer security applications conference, pp 371\u2013380","DOI":"10.1145\/3274694.3274740"},{"issue":"6","key":"171_CR15","doi-asserted-by":"publisher","first-page":"4909","DOI":"10.1109\/TITS.2021.3054625","volume":"23","author":"BR Kiran","year":"2021","unstructured":"Kiran BR, Sobh I, Talpaert V, Mannion P, Al Sallab AA, Yogamani S, P\u00e9rez P (2021) Deep reinforcement learning for autonomous driving: a survey. 
IEEE Trans Intell Transp Syst 23(6):4909\u20134926","journal-title":"IEEE Trans Intell Transp Syst"},{"issue":"11","key":"171_CR16","doi-asserted-by":"publisher","first-page":"2278","DOI":"10.1109\/5.726791","volume":"86","author":"Y LeCun","year":"1998","unstructured":"LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278\u20132324","journal-title":"Proc IEEE"},{"issue":"2","key":"171_CR17","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3436755","volume":"54","author":"B Liu","year":"2021","unstructured":"Liu B, Ding M, Shaham S, Rahayu W, Farokhi F, Lin Z (2021) When machine learning meets privacy: a survey and outlook. ACM Comput Surv 54(2):1\u201336","journal-title":"ACM Comput Surv"},{"key":"171_CR18","unstructured":"Madry A, Makelov A, Schmidt L, Tsipras D, Vladu A (2017) Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083"},{"key":"171_CR19","doi-asserted-by":"crossref","unstructured":"Mahmood K, Mahmood R, Van Dijk M (2021) On the robustness of vision transformers to adversarial examples. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 7838\u20137847","DOI":"10.1109\/ICCV48922.2021.00774"},{"key":"171_CR20","doi-asserted-by":"crossref","unstructured":"Oh SJ, Schiele B, Fritz M (2019) Towards reverse-engineering black-box neural networks. Explain AI Interpret Explain Vis Deep Learn, 121\u2013144","DOI":"10.1007\/978-3-030-28954-6_7"},{"key":"171_CR21","doi-asserted-by":"crossref","unstructured":"Orekondy T, Schiele B, Fritz M (2019) Knockoff nets: Stealing functionality of black-box models. 
In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 4954\u20134963","DOI":"10.1109\/CVPR.2019.00509"},{"issue":"3.24","key":"171_CR22","first-page":"479","volume":"7","author":"R Panigrahi","year":"2018","unstructured":"Panigrahi R, Borah S (2018) A detailed analysis of cicids2017 dataset for designing intrusion detection systems. Int J Eng Technol 7(3.24):479\u2013482","journal-title":"Int J Eng Technol"},{"key":"171_CR23","doi-asserted-by":"crossref","unstructured":"Papernot N, McDaniel P, Goodfellow I, Jha S, Celik ZB, Swami A (2017) Practical black-box attacks against machine learning. In: Proceedings of the 2017 ACM on Asia conference on computer and communications security, pp 506\u2013519","DOI":"10.1145\/3052973.3053009"},{"key":"171_CR24","doi-asserted-by":"crossref","unstructured":"Rakin AS, Chowdhuryy MHI, Yao F, Fan D (2022) Deepsteal: Advanced model extractions leveraging efficient weight stealing in memories. In: 2022 IEEE symposium on security and privacy (SP), pp 1157\u20131174","DOI":"10.1109\/SP46214.2022.9833743"},{"key":"171_CR25","unstructured":"Roberts N, Prabhu VU, McAteer M (2019) Model weight theft with just noise inputs: the curious case of the petulant attacker. arXiv preprint arXiv:1912.08987"},{"key":"171_CR26","doi-asserted-by":"crossref","unstructured":"Rong C, Gou G, Hou C, Li Z, Xiong G, Guo L (2021) Umvd-fsl: unseen malware variants detection using few-shot learning. In: 2021 international joint conference on neural networks (IJCNN), pp 1\u20138","DOI":"10.1109\/IJCNN52387.2021.9533759"},{"key":"171_CR27","doi-asserted-by":"crossref","unstructured":"R\u00fcping S, Schulz E, Sicking J, Wirtz T, Akila M, Gannamaneni S, Mock M, Poretschkin M, Rosenzweig J, Abrecht S et al (2022) Inspect, understand, overcome: a survey of practical methods for AI safety. 
Deep Neural Networks and Data for Automated Driving: Robustness, Uncertainty Quantification, and Insights Towards Safety 3","DOI":"10.1007\/978-3-031-01233-4_1"},{"key":"171_CR28","unstructured":"Salimans T, Goodfellow I, Zaremba W, Cheung V, Radford A, Chen X (2016) Improved techniques for training gans. Adv Neural Inf Processing Syst 29:2234\u20132242"},{"key":"171_CR29","doi-asserted-by":"crossref","unstructured":"Sanyal S, Addepalli S, Babu RV (2022) Towards data-free model stealing in a hard label setting. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 15284\u201315293","DOI":"10.1109\/CVPR52688.2022.01485"},{"key":"171_CR30","unstructured":"Snell J, Swersky K, Zemel R (2017) Prototypical networks for few-shot learning. Adv Neural Inf Process Syst 30"},{"key":"171_CR31","unstructured":"Stratosphere: stratosphere laboratory datasets (2015). https:\/\/www.stratosphereips.org\/datasets-overview Accessed 13 Mar 2020"},{"key":"171_CR32","doi-asserted-by":"crossref","unstructured":"Sun Q, Liu Y, Chua T-S, Schiele B (2019) Meta-transfer learning for few-shot learning. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 403\u2013412","DOI":"10.1109\/CVPR.2019.00049"},{"key":"171_CR33","doi-asserted-by":"crossref","unstructured":"Thanh-Tung H, Tran T (2020) Catastrophic forgetting and mode collapse in gans. In: 2020 international joint conference on neural networks (ijcnn), pp 1\u201310","DOI":"10.1109\/IJCNN48605.2020.9207181"},{"key":"171_CR34","doi-asserted-by":"publisher","first-page":"203","DOI":"10.1016\/j.neucom.2022.04.078","volume":"494","author":"Y Tian","year":"2022","unstructured":"Tian Y, Zhao X, Huang W (2022) Meta-learning approaches for learning-to-learn in deep learning: a survey. 
Neurocomputing 494:203\u2013223","journal-title":"Neurocomputing"},{"key":"171_CR35","doi-asserted-by":"crossref","unstructured":"Touvron H, Cord M, Sablayrolles A, Synnaeve G, J\u00e9gou H (2021) Going deeper with image transformers. In: Proceedings of the IEEE\/CVF international conference on computer vision, pp 32\u201342","DOI":"10.1109\/ICCV48922.2021.00010"},{"key":"171_CR36","unstructured":"Tram\u00e8r F, Zhang F, Juels A, Reiter MK, Ristenpart T (2016) Stealing machine learning models via prediction APIS. In: USENIX security symposium vol 16, pp 601\u2013618"},{"key":"171_CR37","doi-asserted-by":"crossref","unstructured":"Truong L, Jones C, Hutchinson B, August A, Praggastis B, Jasper R, Nichols N, Tuor A (2020) Systematic evaluation of backdoor data poisoning attacks on image classifiers. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition workshops, pp 788\u2013789","DOI":"10.1109\/CVPRW50498.2020.00402"},{"key":"171_CR38","unstructured":"Vanschoren J (2018) Meta-learning: a survey. arXiv preprint arXiv:1810.03548"},{"issue":"3","key":"171_CR39","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3386252","volume":"53","author":"Y Wang","year":"2020","unstructured":"Wang Y, Yao Q, Kwok JT, Ni LM (2020) Generalizing from a few examples: a survey on few-shot learning. ACM Comput Surv 53(3):1\u201334","journal-title":"ACM Comput Surv"},{"key":"171_CR40","doi-asserted-by":"crossref","unstructured":"Wang B, Gong NZ (2018) Stealing hyperparameters in machine learning. In: 2018 IEEE symposium on security and privacy (SP), pp 36\u201352","DOI":"10.1109\/SP.2018.00038"},{"key":"171_CR41","doi-asserted-by":"crossref","unstructured":"Wang B, Yao Y, Shan S, Li H, Viswanath B, Zheng H, Zhao BY (2019) Neural cleanse: identifying and mitigating backdoor attacks in neural networks. 
In: 2019 IEEE symposium on security and privacy (SP), pp 707\u2013723","DOI":"10.1109\/SP.2019.00031"},{"key":"171_CR42","doi-asserted-by":"crossref","unstructured":"Wang W, Zhu M, Zeng X, Ye X, Sheng Y (2017) Malware traffic classification using convolutional neural network for representation learning. In: 2017 international conference on information networking (ICOIN), pp 712\u2013717","DOI":"10.1109\/ICOIN.2017.7899588"},{"key":"171_CR43","doi-asserted-by":"publisher","first-page":"83286","DOI":"10.1109\/ACCESS.2019.2922692","volume":"7","author":"J Yang","year":"2019","unstructured":"Yang J, Li T, Liang G, He W, Zhao Y (2019) A simple recurrent unit model based intrusion detection system with dcgan. IEEE Access 7:83286\u201383296","journal-title":"IEEE Access"},{"key":"171_CR44","doi-asserted-by":"crossref","unstructured":"Yang Z, Liu X, Li T, Wu D, Wang J, Zhao Y, Han H (2022) A systematic literature review of methods and datasets for anomaly-based network intrusion detection. Comput Sec 
116:102675","DOI":"10.1016\/j.cose.2022.102675"}],"container-title":["Cybersecurity"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-023-00171-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s42400-023-00171-y\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s42400-023-00171-y.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,8,4]],"date-time":"2023-08-04T02:04:51Z","timestamp":1691114691000},"score":1,"resource":{"primary":{"URL":"https:\/\/cybersecurity.springeropen.com\/articles\/10.1186\/s42400-023-00171-y"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,8,4]]},"references-count":44,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2023,12]]}},"alternative-id":["171"],"URL":"https:\/\/doi.org\/10.1186\/s42400-023-00171-y","relation":{},"ISSN":["2523-3246"],"issn-type":[{"value":"2523-3246","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,8,4]]},"assertion":[{"value":"15 March 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"14 June 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 August 2023","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no competing interests","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"35"}}