{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,2]],"date-time":"2025-11-02T07:34:33Z","timestamp":1762068873990,"version":"build-2065373602"},"reference-count":51,"publisher":"MDPI AG","issue":"10","license":[{"start":{"date-parts":[[2022,9,27]],"date-time":"2022-09-27T00:00:00Z","timestamp":1664236800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Key R&amp;D Program of China","award":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"],"award-info":[{"award-number":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"]}]},{"name":"Natural Science Foundation of Jiangsu Province (Higher Education Institutions)","award":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"],"award-info":[{"award-number":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"]}]},{"name":"Innovative and Entrepreneurial talents projects of Jiangsu Province","award":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"],"award-info":[{"award-number":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"]}]},{"name":"Jiangsu Planned Projects for Postdoctoral Research Funds","award":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"],"award-info":[{"award-number":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"]}]},{"name":"Six talent peak projects in Jiangsu Province","award":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"],"award-info":[{"award-number":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"]}]},{"name":"Postgraduate Research &amp; Practice Innovation Program of Jiangsu Province","award":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"],"award-info":[{"award-number":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"]}]},{"name":"Open Research Project of Zhejiang Lab","award":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"],"award-info":[{"award-number":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"]}]},{"name":"NUPT DingShan Scholar Project and 
NUPTSF","award":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"],"award-info":[{"award-number":["2018AAA0103300","2018AAA0103302","BK20170900","19KJB520046","20KJA520001","2019K024","JY02","KYCX19_0921","KYCX19_0906","2021KF0AB05","NY219132"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Entropy"],"abstract":"<jats:p>Much research on adversarial attacks has proved that deep neural networks have certain security vulnerabilities. Among potential attacks, black-box adversarial attacks are considered the most realistic because of the inherently hidden nature of deep neural networks. Such attacks have become a critical focus of current security research. However, current black-box attack methods still have shortcomings, resulting in incomplete utilization of query information. Our research, based on the newly proposed Simulator Attack, proves for the first time the correctness and usability of the feature-layer information of a simulator model obtained by meta-learning. Then, we propose an optimized Simulator Attack+ based on this discovery. The optimization methods used in Simulator Attack+ include: (1) a feature attentional boosting module that uses the feature-layer information of the simulator to enhance the attack and accelerate the generation of adversarial examples; (2) a linear self-adaptive simulator-predict interval mechanism that allows the simulator model to be fully fine-tuned in the early stage of the attack and dynamically adjusts the interval for querying the black-box model; and (3) an unsupervised clustering module that provides a warm start for targeted attacks. Results from experiments on the CIFAR-10 and CIFAR-100 datasets clearly show that Simulator Attack+ further reduces the number of queries consumed, improving query efficiency while maintaining attack performance.<\/jats:p>","DOI":"10.3390\/e24101377","type":"journal-article","created":{"date-parts":[[2022,9,27]],"date-time":"2022-09-27T23:12:12Z","timestamp":1664320332000},"page":"1377","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["An Optimized Black-Box Adversarial Simulator Attack Based on Meta-Learning"],"prefix":"10.3390","volume":"24","author":[{"given":"Zhiyu","family":"Chen","sequence":"first","affiliation":[{"name":"School of Internet of Things, Nanjing University of Posts and Telecommunication, Nanjing 210023, China"}]},{"given":"Jianyu","family":"Ding","sequence":"additional","affiliation":[{"name":"School of Computer, Software and Cyberspace Security, Nanjing University of Posts and Telecommunication, Nanjing 210023, China"}]},{"given":"Fei","family":"Wu","sequence":"additional","affiliation":[{"name":"School of Computer, Software and Cyberspace Security, Nanjing University of Posts and Telecommunication, Nanjing 210023, China"}]},{"given":"Chi","family":"Zhang","sequence":"additional","affiliation":[{"name":"School of Computer, Software and Cyberspace Security, Nanjing University of Posts and Telecommunication, Nanjing 210023, China"}]},{"given":"Yiming","family":"Sun","sequence":"additional","affiliation":[{"name":"School of Computer, Software and Cyberspace Security, Nanjing University of Posts and Telecommunication, Nanjing 210023, 
China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7293-9709","authenticated-orcid":false,"given":"Jing","family":"Sun","sequence":"additional","affiliation":[{"name":"School of Internet of Things, Nanjing University of Posts and Telecommunication, Nanjing 210023, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8511-7544","authenticated-orcid":false,"given":"Shangdong","family":"Liu","sequence":"additional","affiliation":[{"name":"School of Computer, Software and Cyberspace Security, Nanjing University of Posts and Telecommunication, Nanjing 210023, China"}]},{"given":"Yimu","family":"Ji","sequence":"additional","affiliation":[{"name":"School of Computer, Software and Cyberspace Security, Nanjing University of Posts and Telecommunication, Nanjing 210023, China"}]}],"member":"1968","published-online":{"date-parts":[[2022,9,27]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Biggio, B., Corona, I., Maiorca, D., and Nelson, B. (2013). Evasion attacks against machine learning at test time. Machine Learning and Knowledge Discovery in Databases, Springer.","DOI":"10.1007\/978-3-642-40994-3_25"},{"key":"ref_2","unstructured":"Goodfellow, I.J., Shlens, J., and Szegedy, C. (2014). Explaining and Harnessing Adversarial Examples. arXiv."},{"key":"ref_3","unstructured":"Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., and Fergus, R. (2014). Intriguing properties of neural networks. arXiv."},{"key":"ref_4","first-page":"9","article-title":"Towards deep learning models resistant to adversarial attacks","volume":"1050","author":"Madry","year":"2017","journal-title":"Stat"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Carlini, N., and Wagner, D. (2017, January 22\u201326). Towards evaluating the robustness of neural networks. Proceedings of the 2017 IEEE Symposium on Security and Privacy, San Jose, CA, USA.","DOI":"10.1109\/SP.2017.49"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Chen, P.Y., Zhang, H., Sharma, Y., Yi, J., and Hsieh, C.J. (2017, January 3). Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, Dallas, TX, USA.","DOI":"10.1145\/3128572.3140448"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Tu, C.C., Ting, P., Chen, P.Y., Liu, S., and Cheng, S.M. (2019, January 8\u201312). AutoZOOM: Autoencoder-Based Zeroth Order Optimization Method for Attacking Black-Box Neural Networks. Proceedings of the AAAI Conference on Artificial Intelligence, Atlanta, Georgia.","DOI":"10.1609\/aaai.v33i01.3301742"},{"key":"ref_8","unstructured":"Ilyas, A., Engstrom, L., Athalye, A., and Lin, J. (2018, January 10\u201315). Black-box adversarial attacks with limited queries and information. Proceedings of the International Conference on Machine Learning, Stockholm, Sweden."},{"key":"ref_9","unstructured":"Ilyas, A., Engstrom, L., and Madry, A. (2018). Prior convictions: Black-box adversarial attacks with bandits and priors. arXiv."},{"key":"ref_10","unstructured":"Liu, Y., Chen, X., Liu, C., and Song, D. (2016). Delving into Transferable Adversarial Examples and Black-box Attacks. arXiv."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Oh, S.J., Schiele, B., and Fritz, M. (2019). 
Towards Reverse-Engineering Black-Box Neural Networks, Springer.","DOI":"10.1007\/978-3-030-28954-6_7"},{"key":"ref_12","unstructured":"Demontis, A., Melis, M., Pintor, M., Jagielski, M., Biggio, B., Oprea, A., Nita-Rotaru, C., and Roli, F. (2019, January 14\u201316). Why do adversarial attacks transfer? explaining transferability of evasion and poisoning attacks. Proceedings of the 28th USENIX Security Symposium (USENIX Security 19), Santa Clara, CA, USA."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Huang, Q., Katsman, I., He, H., Gu, Z., Belongie, S., and Lim, S.N. (2019, January 27\u201328). Enhancing adversarial example transferability with an intermediate level attack. Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, Korea.","DOI":"10.1109\/ICCV.2019.00483"},{"key":"ref_14","first-page":"2188","article-title":"Universal Adversarial Attack on Attention and the Resulting Dataset DAmageNet","volume":"44","author":"Chen","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Orekondy, T., Schiele, B., and Fritz, M. (2019, January 15\u201320). Knockoff nets: Stealing functionality of black-box models. Proceedings of the Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00509"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z.B., and Swami, A. (2017, January 2\u20136). Practical black-box attacks against machine learning. Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, Abu Dhabi, United Arab Emirates.","DOI":"10.1145\/3052973.3053009"},{"key":"ref_17","unstructured":"Tramr, F., Zhang, F., Juels, A., Reiter, M.K., and Ristenpart, T. (2016, January 10\u201312). Stealing machine learning models via prediction APIs. Proceedings of the 25th USENIX security symposium (USENIX Security 16), Austin, TX, USA."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Lee, T., Edwards, B., Molloy, I., and Su, D. (2019, January 19\u201323). Defending against neural network model stealing attacks using deceptive perturbations. Proceedings of the 2019 IEEE Security and Privacy Workshops, San Francisco, CA, USA.","DOI":"10.1109\/SPW.2019.00020"},{"key":"ref_19","unstructured":"Orekondy, T., Schiele, B., and Fritz, M. (2019). Prediction Poisoning: Towards Defenses Against DNN Model Stealing Attacks. arXiv."},{"key":"ref_20","first-page":"1","article-title":"Universal Adversarial Examples in Remote Sensing: Methodology and Benchmark","volume":"60","author":"Xu","year":"2022","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Ma, C., Chen, L., and Yong, J.H. (2021, January 20\u201325). Simulating unknown target models for query-efficient black-box attacks. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.01166"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Zhou, B., Cui, Q., Wei, X.S., and Chen, Z.M. (2020, January 14\u201319). Bbn: Bilateral-branch network with cumulative learning for long-tailed visual recognition. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00974"},{"key":"ref_23","unstructured":"Du, J., Zhang, H., Zhou, J.T., Yang, Y., and Feng, J. (2020). 
Query-efficient Meta Attack to Deep Neural Networks. arXiv."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"527","DOI":"10.1007\/s10208-015-9296-2","article-title":"Random gradient-free minimization of convex functions","volume":"17","author":"Nesterov","year":"2017","journal-title":"Found. Comput. Math."},{"key":"ref_25","first-page":"10934","article-title":"Improving black-box adversarial attacks with a transfer-based prior","volume":"32","author":"Cheng","year":"2019","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_26","unstructured":"Krizhevsky, A., and Hinton, G. (2009). Learning Multiple Layers of Features from Tiny Images, University of Toronto."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Bhagoji, A.N., He, W., Li, B., and Song, D. (2018, January 8\u201314). Practical black-box attacks on deep neural networks using efficient query mechanisms. Proceedings of the European Conference on Computer Vision, Munich, Germany.","DOI":"10.1007\/978-3-030-01258-8_10"},{"key":"ref_28","unstructured":"Cheng, M., Le, T., Chen, P.Y., Zhang, H., Yi, J., and Hsieh, C.J. (2019). Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach. arXiv."},{"key":"ref_29","unstructured":"Brendel, W., Rauber, J., and Bethge, M. (2018). Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models. arXiv."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Wang, B., and Gong, N.Z. (2018, January 20\u201324). Stealing hyperparameters in machine learning. Proceedings of the 2018 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA.","DOI":"10.1109\/SP.2018.00038"},{"key":"ref_31","unstructured":"Ma, C., Cheng, S., Chen, L., Zhu, J., and Yong, J. (2020). Switching Transferable Gradient Directions for Query-Efficient Black-Box Adversarial Attacks. arXiv."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Pengcheng, L., Yi, J., and Zhang, L. (2018, January 17\u201320). Query-efficient black-box attack by active learning. Proceedings of the 2018 IEEE International Conference on Data Mining (ICDM), Singapore.","DOI":"10.1109\/ICDM.2018.00159"},{"key":"ref_33","unstructured":"Papernot, N., McDaniel, P., and Goodfellow, I. (2016). Transferability in machine learning: From phenomena to black-box attacks using adversarial samples. arXiv."},{"key":"ref_34","unstructured":"Guo, C., Gardner, J., You, Y., Wilson, A.G., and Weinberger, K. (2019, January 9\u201315). Simple black-box adversarial attacks. Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Andriushchenko, M., Croce, F., Flammarion, N., and Hein, M. (2020). Square attack: A query-efficient black-box adversarial attack via random search. Computer Vision\u2014ECCV 2020, Springer.","DOI":"10.1007\/978-3-030-58592-1_29"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Chen, J., Jordan, M.I., and Wainwright, M.J. (2020, January 18\u201321). Hopskipjumpattack: A query-efficient decision-based attack. Proceedings of the 2020 IEEE Symposium on Security and Privacy (SP), San Francisco, CA, USA.","DOI":"10.1109\/SP40000.2020.00045"},{"key":"ref_37","first-page":"12288","article-title":"Learning black-box attackers with transferable priors and query feedback","volume":"33","author":"Yang","year":"2020","journal-title":"Adv. Neural Inf. Process. 
Syst."},{"key":"ref_38","first-page":"20791","article-title":"Perturbing across the feature hierarchy to improve standard and strict blackbox attack transferability","volume":"33","author":"Inkawhich","year":"2020","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Dong, Y., Su, H., Wu, B., Li, Z., Liu, W., Zhang, T., and Zhu, J. (2019, January 15\u201320). Efficient decision-based black-box adversarial attacks on face recognition. Proceedings of the Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00790"},{"key":"ref_40","unstructured":"Wu, L., Zhu, Z., and Tai, C. (2018). Understanding and enhancing the transferability of adversarial examples. arXiv."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Milli, S., Schmidt, L., Dragan, A.D., and Hardt, M. (2019, January 29\u201331). Model reconstruction from model explanations. Proceedings of the Conference on Fairness, Accountability, and Transparency, Atlanta, GA, USA.","DOI":"10.1145\/3287560.3287562"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Ma, C., Zhao, C., Shi, H., Chen, L., Yong, J., and Zeng, D. (2019, January 21\u201325). Metaadvdet: Towards robust detection of evolving adversarial attacks. Proceedings of the 27th ACM International Conference on Multimedia, Nice, France.","DOI":"10.1145\/3343031.3350887"},{"key":"ref_43","first-page":"3825","article-title":"Subspace attack: Exploiting promising subspaces for query-efficient black-box attacks","volume":"32","author":"Guo","year":"2019","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Han, D., Kim, J., and Kim, J. (2017, January 21\u201326). Deep pyramidal residual networks. Proceedings of the Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.668"},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"186126","DOI":"10.1109\/ACCESS.2019.2960566","article-title":"Shakedrop regularization for deep residual learning","volume":"7","author":"Yamada","year":"2019","journal-title":"IEEE Access"},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Cubuk, E.D., Zoph, B., Mane, D., Vasudevan, V., and Le, Q.V. (2019, January 15\u201320). Autoaugment: Learning augmentation strategies from data. Proceedings of the Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00020"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Dong, X., and Yang, Y. (2019, January 15\u201320). Searching for a robust neural architecture in four gpu hours. Proceedings of the Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00186"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Zagoruyko, S., and Komodakis, N. (2016). Wide Residual Networks. arXiv.","DOI":"10.5244\/C.30.87"},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Jia, X., Wei, X., Cao, X., and Foroosh, H. (2019, January 15\u201320). Comdefend: An efficient image compression model to defend adversarial examples. Proceedings of the of Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00624"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Mustafa, A., Khan, S., Hayat, M., Goecke, R., Shen, J., and Shao, L. (2019, January 15\u201320). 
Adversarial defense by restricting the hidden space of deep neural networks. Proceedings of the Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/ICCV.2019.00348"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Liu, Z., Liu, Q., Liu, T., Xu, N., Lin, X., Wang, Y., and Wen, W. (2019, January 15\u201320). Feature distillation: Dnn-oriented jpeg compression against adversarial examples. Proceedings of the 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00095"}],"container-title":["Entropy"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1099-4300\/24\/10\/1377\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:40:48Z","timestamp":1760143248000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1099-4300\/24\/10\/1377"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,27]]},"references-count":51,"journal-issue":{"issue":"10","published-online":{"date-parts":[[2022,10]]}},"alternative-id":["e24101377"],"URL":"https:\/\/doi.org\/10.3390\/e24101377","relation":{},"ISSN":["1099-4300"],"issn-type":[{"type":"electronic","value":"1099-4300"}],"subject":[],"published":{"date-parts":[[2022,9,27]]}}}
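
For context, the record above is a standard Crossref REST API work response (a "status"/"message-type"/"message" envelope wrapping the article metadata). The short sketch below shows one plausible way such a record could be retrieved and inspected; it is not part of the record itself. It assumes network access and the public api.crossref.org works endpoint, and the field names it reads ("title", "container-title", "reference-count", "reference") are taken from the record above.

```python
# Minimal sketch: fetch the Crossref work record shown above and read a few fields.
# Assumptions: network access and the public Crossref REST API at
# https://api.crossref.org/works/{DOI}; stdlib only (urllib, json).
import json
import urllib.request

DOI = "10.3390/e24101377"  # DOI of the work in the record above


def fetch_crossref_work(doi: str) -> dict:
    """Return the 'message' payload of a Crossref work record."""
    url = f"https://api.crossref.org/works/{doi}"
    with urllib.request.urlopen(url) as resp:
        payload = json.load(resp)
    # The envelope mirrors the record above: status / message-type / message.
    if payload.get("status") != "ok":
        raise RuntimeError(f"unexpected Crossref status: {payload.get('status')}")
    return payload["message"]


if __name__ == "__main__":
    work = fetch_crossref_work(DOI)
    print(work["title"][0])               # article title
    print(work["container-title"][0])     # journal ("Entropy")
    print(work["reference-count"])        # declared reference count (51)
    print(len(work.get("reference", [])))  # deposited reference entries
```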