{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,30]],"date-time":"2025-10-30T07:13:48Z","timestamp":1761808428263,"version":"3.37.3"},"reference-count":56,"publisher":"Springer Science and Business Media LLC","issue":"4","license":[{"start":{"date-parts":[[2021,2,13]],"date-time":"2021-02-13T00:00:00Z","timestamp":1613174400000},"content-version":"tdm","delay-in-days":0,"URL":"http:\/\/www.springer.com\/tdm"},{"start":{"date-parts":[[2021,2,13]],"date-time":"2021-02-13T00:00:00Z","timestamp":1613174400000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/www.springer.com\/tdm"}],"funder":[{"name":"National Social Science Fund of China","award":["18BGJ071"],"award-info":[{"award-number":["18BGJ071"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Wireless Pers Commun"],"published-print":{"date-parts":[[2021,4]]},"DOI":"10.1007\/s11277-021-08284-8","type":"journal-article","created":{"date-parts":[[2021,2,14]],"date-time":"2021-02-14T13:43:26Z","timestamp":1613310206000},"page":"3505-3525","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":15,"title":["Security Threats and Defensive Approaches in Machine Learning System Under Big Data Environment"],"prefix":"10.1007","volume":"117","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8159-4984","authenticated-orcid":false,"given":"Chen","family":"Hongsong","sequence":"first","affiliation":[]},{"given":"Zhang","family":"Yongpeng","sequence":"additional","affiliation":[]},{"given":"Cao","family":"Yongrui","sequence":"additional","affiliation":[]},{"given":"Bharat","family":"Bhargava","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2021,2,13]]},"reference":[{"key":"8284_CR1","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. 
B., Swami, A. (2016). The limitations of deep learning in adversarial settings. In 2016 IEEE European symposium on security and privacy (EuroS&P) (pp. 372\u2013387). IEEE.","DOI":"10.1109\/EuroSP.2016.36"},{"key":"8284_CR2","doi-asserted-by":"crossref","unstructured":"Kos, J., Fischer, I., Song, D. (2018). Adversarial examples for generative models. In 2018 IEEE security and privacy workshops (spw) (pp. 36\u201342). IEEE.","DOI":"10.1109\/SPW.2018.00014"},{"key":"8284_CR3","doi-asserted-by":"crossref","unstructured":"Nguyen, A., Yosinski, J., Clune, J. (2015). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 427\u2013436).","DOI":"10.1109\/CVPR.2015.7298640"},{"issue":"5","key":"8284_CR4","doi-asserted-by":"publisher","first-page":"61","DOI":"10.1109\/MNET.001.1800505","volume":"33","author":"B Chen","year":"2019","unstructured":"Chen, B., Wan, J., Lan, Y., Imran, M., Li, D., & Guizani, N. (2019). Improving cognitive ability of edge intelligent IIoT through machine learning. IEEE Network, 33(5), 61\u201367.","journal-title":"IEEE Network"},{"key":"8284_CR5","doi-asserted-by":"crossref","unstructured":"Shailaja, K., Seetharamulu, B., Jabbar, M. A. (2018). Machine learning in healthcare: A review. In 2018 second international conference on electronics, communication and aerospace technology (ICECA) (pp. 910\u2013914). IEEE.","DOI":"10.1109\/ICECA.2018.8474918"},{"issue":"6","key":"8284_CR6","doi-asserted-by":"publisher","first-page":"1893","DOI":"10.1109\/JBHI.2014.2344095","volume":"19","author":"M Mozaffari-Kermani","year":"2014","unstructured":"Mozaffari-Kermani, M., Sur-Kolay, S., Raghunathan, A., & Jha, N. K. (2014). Systematic poisoning attacks on and defenses for machine learning in healthcare. 
IEEE journal of biomedical and health informatics, 19(6), 1893\u20131905.","journal-title":"IEEE journal of biomedical and health informatics"},{"key":"8284_CR7","unstructured":"Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., & Wierstra, D. (2015). Continuous control with deep reinforcement learning. arXiv preprint arXiv:1509.02971."},{"key":"8284_CR8","doi-asserted-by":"crossref","unstructured":"Okuyama, T., Gonsalves, T., & Upadhay, J. (2018, March). Autonomous driving system based on deep Q learning. In 2018 International conference on intelligent autonomous systems (ICoIAS) (pp. 201\u2013205). IEEE.","DOI":"10.1109\/ICoIAS.2018.8494053"},{"key":"8284_CR9","doi-asserted-by":"publisher","first-page":"1694","DOI":"10.1007\/s10922-020-09554-9","volume":"28","author":"X Pei","year":"2020","unstructured":"Pei, X., Tian, S., Yu, L., et al. (2020). A two-stream network based on capsule networks and sliced recurrent neural networks for DGA botnet detection. Journal of Network and Systems Management, 28, 1694\u20131721.","journal-title":"Journal of Network and Systems Management"},{"key":"8284_CR10","doi-asserted-by":"crossref","unstructured":"Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., & Song, D. (2018). Robust physical-world attacks on deep learning visual classification. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1625\u20131634).","DOI":"10.1109\/CVPR.2018.00175"},{"key":"8284_CR11","unstructured":"Gu, T., Dolan-Gavitt, B., & Garg, S. (2017). Badnets: Identifying vulnerabilities in the machine learning model supply chain. arXiv preprint arXiv:1708.06733."},{"key":"8284_CR12","doi-asserted-by":"crossref","unstructured":"Carlini, N., & Wagner, D. (2017, May). Towards evaluating the robustness of neural networks. In 2017 IEEE symposium on security and privacy (sp) (pp. 39\u201357). 
IEEE.","DOI":"10.1109\/SP.2017.49"},{"key":"8284_CR13","unstructured":"Athalye, A., Carlini, N., & Wagner, D. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420."},{"key":"8284_CR14","doi-asserted-by":"publisher","first-page":"317","DOI":"10.1016\/j.patcog.2018.07.023","volume":"84","author":"B Biggio","year":"2018","unstructured":"Biggio, B., & Roli, F. (2018). Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84, 317\u2013331.","journal-title":"Pattern Recognition"},{"key":"8284_CR15","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P., Sinha, A., & Wellman, M. P. (2018). SoK: Security and privacy in machine learning. In 2018 IEEE European Symposium on Security and Privacy (EuroS&P) (pp. 399\u2013414). IEEE.","DOI":"10.1109\/EuroSP.2018.00035"},{"key":"8284_CR16","doi-asserted-by":"publisher","first-page":"12103","DOI":"10.1109\/ACCESS.2018.2805680","volume":"6","author":"Q Liu","year":"2018","unstructured":"Liu, Q., Li, P., Zhao, W., Cai, W., Yu, S., & Leung, V. C. (2018). A survey on security threats and defensive techniques of machine learning: A data driven view. IEEE Access, 6, 12103\u201312117.","journal-title":"IEEE Access"},{"key":"8284_CR17","doi-asserted-by":"publisher","first-page":"14410","DOI":"10.1109\/ACCESS.2018.2807385","volume":"6","author":"N Akhtar","year":"2018","unstructured":"Akhtar, N., & Mian, A. (2018). Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6, 14410\u201314430.","journal-title":"IEEE Access"},{"key":"8284_CR18","doi-asserted-by":"crossref","unstructured":"Idris, N., & Ahmad, K. (2011). Managing Data Source quality for data warehouse in manufacturing services. In Proceedings of the 2011 IEEE International conference on electrical engineering and informatics (pp. 
1\u20136).","DOI":"10.1109\/ICEEI.2011.6021598"},{"key":"8284_CR19","doi-asserted-by":"crossref","unstructured":"Xiao, Q., Li, K., Zhang, D., & Xu, W. (2018). Security risks in deep learning implementations. In 2018 IEEE Security and privacy workshops (SPW) (pp. 123\u2013128)","DOI":"10.1109\/SPW.2018.00027"},{"key":"8284_CR20","doi-asserted-by":"crossref","unstructured":"Ji, Y., Zhang, X., Ji, S., Luo, X., & Wang, T. (2018). Model-reuse attacks on deep learning systems. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security (pp. 349\u2013363).","DOI":"10.1145\/3243734.3243757"},{"key":"8284_CR21","unstructured":"Tram\u00e8r, F., Zhang, F., Juels, A., Reiter, M. K., & Ristenpart, T. (2016). Stealing machine learning models via prediction apis. In 25th {USENIX} security symposium ({USENIX} security 16) (pp. 601\u2013618)."},{"key":"8284_CR22","doi-asserted-by":"crossref","unstructured":"Shi, Y., Sagduyu, Y., & Grushin, A. (2017). How to steal a machine learning classifier with deep learning. In 2017 IEEE International symposium on technologies for homeland security (HST) (pp. 1\u20135)","DOI":"10.1109\/THS.2017.7943475"},{"key":"8284_CR23","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., & Swami, A. (2017). Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security (pp. 506\u2013519).","DOI":"10.1145\/3052973.3053009"},{"key":"8284_CR24","doi-asserted-by":"crossref","unstructured":"Chen, P. Y., Zhang, H., Sharma, Y., Yi, J., & Hsieh, C. J. (2017). Zoo: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models. In Proceedings of the 10th ACM workshop on artificial intelligence and security (pp. 15\u201326).","DOI":"10.1145\/3128572.3140448"},{"key":"8284_CR25","doi-asserted-by":"crossref","unstructured":"Shi, Y., Wang, S., & Han, Y. 
(2019). Curls & whey: Boosting black-box adversarial attacks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 6519\u20136527).","DOI":"10.1109\/CVPR.2019.00668"},{"key":"8284_CR26","doi-asserted-by":"crossref","unstructured":"Dong, Y., Liao, F., Pang, T., Su, H., Zhu, J., Hu, X., & Li, J. (2018). Boosting adversarial attacks with momentum. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 9185\u20139193).","DOI":"10.1109\/CVPR.2018.00957"},{"issue":"5","key":"8284_CR27","doi-asserted-by":"publisher","first-page":"828","DOI":"10.1109\/TEVC.2019.2890858","volume":"23","author":"J Su","year":"2019","unstructured":"Su, J., Vargas, D. V., & Sakurai, K. (2019). One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation, 23(5), 828\u2013841.","journal-title":"IEEE Transactions on Evolutionary Computation"},{"key":"8284_CR28","doi-asserted-by":"crossref","unstructured":"Chen, J., Jordan, M. I., & Wainwright, M. J. (2020). Hopskipjumpattack: A query-efficient decision-based attack. In 2020 IEEE symposium on security and privacy (sp) (pp. 1277\u20131294).","DOI":"10.1109\/SP40000.2020.00045"},{"key":"8284_CR29","unstructured":"Brendel, W., Rauber, J., & Bethge, M. (2017). Decision-based adversarial attacks: Reliable attacks against black-box machine learning models. arXiv preprint arXiv:1712.04248."},{"key":"8284_CR30","unstructured":"Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083."},{"key":"8284_CR31","unstructured":"DroneDeploy. Introducing map engine [EB\/OL]. https:\/\/blog.dronedeploy.com\/introducing-map-engine-cd3ef93bc730?gi=8762541ecbbc, 2018-8-17."},{"key":"8284_CR32","unstructured":"Brown, T. B., Man\u00e9, D., Roy, A., Abadi, M., & Gilmer, J. (2017). Adversarial patch. 
arXiv preprint arXiv:1712.09665."},{"key":"8284_CR33","unstructured":"Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199."},{"key":"8284_CR34","unstructured":"Kurakin, A., Goodfellow, I., & Bengio, S. (2016). Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236."},{"key":"8284_CR35","doi-asserted-by":"crossref","unstructured":"Moosavi-Dezfooli, S. M., Fawzi, A., & Frossard, P. (2016). Deepfool: A simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2574\u20132582).","DOI":"10.1109\/CVPR.2016.282"},{"key":"8284_CR36","doi-asserted-by":"crossref","unstructured":"Moosavi-Dezfooli, S. M., Fawzi, A., Fawzi, O., & Frossard, P. (2017). Universal adversarial perturbations. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1765\u20131773).","DOI":"10.1109\/CVPR.2017.17"},{"key":"8284_CR37","unstructured":"Xiaoqx. Out of bound write cause segmentfault [EB\/OL]. https:\/\/github.com\/opencv\/opencv\/issues\/9443, 2017-08-23."},{"key":"8284_CR38","unstructured":"Common Vulnerabilities and Exposures. Google TensorFlow 1.7 and below is affected by: Buffer overflow. [EB\/OL]. https:\/\/cve.mitre.org\/cgi-bin\/cvename.cgi?name=CVE-2018-8825, 2018-3-20."},{"key":"8284_CR39","doi-asserted-by":"crossref","unstructured":"Dalvi, N., Domingos, P., Sanghai, S., & Verma, D. (2004, August). Adversarial classification. In Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 99\u2013108).","DOI":"10.1145\/1014052.1014066"},{"issue":"10","key":"8284_CR40","doi-asserted-by":"publisher","first-page":"1436","DOI":"10.1016\/j.patrec.2011.03.022","volume":"32","author":"B Biggio","year":"2011","unstructured":"Biggio, B., Fumera, G., Pillai, I., & Roli, F. (2011). 
A survey and experimental evaluation of image spam filtering techniques. Pattern recognition letters, 32(10), 1436\u20131446.","journal-title":"Pattern recognition letters"},{"key":"8284_CR41","unstructured":"Biggio, B., Corona, I., Nelson, B., Rubinstein, B. I., Maiorca, D., Fumera, G., & Roli, F. (2014). Security evaluation of support vector machines in adversarial environments. In Support Vector Machines Applications (pp. 105\u2013153). Cham: Springer."},{"key":"8284_CR42","unstructured":"Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572."},{"key":"8284_CR43","doi-asserted-by":"crossref","unstructured":"Papernot, N., McDaniel, P., Wu, X., Jha, S., & Swami, A. (2016). Distillation as a defense to adversarial perturbations against deep neural networks. In 2016 IEEE Symposium on Security and Privacy (SP) (pp. 582\u2013597).","DOI":"10.1109\/SP.2016.41"},{"key":"8284_CR44","doi-asserted-by":"crossref","unstructured":"Sharif, M., Bhagavatula, S., Bauer, L., & Reiter, M. K. (2016). Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 acm sigsac conference on computer and communications security (pp. 1528\u20131540).","DOI":"10.1145\/2976749.2978392"},{"key":"8284_CR45","unstructured":"Komkov, S., & Petiushko, A. (2019). Advhat: Real-world adversarial attack on arcface face id system. arXiv preprint arXiv:1908.08705."},{"issue":"1","key":"8284_CR46","doi-asserted-by":"publisher","first-page":"653","DOI":"10.1109\/JSYST.2019.2906120","volume":"14","author":"H Li","year":"2019","unstructured":"Li, H., Zhou, S., Yuan, W., Li, J., & Leung, H. (2019). Adversarial-example attacks toward android malware detection system. IEEE Systems Journal, 14(1), 653\u2013656.","journal-title":"IEEE Systems Journal"},{"key":"8284_CR47","doi-asserted-by":"crossref","unstructured":"Ayub, M. A., Johnson, W. A., Talbert, D. A., & Siraj, A. (2020). 
Model evasion attack on intrusion detection systems using adversarial machine learning. In 2020 IEEE 54th annual conference on information sciences and systems (CISS) (pp. 1\u20136)","DOI":"10.1109\/CISS48834.2020.1570617116"},{"key":"8284_CR48","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770\u2013778).","DOI":"10.1109\/CVPR.2016.90"},{"key":"8284_CR49","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L. J., Li, K., & Fei-Fei, L. (2009). Imagenet: A large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition (pp. 248\u2013255).","DOI":"10.1109\/CVPR.2009.5206848"},{"issue":"11","key":"8284_CR50","doi-asserted-by":"publisher","first-page":"2278","DOI":"10.1109\/5.726791","volume":"86","author":"Y LeCun","year":"1998","unstructured":"LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11), 2278\u20132324.","journal-title":"Proceedings of the IEEE"},{"key":"8284_CR51","unstructured":"Samangouei, P., Kabkab, M., & Chellappa, R. (2018). Defense-gan: Protecting classifiers against adversarial attacks using generative models. arXiv preprint arXiv:1805.06605."},{"issue":"3","key":"8284_CR52","doi-asserted-by":"publisher","first-page":"766","DOI":"10.1109\/TCYB.2015.2415032","volume":"46","author":"F Zhang","year":"2015","unstructured":"Zhang, F., Chan, P. P., Biggio, B., Yeung, D. S., & Roli, F. (2015). Adversarial feature selection against evasion attacks. IEEE Transactions on Cybernetics, 46(3), 766\u2013777.","journal-title":"IEEE Transactions on Cybernetics"},{"key":"8284_CR53","doi-asserted-by":"crossref","unstructured":"Naghibijouybari, H., Neupane, A., Qian, Z., & Abu-Ghazaleh, N. (2018). 
Rendered insecure: GPU side channel attacks are practical. In Proceedings of the 2018 ACM SIGSAC conference on computer and communications security (pp. 2139\u20132153).","DOI":"10.1145\/3243734.3243831"},{"key":"8284_CR54","unstructured":"DroneDeploy. Capture. Analyze. Act. [EB\/OL]. https:\/\/www.dronedeploy.com\/."},{"key":"8284_CR55","unstructured":"DroneDeploy. Live Map [EB\/OL]. https:\/\/www.dronedeploy.com\/product\/live-map\/."},{"key":"8284_CR56","doi-asserted-by":"crossref","unstructured":"Shi, Y., Sagduyu, Y. E., Davaslioglu, K., & Li, J. H. (2018). Active deep learning attacks under strict rate limitations for online API calls. In 2018 IEEE International Symposium on Technologies for Homeland Security (HST) (pp. 1\u20136).","DOI":"10.1109\/THS.2018.8574124"}],"container-title":["Wireless Personal Communications"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1007\/s11277-021-08284-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/article\/10.1007\/s11277-021-08284-8\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1007\/s11277-021-08284-8.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,3,22]],"date-time":"2021-03-22T12:26:15Z","timestamp":1616415975000},"score":1,"resource":{"primary":{"URL":"http:\/\/link.springer.com\/10.1007\/s11277-021-08284-8"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,2,13]]},"references-count":56,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2021,4]]}},"alternative-id":["8284"],"URL":"https:\/\/doi.org\/10.1007\/s11277-021-08284-8","relation":{},"ISSN":["0929-6212","1572-834X"],"issn-type":[{"type":"print","value":"0929-6212"},{"type":"electronic","value":
"1572-834X"}],"subject":[],"published":{"date-parts":[[2021,2,13]]},"assertion":[{"value":"8 February 2021","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 February 2021","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}