{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,20]],"date-time":"2025-11-20T13:08:50Z","timestamp":1763644130820,"version":"3.37.3"},"reference-count":60,"publisher":"Springer Science and Business Media LLC","issue":"2","license":[{"start":{"date-parts":[[2024,1,27]],"date-time":"2024-01-27T00:00:00Z","timestamp":1706313600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"},{"start":{"date-parts":[[2024,1,27]],"date-time":"2024-01-27T00:00:00Z","timestamp":1706313600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"}],"funder":[{"name":"Research and Innovation Centre for Science and Engineering","award":["2022-01-019"],"award-info":[{"award-number":["2022-01-019"]}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["SN COMPUT. SCI."],"DOI":"10.1007\/s42979-023-02556-9","type":"journal-article","created":{"date-parts":[[2024,1,27]],"date-time":"2024-01-27T10:02:23Z","timestamp":1706349743000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["Data Poisoning Attacks and Mitigation Strategies on Federated Support Vector Machines"],"prefix":"10.1007","volume":"5","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0160-4212","authenticated-orcid":false,"given":"Israt Jahan","family":"Mouri","sequence":"first","affiliation":[]},{"given":"Muhammad","family":"Ridowan","sequence":"additional","affiliation":[]},{"given":"Muhammad Abdullah","family":"Adnan","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,1,27]]},"reference":[{"key":"2556_CR1","unstructured":"Anisetti M, Ardagna CA, Balestrucci A, et\u00a0al. On the robustness of ensemble-based machine learning against data poisoning. 
Preprint arXiv:2209.14013; 2022."},{"issue":"6","key":"2556_CR2","doi-asserted-by":"publisher","first-page":"22","DOI":"10.1109\/MIC.2023.3322327","volume":"27","author":"M Anisetti","year":"2023","unstructured":"Anisetti M, Ardagna CA, Bena N, et al. Rethinking certification for trustworthy machine-learning-based applications. IEEE Int Comput. 2023;27(6):22\u20138. https:\/\/doi.org\/10.1109\/MIC.2023.3322327.","journal-title":"IEEE Int Comput"},{"key":"2556_CR3","unstructured":"Bagdasaryan E, Veit A, Hua Y, et\u00a0al. How to backdoor federated learning. In: International conference on artificial intelligence and statistics, PMLR; 2020. p. 2938\u201348."},{"issue":"2","key":"2556_CR4","doi-asserted-by":"publisher","first-page":"121","DOI":"10.1007\/s10994-010-5188-5","volume":"81","author":"M Barreno","year":"2010","unstructured":"Barreno M, Nelson B, Joseph AD, et al. The security of machine learning. Mach Learn. 2010;81(2):121\u201348.","journal-title":"Mach Learn"},{"key":"2556_CR5","unstructured":"Bhagoji AN, Chakraborty S, Mittal P, et\u00a0al. Analyzing federated learning through an adversarial lens. In: International conference on machine learning, PMLR; 2019. p. 634\u201343."},{"key":"2556_CR6","doi-asserted-by":"publisher","first-page":"317","DOI":"10.1016\/j.patcog.2018.07.023","volume":"84","author":"B Biggio","year":"2018","unstructured":"Biggio B, Roli F. Wild patterns: ten years after the rise of adversarial machine learning. Pattern Recogn. 2018;84:317\u201331.","journal-title":"Pattern Recogn"},{"key":"2556_CR7","unstructured":"Biggio B, Nelson B, Laskov P. Support vector machines under adversarial label noise. In: Asian conference on machine learning, PMLR; 2011. p. 97\u2013112."},{"key":"2556_CR8","unstructured":"Biggio B, Nelson B, Laskov P. Poisoning attacks against support vector machines. In: Proceedings of the 29th international conference on international conference on machine learning. Omnipress, Madison, WI, USA, ICML\u201912; 2012. p. 
1467\u201374."},{"key":"2556_CR9","first-page":"30","volume":"2017","author":"P Blanchard","year":"2017","unstructured":"Blanchard P, El Mhamdi EM, Guerraoui R, et al. Machine learning with adversaries: Byzantine tolerant gradient descent. Adv Neural Inf Process Syst. 2017;2017:30.","journal-title":"Adv Neural Inf Process Syst"},{"key":"2556_CR10","doi-asserted-by":"publisher","unstructured":"Bovenzi G, Foggia A, Santella S, et\u00a0al. Data poisoning attacks against autoencoder-based anomaly detection models: a robustness analysis. In: ICC 2022-IEEE international conference on communications; 2022. p. 5427\u201332. https:\/\/doi.org\/10.1109\/ICC45855.2022.9838942.","DOI":"10.1109\/ICC45855.2022.9838942"},{"key":"2556_CR11","doi-asserted-by":"crossref","unstructured":"Cao X, Fang M, Liu J, et\u00a0al. Fltrust: Byzantine-robust federated learning via trust bootstrapping. In: 28th annual network and distributed system security symposium, NDSS 2021, virtually, February 21\u201325, 2021. The Internet Society. https:\/\/www.ndss-symposium.org\/ndss-paper\/fltrust-byzantine-robust-federated-learning-via-trust-bootstrapping\/; 2021.","DOI":"10.14722\/ndss.2021.24434"},{"issue":"1","key":"2556_CR12","doi-asserted-by":"publisher","first-page":"269","DOI":"10.1109\/TWC.2020.3024629","volume":"20","author":"M Chen","year":"2020","unstructured":"Chen M, Yang Z, Saad W, et al. A joint learning and communications framework for federated learning over wireless networks. IEEE Trans Wirel Commun. 2020;20(1):269\u201383.","journal-title":"IEEE Trans Wirel Commun"},{"key":"2556_CR13","doi-asserted-by":"crossref","unstructured":"Dalvi N, Domingos P, Sanghai S, et\u00a0al. Adversarial classification. In: Proceedings of the 10th ACM SIGKDD international conference on knowledge discovery and data mining; 2004. p. 99\u2013108.","DOI":"10.1145\/1014052.1014066"},{"key":"2556_CR14","unstructured":"Demontis A, Melis M, Pintor M, et\u00a0al. Why do adversarial attacks transfer? 
Explaining transferability of evasion and poisoning attacks. In: 28th {USENIX} security symposium ({USENIX} security 19); 2019. p. 321\u201338."},{"key":"2556_CR15","unstructured":"Ding H, Yang F, Huang J. Defending SVMs against poisoning attacks: the hardness and DBSCAN approach. In: de\u00a0Campos C, Maathuis MH, editors, Proceedings of the thirty-seventh conference on uncertainty in artificial intelligence, proceedings of machine learning research, PMLR, vol. 161; 2021. p. 268\u201378. https:\/\/proceedings.mlr.press\/v161\/ding21b.html."},{"key":"2556_CR16","doi-asserted-by":"crossref","unstructured":"Doku R, Rawat DB. Mitigating data poisoning attacks on a federated learning-edge computing network. In: 2021 IEEE 18th annual consumer communications and networking conference (CCNC). IEEE; 2021. p. 1\u20136.","DOI":"10.1109\/CCNC49032.2021.9369581"},{"key":"2556_CR17","unstructured":"Fang M, Cao X, Jia J, et\u00a0al. Local model poisoning attacks to {Byzantine-Robust} federated learning. In: 29th USENIX security symposium (USENIX Security 20); 2020. p. 1605\u201322."},{"key":"2556_CR18","doi-asserted-by":"publisher","first-page":"416","DOI":"10.1007\/978-3-030-61470-6_25","volume-title":"Leveraging applications of formal methods, verification and validation: engineering principles","author":"R Faqeh","year":"2020","unstructured":"Faqeh R, Fetzer C, Hermanns H, et al. Towards dynamic dependable systems through evidence-based continuous certification. In: Margaria T, Steffen B, et al., editors. Leveraging applications of formal methods, verification and validation: engineering principles. Cham: Springer; 2020. p. 416\u201339."},{"key":"2556_CR19","doi-asserted-by":"crossref","unstructured":"Gehr T, Mirman M, Drachsler-Cohen D, et al. Ai2: safety and robustness certification of neural networks with abstract interpretation. In: 2018 IEEE symposium on security and privacy (SP). IEEE; 2018. p. 
3\u201318.","DOI":"10.1109\/SP.2018.00058"},{"key":"2556_CR20","unstructured":"Goldstein M, Dengel A. Histogram-based outlier score (HBOS): a fast unsupervised anomaly detection algorithm. In: KI-2012: poster and demo track; 2012. p. 59\u201363."},{"key":"2556_CR21","doi-asserted-by":"crossref","unstructured":"Hsu RH, Wang YC, Fan CI, et al. A privacy-preserving federated learning system for android malware detection based on edge computing. In: 2020 15th Asia joint conference on information security (AsiaJCIS). IEEE; 2020. p. 128\u201336.","DOI":"10.1109\/AsiaJCIS50894.2020.00031"},{"key":"2556_CR22","doi-asserted-by":"crossref","unstructured":"Huang C, Huang J, Liu X. Cross-silo federated learning: challenges and opportunities. Preprint arXiv:2206.12949 [cs.LG]; 2022.","DOI":"10.1109\/MCOM.005.2300467"},{"key":"2556_CR23","doi-asserted-by":"publisher","unstructured":"Huang Y, Chu L, Zhou Z, et\u00a0al. Personalized cross-silo federated learning on non-iid data. In: Proceedings of the AAAI conference on artificial intelligence, vol. 35(9); 2021. p. 7865\u201373. https:\/\/doi.org\/10.1609\/aaai.v35i9.16960. https:\/\/ojs.aaai.org\/index.php\/AAAI\/article\/view\/16960.","DOI":"10.1609\/aaai.v35i9.16960"},{"key":"2556_CR24","doi-asserted-by":"crossref","unstructured":"Mouri IJ, Ridowan M, Adnan MA. Towards poisoning of federated support vector machines with data poisoning attacks. In: Proceedings of the 13th international conference on cloud computing and services science-CLOSER, INSTICC. SciTePress; 2023. p. 24\u201333.","DOI":"10.5220\/0011825800003488"},{"key":"2556_CR25","doi-asserted-by":"crossref","unstructured":"Jagielski M, Oprea A, Biggio B, et al. Manipulating machine learning: Poisoning attacks and countermeasures for regression learning. In: 2018 IEEE symposium on security and privacy (SP). IEEE; 2018. p. 19\u201335.","DOI":"10.1109\/SP.2018.00057"},{"key":"2556_CR26","doi-asserted-by":"crossref","unstructured":"Kabir T, Adnan MA. 
A scalable algorithm for multi-class support vector machine on geo-distributed datasets. In: 2019 IEEE international conference on big data (big data). IEEE; 2019. p. 637\u201342.","DOI":"10.1109\/BigData47090.2019.9005990"},{"key":"2556_CR27","first-page":"28663","volume":"34","author":"SP Karimireddy","year":"2021","unstructured":"Karimireddy SP, Jaggi M, Kale S, et al. Breaking the centralized barrier for cross-device federated learning. Adv Neural Inf Process Syst. 2021;34:28663\u201376.","journal-title":"Adv Neural Inf Process Syst"},{"key":"2556_CR28","unstructured":"Kone\u010dn\u00fd J, McMahan HB, Ramage D, et\u00a0al. Federated optimization: distributed machine learning for on-device intelligence. CoRR; 2016."},{"key":"2556_CR29","unstructured":"Krizhevsky A, Hinton G, et\u00a0al. Learning multiple layers of features from tiny images; 2009."},{"key":"2556_CR30","unstructured":"Laishram R, Phoha VV. Curie: A method for protecting SVM classifier from poisoning attack. Preprint arXiv:1606.01584; 2016."},{"key":"2556_CR31","unstructured":"LeCun Y. The MNIST database of handwritten digits. http:\/\/yann.lecun.com\/exdb\/mnist\/; 1998."},{"key":"2556_CR32","doi-asserted-by":"crossref","unstructured":"Li Z, Zhao Y, Botta N, et al. COPOD: copula-based outlier detection. In: 2020 IEEE international conference on data mining (ICDM). IEEE; 2020. p. 1118\u201323.","DOI":"10.1109\/ICDM50108.2020.00135"},{"key":"2556_CR33","doi-asserted-by":"publisher","DOI":"10.1111\/exsy.13072","author":"P Manoharan","year":"2022","unstructured":"Manoharan P, Walia R, Iwendi C, et al. SVM-based generative adverserial networks for federated learning and edge computing attack model and outpoising. Expert Syst. 2022. https:\/\/doi.org\/10.1111\/exsy.13072.","journal-title":"Expert Syst."},{"key":"2556_CR34","unstructured":"McMahan B, Moore E, Ramage D, et\u00a0al. Communication-Efficient Learning of Deep Networks from Decentralized Data. 
In: Singh A, Zhu J, editors. Proceedings of the 20th international conference on artificial intelligence and statistics, proceedings of machine learning research, PMLR, vol. 54; 2017. p. 1273\u201382. https:\/\/proceedings.mlr.press\/v54\/mcmahan17a.html."},{"key":"2556_CR35","doi-asserted-by":"crossref","unstructured":"Mei S, Zhu X. Using machine teaching to identify optimal training-set attacks on machine learners. In: Twenty-ninth AAAI conference on artificial intelligence; 2015.","DOI":"10.1609\/aaai.v29i1.9569"},{"key":"2556_CR36","unstructured":"Melis M, Demontis A, Pintor M, et\u00a0al. SECML: a Python library for secure and explainable machine learning. Preprint arXiv:1912.10013; 2019."},{"key":"2556_CR37","doi-asserted-by":"crossref","unstructured":"Mu\u00f1oz-Gonz\u00e1lez L, Biggio B, Demontis A, et\u00a0al. Towards poisoning of deep learning algorithms with back-gradient optimization. In: Proceedings of the 10th ACM workshop on artificial intelligence and security; 2017. p. 27\u201338.","DOI":"10.1145\/3128572.3140451"},{"key":"2556_CR38","doi-asserted-by":"publisher","first-page":"295","DOI":"10.1007\/978-981-19-1018-0_25","volume-title":"Advances in distributed computing and machine learning","author":"DG Nair","year":"2022","unstructured":"Nair DG, Aswartha Narayana CV, Jaideep Reddy K, et al. Exploring SVM for federated machine learning applications. In: Rout RR, Ghosh SK, Jana PK, et al., editors. Advances in distributed computing and machine learning. Singapore: Springer; 2022. p. 295\u2013305."},{"issue":"6","key":"2556_CR39","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3539734","volume":"13","author":"A Navia-V\u00e1zquez","year":"2022","unstructured":"Navia-V\u00e1zquez A, D\u00edaz-Morales R, Fern\u00e1ndez-D\u00edaz M. Budget distributed support vector machine for non-id federated learning scenarios. ACM Trans Intel Syst Technol (TIST). 
2022;13(6):1\u201325.","journal-title":"ACM Trans Intel Syst Technol (TIST)"},{"key":"2556_CR40","unstructured":"Paudice A, Mu\u00f1oz-Gonz\u00e1lez L, Gyorgy A, et\u00a0al. Detection of adversarial training examples in poisoning attacks through anomaly detection. Preprint arXiv:1802.03041; 2018."},{"key":"2556_CR41","doi-asserted-by":"publisher","first-page":"5","DOI":"10.1007\/978-3-030-13453-2_1","volume-title":"ECML PKDD 2018 workshops","author":"A Paudice","year":"2019","unstructured":"Paudice A, Mu\u00f1oz-Gonz\u00e1lez L, Lupu EC, et al. Label sanitization against label flipping poisoning attacks. In: Alzate C, Monreale A, Assem H, et al., editors. ECML PKDD 2018 workshops. Cham: Springer; 2019. p. 5\u201315."},{"key":"2556_CR42","first-page":"2825","volume":"12","author":"F Pedregosa","year":"2011","unstructured":"Pedregosa F, Varoquaux G, Gramfort A, et al. Scikit-learn: machine learning in Python. J Mach Learn Res. 2011;12:2825\u201330.","journal-title":"J Mach Learn Res"},{"key":"2556_CR43","doi-asserted-by":"crossref","unstructured":"Peri N, Gupta N, Huang WR, et\u00a0al. Deep k-nn defense against clean-label data poisoning attacks. In: Bartoli A, Fusiello A, editors., et\u00a0al., Computer vision-ECCV 2020 workshops. Cham: Springer; 2020. p. 55\u201370.","DOI":"10.1007\/978-3-030-66415-2_4"},{"key":"2556_CR44","doi-asserted-by":"publisher","unstructured":"Pitropakis N, Panaousis E, Giannetsos T, et al. A taxonomy and survey of attacks against machine learning. Comput Sci Rev. 2019;34: 100199. https:\/\/doi.org\/10.1016\/j.cosrev.2019.100199. www.sciencedirect.com\/science\/article\/pii\/S1574013718303289.","DOI":"10.1016\/j.cosrev.2019.100199"},{"issue":"97","key":"2556_CR45","first-page":"38","volume":"1","author":"D Prokhorov","year":"2001","unstructured":"Prokhorov D. Ijcnn 2001 neural network competition. Slide Present IJCNN. 
2001;1(97):38.","journal-title":"Slide Present IJCNN"},{"key":"2556_CR46","unstructured":"Radford BJ, Apolonio LM, Trias AJ, et\u00a0al. Network traffic anomaly detection using recurrent neural networks. CoRR arXiv:1803.10769; 2018"},{"key":"2556_CR47","doi-asserted-by":"crossref","unstructured":"Ramaswamy S, Rastogi R, Shim K. Efficient algorithms for mining outliers from large data sets. In: Proceedings of the 2000 ACM SIGMOD international conference on management of data; 2000. p. 427\u201338.","DOI":"10.1145\/342009.335437"},{"key":"2556_CR48","doi-asserted-by":"publisher","unstructured":"Rehman MHu, Dirir AM, Salah K, et\u00a0al. TrustFed: A framework for fair and trustworthy cross-device federated learning in IIoT. IEEE Trans Ind Inform 2021;17(12):8485\u201394. https:\/\/doi.org\/10.1109\/TII.2021.3075706.","DOI":"10.1109\/TII.2021.3075706"},{"key":"2556_CR49","doi-asserted-by":"crossref","unstructured":"Shejwalkar V, Houmansadr A, Kairouz P, et\u00a0al. Back to the drawing board: a critical evaluation of poisoning attacks on production federated learning. In: IEEE symposium on security and privacy; 2022.","DOI":"10.1109\/SP46214.2022.9833647"},{"key":"2556_CR50","unstructured":"Steinhardt J, Koh PW, Liang P. Certified defenses for data poisoning attacks. In: Proceedings of the 31st international conference on neural information processing systems; 2017. p. 3520\u201332."},{"key":"2556_CR51","first-page":"1","volume":"2021","author":"G Sun","year":"2021","unstructured":"Sun G, Cong Y, Dong J, et al. Data poisoning attacks on federated machine learning. IEEE Int Things J. 2021;2021:1.","journal-title":"IEEE Int Things J"},{"key":"2556_CR52","doi-asserted-by":"crossref","unstructured":"Tolpegin V, Truex S, Gursoy ME, et\u00a0al. Data poisoning attacks against federated learning systems. In: European symposium on research in computer security. London: Springer; 2020. p. 
480\u2013501.","DOI":"10.1007\/978-3-030-58951-6_24"},{"key":"2556_CR53","doi-asserted-by":"crossref","unstructured":"Wang S, Chen M, Saad W, et\u00a0al. Federated learning for energy-efficient task computing in wireless networks. In: ICC 2020-2020 IEEE international conference on communications (ICC). IEEE; 2020. p. 1\u20136.","DOI":"10.1109\/ICC40277.2020.9148625"},{"key":"2556_CR54","unstructured":"Xiao H, Biggio B, Brown G, et\u00a0al. Is feature selection secure against training data poisoning? In: International conference on machine learning, PMLR; 2015. p. 1689\u201398."},{"key":"2556_CR55","unstructured":"Xiao H, Rasul K, Vollgraf R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms. Preprint arXiv:1708.07747; 2017."},{"key":"2556_CR56","unstructured":"Yin D, Chen Y, Kannan R, et\u00a0al. Byzantine-robust distributed learning: Towards optimal statistical rates. In: International conference on machine learning, PMLR; 2018. p. 5650\u20139."},{"key":"2556_CR57","doi-asserted-by":"crossref","unstructured":"Zhang R, Zhu Q. A game-theoretic defense against data poisoning attacks in distributed support vector machines. In: 2017 IEEE 56th annual conference on decision and control (CDC). IEEE; 2017. p. 4582\u20137.","DOI":"10.1109\/CDC.2017.8264336"},{"key":"2556_CR58","unstructured":"Zhao Y, Nasrullah Z, Li Z. PyOD: a Python toolbox for scalable outlier detection. J Mach Learn Res. 2019;20(96):1\u20137. http:\/\/jmlr.org\/papers\/v20\/19-011.html."},{"key":"2556_CR59","doi-asserted-by":"crossref","unstructured":"Zhou Y, Kantarcioglu M, Thuraisingham B, et\u00a0al. Adversarial support vector machine learning. In: Proceedings of the 18th ACM SIGKDD international conference on knowledge discovery and data mining; 2012. p. 
1059\u201367.","DOI":"10.1145\/2339530.2339697"},{"key":"2556_CR60","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2022.102922","volume":"123","author":"Y Zhu","year":"2022","unstructured":"Zhu Y, Cui L, Ding Z, et al. Black box attack and network intrusion detection using machine learning for malicious traffic. Comput Secur. 2022;123: 102922.","journal-title":"Comput Secur"}],"container-title":["SN Computer Science"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s42979-023-02556-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s42979-023-02556-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s42979-023-02556-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,1,27]],"date-time":"2024-01-27T10:24:06Z","timestamp":1706351046000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s42979-023-02556-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,27]]},"references-count":60,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2024,2]]}},"alternative-id":["2556"],"URL":"https:\/\/doi.org\/10.1007\/s42979-023-02556-9","relation":{},"ISSN":["2661-8907"],"issn-type":[{"type":"electronic","value":"2661-8907"}],"subject":[],"published":{"date-parts":[[2024,1,27]]},"assertion":[{"value":"17 August 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"11 December 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"27 January 2024","order":3,"name":"first_online","label":"First 
Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"On behalf of all the authors, the corresponding author states that there is no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of Interest"}},{"value":"Our code is open source and available at this URL: .","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Code Availability"}}],"article-number":"241"}}