{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,20]],"date-time":"2026-03-20T15:50:18Z","timestamp":1774021818277,"version":"3.50.1"},"reference-count":103,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2019,8,30]],"date-time":"2019-08-30T00:00:00Z","timestamp":1567123200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Comput. Surv."],"published-print":{"date-parts":[[2020,7,31]]},"abstract":"<jats:p>Malware still constitutes a major threat in the cybersecurity landscape, also due to the widespread use of infection vectors such as documents. These infection vectors hide embedded malicious code to the victim users, facilitating the use of social engineering techniques to infect their machines. Research showed that machine-learning algorithms provide effective detection mechanisms against such threats, but the existence of an arms race in adversarial settings has recently challenged such systems. In this work, we focus on malware embedded in PDF files as a representative case of such an arms race. We start by providing a comprehensive taxonomy of the different approaches used to generate PDF malware and of the corresponding learning-based detection systems. We then categorize threats specifically targeted against learning-based PDF malware detectors using a well-established framework in the field of adversarial machine learning. This framework allows us to categorize known vulnerabilities of learning-based PDF malware detectors and to identify novel attacks that may threaten such systems, along with the potential defense mechanisms that can mitigate the impact of such threats. 
We conclude the article by discussing how such findings highlight promising research directions towards tackling the more general challenge of designing robust malware detectors in adversarial settings.<\/jats:p>","DOI":"10.1145\/3332184","type":"journal-article","created":{"date-parts":[[2019,9,3]],"date-time":"2019-09-03T12:47:00Z","timestamp":1567514820000},"page":"1-36","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":74,"title":["Towards Adversarial Malware Detection"],"prefix":"10.1145","volume":"52","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2640-4663","authenticated-orcid":false,"given":"Davide","family":"Maiorca","sequence":"first","affiliation":[{"name":"University of Cagliari, Cagliari, Italy"}]},{"given":"Battista","family":"Biggio","sequence":"additional","affiliation":[{"name":"University of Cagliari and Pluribus One, Cagliari, Italy"}]},{"given":"Giorgio","family":"Giacinto","sequence":"additional","affiliation":[{"name":"University of Cagliari and Pluribus One, Cagliari, Italy"}]}],"member":"320","published-online":{"date-parts":[[2019,8,30]]},"reference":[{"key":"e_1_2_1_1_1","unstructured":"Adobe. 2006. PDF Reference. Adobe Portable Document Format Version 1.7. https:\/\/www.adobe.com\/content\/dam\/acom\/en\/devnet\/pdf\/pdf_reference_archive\/pdf_reference_1-7.pdf."},{"key":"e_1_2_1_2_1","unstructured":"Adobe. 2007. JavaScript for Acrobat API Reference. https:\/\/www.adobe.com\/content\/dam\/acom\/en\/devnet\/acrobat\/pdfs\/js_api_reference.pdf."},{"key":"e_1_2_1_3_1","unstructured":"Adobe. 2008. Adobe Supplement to ISO 32000. https:\/\/www.adobe.com\/content\/dam\/acom\/en\/devnet\/acrobat\/pdfs\/adobe_supplement_iso32000.pdf."},{"key":"e_1_2_1_4_1","volume-title":"Proceedings of the International Conference on Machine Learning (ICML\u201918)","volume":"80","author":"Athalye Anish","unstructured":"Anish Athalye, Nicholas Carlini, and David A. Wagner. 2018. 
Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In Proceedings of the International Conference on Machine Learning (ICML\u201918) (JMLR Workshop and Conference Proceedings), Vol. 80. JMLR.org, 274--283."},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.5555\/1756006.1859912"},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10994-010-5188-5"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1145\/1128817.1128824"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-40994-3_25"},{"key":"e_1_2_1_9_1","volume-title":"Security Evaluation of Support Vector Machines in Adversarial Environments","author":"Biggio Battista","unstructured":"Battista Biggio, Igino Corona, Blaine Nelson, Benjamin I. P. Rubinstein, Davide Maiorca, Giorgio Fumera, Giorgio Giacinto, and Fabio Roli. 2014. Security Evaluation of Support Vector Machines in Adversarial Environments. In Support Vector Machines Applications, Yunqian Ma and Guodong Guo (Eds.). Springer International Publishing, Cham, 105--153."},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICB.2013.6613006"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1007\/s13042-010-0007-7"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1142\/S0218001414600027"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2013.57"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-34166-3_46"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.5555\/3042573.3042761"},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2018.07.023"},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.5555\/2503308.2503326"},{"key":"e_1_2_1_18_1","volume-title":"Proceedings of the IEEE Symposium on Security and Privacy. 
IEEE Computer Society, 39--57","author":"Carlini Nicholas","unstructured":"Nicholas Carlini and David A. Wagner. 2017. Towards evaluating the robustness of neural networks. In Proceedings of the IEEE Symposium on Security and Privacy. IEEE Computer Society, 39--57."},{"key":"e_1_2_1_19_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2016.23483"},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140448"},{"key":"e_1_2_1_21_1","volume-title":"Targeted backdoor attacks on deep learning systems using data poisoning. Retrieved from: ArXiv E-prints abs\/1712.05526","author":"Chen Xinyun","year":"2017","unstructured":"Xinyun Chen, Chang Liu, Bo Li, Kimberly Lu, and Dawn Song. 2017. Targeted backdoor attacks on deep learning systems using data poisoning. Retrieved from: ArXiv E-prints abs\/1712.05526 (2017)."},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/2666652.2666657"},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/1772690.1772720"},{"key":"e_1_2_1_24_1","volume-title":"Proceedings of the 15th International Conference on Security and Cryptography (SECRYPT'18)","author":"Cuan Bonan","year":"2018","unstructured":"Bonan Cuan, Ali\u00e9nor Damien, Claire Delaplace, and Mathieu Valois. 2018. Malware detection in PDF files using machine learning. In Proceedings of the 15th International Conference on Security and Cryptography (SECRYPT'18)."},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/1014052.1014066"},{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1145\/3133956.3133978"},{"key":"e_1_2_1_27_1","volume-title":"Proceedings of the 3rd Italian Conference on Cyber Security (ITASEC\u201919)","volume":"2315","author":"Demetrio Luca","year":"2019","unstructured":"Luca Demetrio, Battista Biggio, Giovanni Lagorio, Fabio Roli, and Alessandro Armando. 2019. Explaining vulnerabilities of deep learning to adversarial malware binaries. 
In Proceedings of the 3rd Italian Conference on Cyber Security (ITASEC\u201919), Vol. 2315. CEUR Workshop Proceedings."},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/TDSC.2017.2700270"},{"key":"e_1_2_1_29_1","unstructured":"Ambra Demontis Marco Melis Maura Pintor Matthew Jagielski Battista Biggio Alina Oprea Cristina Nita-Rotaru and Fabio Roli. 2018. Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks. Retrieved from: Arxiv E-prints Article arXiv:1809.02861."},{"key":"e_1_2_1_30_1","volume-title":"Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR\u201918)","author":"Dong Yinpeng","year":"2018","unstructured":"Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Xiaolin Hu, and Jun Zhu. 2018. Boosting adversarial examples with momentum. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR\u201918)."},{"key":"e_1_2_1_31_1","volume-title":"PDF-Malware Detection: A Survey and Taxonomy of Current Techniques","author":"Elingiusti Michele","unstructured":"Michele Elingiusti, Leonardo Aniello, Leonardo Querzoni, and Roberto Baldoni. 2018. PDF-Malware Detection: A Survey and Taxonomy of Current Techniques. Springer International Publishing, Cham, 169--191."},{"key":"e_1_2_1_32_1","unstructured":"ESET. 2018. A tale of two zero-days. Retrieved from: https:\/\/www.welivesecurity.com\/2018\/05\/15\/tale-two-zero-days\/."},{"key":"e_1_2_1_33_1","unstructured":"Jose Miguel Esparza. 2017. PeePDF. Retrieved from: http:\/\/eternal-todo.com\/tools\/peepdf-pdf-analysis-tool."},{"key":"e_1_2_1_34_1","unstructured":"Fortinet. 2016. Analysis of CVE-2016-4203\u2014Adobe Acrobat and Reader CoolType Handling Heap Overflow Vulnerability. 
Retrieved from: https:\/\/www.fortinet.com\/blog\/threat-research\/analysis-of-cve-2016-4203-adobe-acrobat-and-reader-cooltype-handling-heap-overflow-vulnerability.html."},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/2810103.2813677"},{"key":"e_1_2_1_36_1","unstructured":"FreeDesktop.org. 2018. Poppler. Retrieved from: https:\/\/poppler.freedesktop.org\/."},{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/3003816"},{"key":"e_1_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/1143844.1143889"},{"key":"e_1_2_1_39_1","volume-title":"Proceedings of the International Conference on Learning Representations.","author":"Goodfellow Ian J.","year":"2015","unstructured":"Ian J. Goodfellow, Jonathon Shlens, and Christian Szegedy. 2015. Explaining and harnessing adversarial examples. In Proceedings of the International Conference on Learning Representations."},{"key":"e_1_2_1_40_1","unstructured":"Google. 2018. Virustotal. Retrieved from: http:\/\/www.virustotal.com."},{"key":"e_1_2_1_41_1","volume-title":"Proceedings of the European Symposium on Research in Computer Security (ESORICS\u201917)","volume":"10493","author":"Grosse Kathrin","unstructured":"Kathrin Grosse, Nicolas Papernot, Praveen Manoharan, Michael Backes, and Patrick D. McDaniel. 2017. Adversarial examples for malware detection. In Proceedings of the European Symposium on Research in Computer Security (ESORICS\u201917) (LNCS), Vol. 10493. Springer, 62--79."},{"key":"e_1_2_1_42_1","volume-title":"Proceedings of the NIPS Workshop on Machine Learning and Computer Security","volume":"1708","author":"Gu Tianyu","year":"2017","unstructured":"Tianyu Gu, Brendan Dolan-Gavitt, and Siddharth Garg. 2017. BadNets: Identifying vulnerabilities in the machine learning model supply chain. In Proceedings of the NIPS Workshop on Machine Learning and Computer Security, Vol. 
abs\/1708.06733."},{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1145\/3236009"},{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/2046684.2046692"},{"key":"e_1_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2018.00057"},{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/3243734.3243757"},{"key":"e_1_2_1_47_1","unstructured":"Kaspersky. 2017. Machine Learning for Malware Detection. https:\/\/media.kaspersky.com\/en\/enterprise-security\/Kaspersky-Lab-Whitepaper-Machine-Learning.pdf."},{"key":"e_1_2_1_48_1","doi-asserted-by":"publisher","DOI":"10.5555\/2503308.2503359"},{"key":"e_1_2_1_49_1","volume-title":"Proceedings of the International Conference on Machine Learning (ICML\u201917)","author":"Koh Pang Wei","year":"2017","unstructured":"Pang Wei Koh and Percy Liang. 2017. Understanding black-box predictions via influence functions. In Proceedings of the International Conference on Machine Learning (ICML\u201917)."},{"key":"e_1_2_1_50_1","volume-title":"Proceedings of the 6th Conference on Email and Anti-Spam (CEAS\u201909)","author":"Kolcz Aleksander","year":"2009","unstructured":"Aleksander Kolcz and Choon Hui Teo. 2009. Feature weighting for improved classifier robustness. In Proceedings of the 6th Conference on Email and Anti-Spam (CEAS\u201909)."},{"key":"e_1_2_1_51_1","doi-asserted-by":"publisher","DOI":"10.23919\/EUSIPCO.2018.8553214"},{"key":"e_1_2_1_52_1","unstructured":"Sogeti ESEC Lab. 2015. Origami Framework. Retrieved from: http:\/\/esec-lab.sogeti.com\/pages\/origami.html."},{"key":"e_1_2_1_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/2076732.2076785"},{"key":"e_1_2_1_54_1","doi-asserted-by":"publisher","DOI":"10.1109\/DSN.2014.92"},{"key":"e_1_2_1_55_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201917)","author":"Liu Yanpei","year":"2017","unstructured":"Yanpei Liu, Xinyun Chen, Chang Liu, and Dawn Song. 2017. 
Delving into transferable adversarial examples and black-box attacks. In Proceedings of the International Conference on Learning Representations (ICLR\u201917)."},{"key":"e_1_2_1_56_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2018.23291"},{"key":"e_1_2_1_57_1","doi-asserted-by":"publisher","DOI":"10.1145\/1081870.1081950"},{"key":"e_1_2_1_58_1","doi-asserted-by":"publisher","DOI":"10.1109\/HICSS.2013.166"},{"key":"e_1_2_1_59_1","volume-title":"Proceedings of the International Conference on Learning Representations (ICLR\u201918)","author":"Madry A.","unstructured":"A. Madry, A. Makelov, L. Schmidt, D. Tsipras, and A. Vladu. 2018. Towards deep learning models resistant to adversarial attacks. In Proceedings of the International Conference on Learning Representations (ICLR\u201918)."},{"key":"e_1_2_1_60_1","volume-title":"An evasion resilient approach to the detection of malicious PDF files","author":"Maiorca Davide","unstructured":"Davide Maiorca, Davide Ariu, Igino Corona, and Giorgio Giacinto. 2015. An evasion resilient approach to the detection of malicious PDF files. In Information Systems Security and Privacy, Olivier Camp, Edgar Weippl, Christophe Bidan, and Esma A\u00efmeur (Eds.). 
Springer International Publishing, Cham, 68--85."},{"key":"e_1_2_1_61_1","doi-asserted-by":"publisher","DOI":"10.5220\/0005264400270036"},{"key":"e_1_2_1_62_1","doi-asserted-by":"publisher","DOI":"10.1109\/MSEC.2018.2875879"},{"key":"e_1_2_1_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/2484313.2484327"},{"key":"e_1_2_1_64_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-31537-4_40"},{"key":"e_1_2_1_65_1","doi-asserted-by":"publisher","DOI":"10.5555\/2886521.2886721"},{"key":"e_1_2_1_66_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW.2017.94"},{"key":"e_1_2_1_67_1","doi-asserted-by":"publisher","DOI":"10.23919\/EUSIPCO.2018.8553598"},{"key":"e_1_2_1_68_1","doi-asserted-by":"publisher","DOI":"10.1145\/3128572.3140451"},{"key":"e_1_2_1_69_1","doi-asserted-by":"publisher","DOI":"10.1145\/3243734.3243855"},{"key":"e_1_2_1_70_1","doi-asserted-by":"publisher","DOI":"10.5555\/1387709.1387716"},{"key":"e_1_2_1_71_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cose.2014.10.014"},{"key":"e_1_2_1_72_1","unstructured":"Sentinel One. 2018. SentinelOne Detects New Malicious PDF File. Retrieved from: https:\/\/www.sentinelone.com\/blog\/sentinelone-detects-new-malicious-pdf-file\/."},{"key":"e_1_2_1_73_1","doi-asserted-by":"publisher","DOI":"10.1145\/3052973.3053009"},{"key":"e_1_2_1_74_1","doi-asserted-by":"publisher","DOI":"10.1109\/EuroSP.2016.36"},{"key":"e_1_2_1_75_1","unstructured":"Rapid7. 2019. Metasploit framework. Retrieved from: https:\/\/www.metasploit.com\/."},{"key":"e_1_2_1_76_1","doi-asserted-by":"publisher","DOI":"10.1145\/2939672.2939778"},{"key":"e_1_2_1_77_1","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2016.2593488"},{"key":"e_1_2_1_78_1","doi-asserted-by":"publisher","DOI":"10.1109\/PST.2012.6297926"},{"key":"e_1_2_1_79_1","unstructured":"Offensive Security. 2018. Exploit Database. 
Retrieved from: https:\/\/www.exploit-db.com\/."},{"key":"e_1_2_1_80_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-540-70542-0_5"},{"key":"e_1_2_1_81_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2017.41"},{"key":"e_1_2_1_82_1","doi-asserted-by":"publisher","DOI":"10.1145\/2420950.2420987"},{"key":"e_1_2_1_83_1","doi-asserted-by":"publisher","DOI":"10.14722\/ndss.2016.23078"},{"key":"e_1_2_1_84_1","doi-asserted-by":"publisher","DOI":"10.5555\/2028067.2028076"},{"key":"e_1_2_1_85_1","doi-asserted-by":"publisher","DOI":"10.1186\/s13635-016-0045-0"},{"key":"e_1_2_1_86_1","unstructured":"Didier Stevens. 2008. PDF Tools. Retrieved from: http:\/\/blog.didierstevens.com\/programs\/pdf-tools."},{"key":"e_1_2_1_87_1","unstructured":"Symantec. 2018. Internet Security Threat Report (Vol. 23). https:\/\/www.symantec.com\/content\/dam\/symantec\/docs\/reports\/istr-23-2018-en.pdf."},{"key":"e_1_2_1_88_1","volume-title":"Proceedings of the International Conference on Learning Representations. Retrieved from: http:\/\/arxiv.org\/abs\/1312","author":"Szegedy Christian","year":"2014","unstructured":"Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, and Rob Fergus. 2014. Intriguing properties of neural networks. In Proceedings of the International Conference on Learning Representations. Retrieved from: http:\/\/arxiv.org\/abs\/1312.6199."},{"key":"e_1_2_1_89_1","doi-asserted-by":"publisher","DOI":"10.1145\/1599272.1599278"},{"key":"e_1_2_1_90_1","unstructured":"Trevor Tonn and Kiran Bandla. 2013. PhoneyPDF. Retrieved from: https:\/\/github.com\/kbandla\/phoneypdf."},{"key":"e_1_2_1_91_1","unstructured":"Malware Tracker. 2018. PDF Current Threats. 
Retrieved from: https:\/\/www.malwaretracker.com\/pdfthreat.php."},{"key":"e_1_2_1_92_1","doi-asserted-by":"publisher","DOI":"10.5555\/3241094.3241142"},{"key":"e_1_2_1_93_1","doi-asserted-by":"publisher","DOI":"10.1145\/1972551.1972555"},{"key":"e_1_2_1_94_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11416-012-0166-z"},{"key":"e_1_2_1_95_1","volume-title":"Proceedings of the 20th Network and Distributed System Security Symposium (NDSS\u201913)","author":"\u0160rndi\u0107 Nedim","year":"2013","unstructured":"Nedim \u0160rndi\u0107 and Pavel Laskov. 2013. Detection of malicious PDF files based on hierarchical document structure. In Proceedings of the 20th Network and Distributed System Security Symposium (NDSS\u201913)."},{"key":"e_1_2_1_96_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2014.20"},{"key":"e_1_2_1_97_1","doi-asserted-by":"publisher","DOI":"10.1109\/SP.2014.20"},{"key":"e_1_2_1_98_1","unstructured":"VulDB. 2018. The Crowd-Based Vulnerability Database. Retrieved from: https:\/\/vuldb.com."},{"key":"e_1_2_1_99_1","doi-asserted-by":"publisher","DOI":"10.1145\/3097983.3098158"},{"key":"e_1_2_1_100_1","doi-asserted-by":"publisher","DOI":"10.1145\/2420950.2420979"},{"key":"e_1_2_1_101_1","doi-asserted-by":"publisher","DOI":"10.5555\/3241189.3241212"},{"key":"e_1_2_1_102_1","volume-title":"Proceedings of the 23rd Network and Distributed System Security Symposium (NDSS\u201916)","author":"Xu Weilin","year":"2016","unstructured":"Weilin Xu, Yanjun Qi, and David Evans. 2016. Automatically evading classifiers. 
In Proceedings of the 23rd Network and Distributed System Security Symposium (NDSS\u201916)."},{"key":"e_1_2_1_103_1","doi-asserted-by":"publisher","DOI":"10.1145\/3073559"}],"container-title":["ACM Computing Surveys"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3332184","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3332184","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,25]],"date-time":"2025-06-25T13:24:50Z","timestamp":1750857890000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3332184"}},"subtitle":["Lessons Learned from PDF-based Attacks"],"short-title":[],"issued":{"date-parts":[[2019,8,30]]},"references-count":103,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2020,7,31]]}},"alternative-id":["10.1145\/3332184"],"URL":"https:\/\/doi.org\/10.1145\/3332184","relation":{},"ISSN":["0360-0300","1557-7341"],"issn-type":[{"value":"0360-0300","type":"print"},{"value":"1557-7341","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,8,30]]},"assertion":[{"value":"2018-10-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2019-05-01","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2019-08-30","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}