{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,5,3]],"date-time":"2026-05-03T11:01:42Z","timestamp":1777806102465,"version":"3.51.4"},"reference-count":71,"publisher":"SAGE Publications","issue":"1","license":[{"start":{"date-parts":[[2019,10,22]],"date-time":"2019-10-22T00:00:00Z","timestamp":1571702400000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":["journals.sagepub.com"],"crossmark-restriction":true},"short-container-title":["Journal of Computer Security"],"published-print":{"date-parts":[[2020,2,4]]},"abstract":"<jats:p>Machine learning algorithms, when applied to sensitive data, pose a distinct threat to privacy. A growing body of prior work demonstrates that models produced by these algorithms may leak specific private information in the training data to an attacker, either through the models\u2019 structure or their observable behavior. This article examines the factors that can allow a training set membership inference attacker or an attribute inference attacker to learn such information. Using both formal and empirical analyses, we illustrate a clear relationship between these factors and the privacy risk that arises in several popular machine learning algorithms.<\/jats:p>\n                  <jats:p>We find that overfitting is sufficient to allow an attacker to perform membership inference and, when the target attribute meets certain conditions about its influence, attribute inference attacks. We also explore the connection between membership inference and attribute inference, showing that there are deep connections between the two that lead to effective new attacks. We show that overfitting is not necessary for these attacks, demonstrating that other factors, such as robustness to norm-bounded input perturbations and malicious training algorithms, can also significantly increase the privacy risk. Notably, as robustness is intended to be a defense against attacks on the integrity of model predictions, these results suggest it may be difficult in some cases to simultaneously defend against privacy and integrity attacks.<\/jats:p>","DOI":"10.3233\/jcs-191362","type":"journal-article","created":{"date-parts":[[2019,10,23]],"date-time":"2019-10-23T12:07:12Z","timestamp":1571832432000},"page":"35-70","update-policy":"https:\/\/doi.org\/10.1177\/sage-journals-update-policy","source":"Crossref","is-referenced-by-count":26,"title":["Overfitting, robustness, and malicious algorithms: A study of potential causes of privacy risk in machine learning"],"prefix":"10.1177","volume":"28","author":[{"given":"Samuel","family":"Yeom","sequence":"first","affiliation":[{"name":"Carnegie Mellon University, Pittsburgh, PA, USA. E-mails:\u00a0,\u00a0,\u00a0"}]},{"given":"Irene","family":"Giacomelli","sequence":"additional","affiliation":[{"name":"University of Wisconsin\u2013Madison, Madison, WI, USA. E-mails:\u00a0,\u00a0"},{"name":"Protocol Labs, San Francisco, CA, USA"}]},{"given":"Alan","family":"Menaged","sequence":"additional","affiliation":[{"name":"Carnegie Mellon University, Pittsburgh, PA, USA. E-mails:\u00a0,\u00a0,\u00a0"}]},{"given":"Matt","family":"Fredrikson","sequence":"additional","affiliation":[{"name":"Carnegie Mellon University, Pittsburgh, PA, USA. E-mails:\u00a0,\u00a0,\u00a0"}]},{"given":"Somesh","family":"Jha","sequence":"additional","affiliation":[{"name":"University of Wisconsin\u2013Madison, Madison, WI, USA. E-mails:\u00a0,\u00a0"}]}],"member":"179","published-online":{"date-parts":[[2019,10,22]]},"reference":[{"key":"ref001","unstructured":"M.\u00a0Abadi, A.\u00a0Agarwal, P.\u00a0Barham, E.\u00a0Brevdo, Z.\u00a0Chen, C.\u00a0Citro, G.S.\u00a0Corrado, A.\u00a0Davis, J.\u00a0Dean, M.\u00a0Devin, S.\u00a0Ghemawat, I.\u00a0Goodfellow, A.\u00a0Harp, G.\u00a0Irving, M.\u00a0Isard, Y.\u00a0Jia, R.\u00a0Jozefowicz, L.\u00a0Kaiser, M.\u00a0Kudlur, J.\u00a0Levenberg, D.\u00a0Man\u00e9, R.\u00a0Monga, S.\u00a0Moore, D.\u00a0Murray, C.\u00a0Olah, M.\u00a0Schuster, J.\u00a0Shlens, B.\u00a0Steiner, I.\u00a0Sutskever, K.\u00a0Talwar, P.\u00a0Tucker, V.\u00a0Vanhoucke, V.\u00a0Vasudevan, F.\u00a0Vi\u00e9gas, O.\u00a0Vinyals, P.\u00a0Warden, M.\u00a0Wattenberg, M.\u00a0Wicke, Y.\u00a0Yu and X.\u00a0Zheng, TensorFlow: Large-scale machine learning on heterogeneous systems, 2015."},{"key":"ref002","doi-asserted-by":"crossref","unstructured":"G.\u00a0Ateniese, B.\u00a0Magri and D.\u00a0Venturi, Subversion-resilient signature schemes, in: ACM Conference on Computer and Communications Security, 2015, pp.\u00a0364\u2013375.","DOI":"10.1145\/2810103.2813635"},{"key":"ref003","doi-asserted-by":"publisher","DOI":"10.1504\/IJSN.2015.071829"},{"key":"ref004","doi-asserted-by":"crossref","unstructured":"R.\u00a0Bassily, K.\u00a0Nissim, A.\u00a0Smith, T.\u00a0Steinke, U.\u00a0Stemmer and J.\u00a0Ullman, Algorithmic stability for adaptive data analysis, in: ACM Symposium on Theory of Computing, 2016, pp.\u00a01046\u20131059.","DOI":"10.1145\/2897518.2897566"},{"key":"ref005","doi-asserted-by":"crossref","unstructured":"R.\u00a0Bassily, A.\u00a0Smith and A.\u00a0Thakurta, Private empirical risk minimization: Efficient algorithms and tight error bounds, in: IEEE Symposium on Foundations of Computer Science, 2014.","DOI":"10.1109\/FOCS.2014.56"},{"key":"ref006","doi-asserted-by":"crossref","unstructured":"M.\u00a0Bellare, J.\u00a0Jaeger and D.\u00a0Kane, Mass-surveillance without the state: Strongly undetectable algorithm-substitution attacks, in: ACM Conference on Computer and Communications Security, 2015, pp.\u00a01431\u20131440.","DOI":"10.1145\/2810103.2813681"},{"key":"ref007","doi-asserted-by":"crossref","unstructured":"M.\u00a0Bellare, K.G.\u00a0Paterson and P.\u00a0Rogaway, Security of symmetric encryption against mass surveillance, in: Advances in Cryptology\u00a0\u2013 CRYPTO, 2014, pp.\u00a01\u201319.","DOI":"10.1007\/978-3-662-44371-2_1"},{"key":"ref008","first-page":"499","volume":"2","author":"Bousquet O.","year":"2002","journal-title":"Journal of Machine Learning Research"},{"key":"ref009","doi-asserted-by":"crossref","unstructured":"J.\u00a0Brickell and V.\u00a0Shmatikov, The cost of privacy: Destruction of data-mining utility in anonymized data publishing, in: ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2008, pp.\u00a070\u201378.","DOI":"10.1145\/1401890.1401904"},{"key":"ref010","doi-asserted-by":"crossref","unstructured":"J.A.\u00a0Calandrino, A.\u00a0Kilzer, A.\u00a0Narayanan, E.W.\u00a0Felten and V.\u00a0Shmatikov, \u201cyou might also like:\u201d privacy risks of collaborative filtering, in: IEEE Symposium on Security and Privacy (Oakland), 2011.","DOI":"10.1109\/SP.2011.40"},{"key":"ref011","unstructured":"K.\u00a0Chaudhuri, C.\u00a0Monteleoni and A.D.\u00a0Sarwate, Differentially private empirical risk minimization, Journal of Machine Learning Research (2011)."},{"key":"ref012","unstructured":"F.\u00a0Chollet et al., Keras, 2015."},{"key":"ref013","unstructured":"J.\u00a0Cohen, E.\u00a0Rosenfeld and Z.\u00a0Kolter, Certified adversarial robustness via randomized smoothing, in: Proceedings of the 36th International Conference on Machine Learning, K.\u00a0Chaudhuri and R.\u00a0Salakhutdinov, eds, Proceedings of Machine Learning Research, Vol.\u00a097, 2019, pp.\u00a01310\u20131320."},{"key":"ref014","doi-asserted-by":"crossref","unstructured":"G.\u00a0Cormode, Personal privacy vs population privacy: Learning to attack anonymization, in: ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2011, pp.\u00a01253\u20131261.","DOI":"10.1145\/2020408.2020598"},{"key":"ref015","doi-asserted-by":"publisher","DOI":"10.1142\/S0218488512400247"},{"key":"ref016","doi-asserted-by":"crossref","unstructured":"I.\u00a0Dinur and K.\u00a0Nissim, Revealing information while preserving privacy, in: ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems, 2003, pp.\u00a0202\u2013210.","DOI":"10.1145\/773153.773173"},{"key":"ref017","doi-asserted-by":"crossref","unstructured":"C.\u00a0Dwork, Differential privacy, in: International Colloquium on Automata, Languages and Programming, 2006, pp.\u00a01\u201312.","DOI":"10.1007\/11787006_1"},{"key":"ref018","unstructured":"C.\u00a0Dwork, V.\u00a0Feldman, M.\u00a0Hardt, T.\u00a0Pitassi, O.\u00a0Reingold and A.\u00a0Roth, Generalization in adaptive data analysis and holdout reuse, in: Neural Information Processing Systems, 2015, pp.\u00a02350\u20132358."},{"key":"ref019","doi-asserted-by":"crossref","unstructured":"C.\u00a0Dwork, V.\u00a0Feldman, M.\u00a0Hardt, T.\u00a0Pitassi, O.\u00a0Reingold and A.L.\u00a0Roth, Preserving statistical validity in adaptive data analysis, in: ACM Symposium on Theory of Computing, 2015, pp.\u00a0117\u2013126.","DOI":"10.1145\/2746539.2746580"},{"key":"ref020","doi-asserted-by":"crossref","unstructured":"C.\u00a0Dwork, F.\u00a0McSherry and K.\u00a0Talwar, The price of privacy and the limits of LP decoding, in: ACM Symposium on Theory of Computing, 2007, pp.\u00a085\u201394.","DOI":"10.1145\/1250790.1250804"},{"key":"ref021","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pone.0028071"},{"key":"ref022","doi-asserted-by":"crossref","unstructured":"M.\u00a0Fredrikson, S.\u00a0Jha and T.\u00a0Ristenpart, Model inversion attacks that exploit confidence information and basic countermeasures, in: ACM Conference on Computer and Communications Security (CCS), 2015.","DOI":"10.1145\/2810103.2813677"},{"key":"ref023","unstructured":"M.\u00a0Fredrikson, E.\u00a0Lantz, S.\u00a0Jha, S.\u00a0Lin, D.\u00a0Page and T.\u00a0Ristenpart, Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing, in: USENIX Security Symposium, 2014, pp.\u00a017\u201332."},{"key":"ref024","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-26823-1_4"},{"key":"ref025","unstructured":"I.\u00a0Goodfellow, Y.\u00a0Bengio and A.\u00a0Courville, Deep Learning, MIT Press, 2016, http:\/\/www.deeplearningbook.org."},{"key":"ref026","unstructured":"I.J.\u00a0Goodfellow, J.\u00a0Shlens and C.\u00a0Szegedy, Explaining and harnessing adversarial examples, in: International Conference on Learning Representations, 2015."},{"key":"ref027","doi-asserted-by":"publisher","DOI":"10.1126\/science.1229566"},{"key":"ref028","doi-asserted-by":"publisher","DOI":"10.1371\/journal.pgen.1000167"},{"key":"ref029","unstructured":"G.B.\u00a0Huang, M.\u00a0Ramesh, T.\u00a0Berg and E.\u00a0Learned-Miller, Labeled faces in the wild: A database for studying face recognition in unconstrained environments, Technical Report, 07-49, University of Massachusetts, Amherst, 2007."},{"key":"ref030","doi-asserted-by":"publisher","DOI":"10.1056\/NEJMoa0809329"},{"key":"ref031","doi-asserted-by":"publisher","DOI":"10.1137\/1.9781611973105.102"},{"key":"ref032","doi-asserted-by":"crossref","unstructured":"S.P.\u00a0Kasiviswanathan, M.\u00a0Rudelson, A.\u00a0Smith and J.\u00a0Ullman, The price of privately releasing contingency tables and the spectra of random matrices with correlated rows, in: ACM Symposium on Theory of Computing, 2010, pp.\u00a0775\u2013784.","DOI":"10.1145\/1806689.1806795"},{"key":"ref033","unstructured":"D.P.\u00a0Kingma and J.\u00a0Ba, Adam: A method for stochastic optimization, in: International Conference for Learning Representations (ICLR), 2015."},{"key":"ref034","doi-asserted-by":"publisher","DOI":"10.7208\/chicago\/9780226206981.003.0010"},{"key":"ref035","unstructured":"P.\u00a0Kr\u00e4henb\u00fchl, C.\u00a0Doersch, J.\u00a0Donahue and T.\u00a0Darrell, Data-dependent initializations of convolutional neural networks, arXiv preprint arXiv:1511.06856 (2015)."},{"key":"ref036","unstructured":"A.\u00a0Krizhevsky and G.\u00a0Hinton, Learning multiple layers of features from tiny images (2009)."},{"key":"ref037","unstructured":"Y.\u00a0LeCun, C.\u00a0Cortes and C.\u00a0Burges, The MNIST database of handwritten digits, 1998."},{"key":"ref038","unstructured":"J.\u00a0Lei, Differentially private m-estimators, in: Neural Information Processing Systems, 2011, pp.\u00a0361\u2013369."},{"key":"ref039","doi-asserted-by":"crossref","unstructured":"N.\u00a0Li, W.\u00a0Qardaji, D.\u00a0Su, Y.\u00a0Wu and W.\u00a0Yang, Membership privacy: A unifying framework for privacy definitions, in: ACM SIGSAC Conference on Computer and Communications Security, 2013, pp.\u00a0889\u2013900.","DOI":"10.1145\/2508859.2516686"},{"key":"ref040","unstructured":"A.\u00a0M\u0105dry, A.\u00a0Makelov, L.\u00a0Schmidt, D.\u00a0Tsipras and A.\u00a0Vladu, Towards deep learning models resistant to adversarial attacks, in: International Conference on Learning Representations, 2018."},{"key":"ref041","unstructured":"K.P.\u00a0Murphy, Machine Learning: A Probabilistic Perspective, The MIT Press, 2012."},{"key":"ref042","unstructured":"Netflix, Netflix Prize, 2006."},{"key":"ref043","unstructured":"B.\u00a0Neuberg, Personal Photos Model, GitHub, 2017."},{"key":"ref044","doi-asserted-by":"publisher","DOI":"10.1017\/CBO9781139814782"},{"key":"ref045","unstructured":"N.\u00a0Papernot, F.\u00a0Faghri, N.\u00a0Carlini, I.\u00a0Goodfellow, R.\u00a0Feinman, A.\u00a0Kurakin, C.\u00a0Xie, Y.\u00a0Sharma, T.\u00a0Brown, A.\u00a0Roy, A.\u00a0Matyasko, V.\u00a0Behzadan, K.\u00a0Hambardzumyan, Z.\u00a0Zhang, Y.L.\u00a0Juang, Z.\u00a0Li, R.\u00a0Sheatsley, A.\u00a0Garg, J.\u00a0Uesato, W.\u00a0Gierke, Y.\u00a0Dong, D.\u00a0Berthelot, P.\u00a0Hendricks, J.\u00a0Rauber and R.\u00a0Long, Technical report on the CleverHans v2.1.0 adversarial examples library, arXiv preprint arXiv:1610.00768 (2018)."},{"key":"ref046","doi-asserted-by":"crossref","unstructured":"N.\u00a0Papernot, P.\u00a0McDaniel, I.\u00a0Goodfellow, S.\u00a0Jha, Z.B.\u00a0Celik and A.\u00a0Swami, Practical black-box attacks against machine learning, in: Proceedings of the 2017 ACM on Asia Conference on Computer and Communications Security, 2017.","DOI":"10.1145\/3052973.3053009"},{"key":"ref047","doi-asserted-by":"crossref","unstructured":"N.\u00a0Papernot, P.\u00a0McDaniel, S.\u00a0Jha, M.\u00a0Fredrikson, Z.B.\u00a0Celik and A.\u00a0Swami, The limitations of deep learning in adversarial settings, in: IEEE European Symposium on Security and Privacy, 2016, pp.\u00a0372\u2013387.","DOI":"10.1109\/EuroSP.2016.36"},{"key":"ref048","first-page":"2825","volume":"12","author":"Pedregosa F.","year":"2011","journal-title":"Journal of Machine Learning Research"},{"key":"ref049","unstructured":"A.\u00a0Raghunathan, S.M.\u00a0Xie, F.\u00a0Yang, J.C.\u00a0Duchi and P.\u00a0Liang, Adversarial training can hurt generalization, arXiv preprint arXiv:1906.06032 (2019)."},{"key":"ref050","doi-asserted-by":"publisher","DOI":"10.1038\/ng.436"},{"key":"ref051","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.0602562103"},{"key":"ref052","unstructured":"L.\u00a0Schmidt, S.\u00a0Santurkar, D.\u00a0Tsipras, K.\u00a0Talwar and A.\u00a0Madry, Adversarially robust generalization requires more data, in: Advances in Neural Information Processing Systems, 2018, pp.\u00a05014\u20135026."},{"key":"ref053","unstructured":"S.\u00a0Shalev-Shwartz, O.\u00a0Shamir, N.\u00a0Srebro and K.\u00a0Sridharan, Learnability, stability and uniform convergence, Journal of Machine Learning Research 11 (2010)."},{"key":"ref054","doi-asserted-by":"crossref","unstructured":"R.\u00a0Shokri, M.\u00a0Stronati, C.\u00a0Song and V.\u00a0Shmatikov, Membership inference attacks against machine learning models, in: IEEE Symposium on Security and Privacy (Oakland), 2017, pp.\u00a03\u201318.","DOI":"10.1109\/SP.2017.41"},{"key":"ref055","doi-asserted-by":"publisher","DOI":"10.1016\/j.ajhg.2015.09.010"},{"key":"ref056","unstructured":"K.\u00a0Simonyan and A.\u00a0Zisserman, Very deep convolutional networks for large-scale image recognition, arXiv preprint arXiv:1409.1556 (2014)."},{"key":"ref057","doi-asserted-by":"crossref","unstructured":"C.\u00a0Song, T.\u00a0Ristenpart and V.\u00a0Shmatikov, Machine learning models that remember too much, in: ACM Conference on Computer and Communications Security, 2017, pp.\u00a0587\u2013601.","DOI":"10.1145\/3133956.3134077"},{"key":"ref058","doi-asserted-by":"crossref","unstructured":"L.\u00a0Song, R.\u00a0Shokri and P.\u00a0Mittal, Privacy risks of securing machine learning models against adversarial examples, arXiv preprint arXiv:1905.10291 (2019).","DOI":"10.1145\/3319535.3354211"},{"key":"ref059","unstructured":"C.\u00a0Szegedy, W.\u00a0Zaremba, I.\u00a0Sutskever, J.\u00a0Bruna, D.\u00a0Erhan, I.\u00a0Goodfellow and R.\u00a0Fergus, Intriguing properties of neural networks, in: International Conference on Learning Representations, 2014. http:\/\/arxiv.org\/abs\/1312.6199."},{"key":"ref060","unstructured":"A.G.\u00a0Thakurta and A.\u00a0Smith, Differentially private feature selection via stability arguments, and the robustness of the lasso, in: Conference on Learning Theory, Vol.\u00a030, 2013, pp.\u00a0819\u2013850."},{"key":"ref061","unstructured":"D.\u00a0Tsipras, S.\u00a0Santurkar, L.\u00a0Engstrom, A.\u00a0Turner and A.\u00a0Madry, Robustness may be at odds with accuracy, in: International Conference on Learning Representations, 2019."},{"key":"ref062","doi-asserted-by":"crossref","unstructured":"R.\u00a0Wang, Y.F.\u00a0Li, X.\u00a0Wang, H.\u00a0Tang and X.\u00a0Zhou, Learning your identity and disease from research papers: Information leaks in genome wide association studies, in: ACM Conference on Computer and Communications Security, 2009, pp.\u00a0534\u2013544.","DOI":"10.1145\/1653662.1653726"},{"issue":"183","key":"ref063","first-page":"1","volume":"17","author":"Wang Y.-X.","year":"2016","journal-title":"Journal of Machine Learning Research"},{"key":"ref064","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-45381-1_10"},{"key":"ref065","unstructured":"E.\u00a0Wong and Z.\u00a0Kolter, Provable defenses against adversarial examples via the convex outer adversarial polytope, in: International Conference on Machine Learning, 2018, pp.\u00a05283\u20135292."},{"key":"ref066","doi-asserted-by":"crossref","unstructured":"X.\u00a0Wu, M.\u00a0Fredrikson, S.\u00a0Jha and J.F.\u00a0Naughton, A methodology for formalizing model-inversion attacks, in: IEEE Computer Security Foundations Symposium (CSF), 2016.","DOI":"10.1109\/CSF.2016.32"},{"key":"ref067","unstructured":"X.\u00a0Wu, M.\u00a0Fredrikson, W.\u00a0Wu, S.\u00a0Jha and J.F.\u00a0Naughton, Revisiting differentially private regression: Lessons from learning theory and their consequences, arXiv preprint arXiv:1512.06388 (2015)."},{"key":"ref068","doi-asserted-by":"crossref","unstructured":"S.\u00a0Yeom, I.\u00a0Giacomelli, M.\u00a0Fredrikson and S.\u00a0Jha, Privacy risk in machine learning: Analyzing the connection to overfitting, in: IEEE Computer Security Foundations Symposium (CSF), 2018, pp.\u00a0268\u2013282.","DOI":"10.1109\/CSF.2018.00027"},{"key":"ref069","unstructured":"C.\u00a0Zhang, S.\u00a0Bengio, M.\u00a0Hardt, B.\u00a0Recht and O.\u00a0Vinyals, Understanding deep learning requires rethinking generalization, arXiv preprint arXiv:1611.03530 (2016)."},{"key":"ref070","unstructured":"H.\u00a0Zhang, Y.\u00a0Yu, J.\u00a0Jiao, E.\u00a0Xing, L.\u00a0El\u00a0Ghaoui and M.\u00a0Jordan, Theoretically principled trade-off between robustness and accuracy, in: International Conference on Machine Learning, 2019, pp.\u00a07472\u20137482."},{"key":"ref071","doi-asserted-by":"publisher","DOI":"10.14778\/2350229.2350253"}],"container-title":["Journal of Computer Security"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.3233\/JCS-191362","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/full-xml\/10.3233\/JCS-191362","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.3233\/JCS-191362","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,4,29]],"date-time":"2026-04-29T20:45:21Z","timestamp":1777495521000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/10.3233\/JCS-191362"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,10,22]]},"references-count":71,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2020,2,4]]}},"alternative-id":["10.3233\/JCS-191362"],"URL":"https:\/\/doi.org\/10.3233\/jcs-191362","relation":{},"ISSN":["0926-227X","1875-8924"],"issn-type":[{"value":"0926-227X","type":"print"},{"value":"1875-8924","type":"electronic"}],"subject":[],"published":{"date-parts":[[2019,10,22]]}}}