{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T23:58:46Z","timestamp":1740182326022,"version":"3.37.3"},"reference-count":50,"publisher":"IOP Publishing","issue":"2","license":[{"start":{"date-parts":[[2021,3,2]],"date-time":"2021-03-02T00:00:00Z","timestamp":1614643200000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2021,3,2]],"date-time":"2021-03-02T00:00:00Z","timestamp":1614643200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/iopscience.iop.org\/info\/page\/text-and-data-mining"}],"funder":[{"DOI":"10.13039\/501100001711","name":"Schweizerischer Nationalfonds zur F\u00f6rderung der Wissenschaftlichen Forschung","doi-asserted-by":"crossref","award":["00021-165509"],"award-info":[{"award-number":["00021-165509"]}],"id":[{"id":"10.13039\/501100001711","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/100000893","name":"Simons Foundation","doi-asserted-by":"crossref","award":["45495"],"award-info":[{"award-number":["45495"]}],"id":[{"id":"10.13039\/100000893","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["iopscience.iop.org"],"crossmark-restriction":false},"short-container-title":["Mach. Learn.: Sci. Technol."],"published-print":{"date-parts":[[2021,6,1]]},"abstract":"<jats:title>Abstract<\/jats:title>\n               <jats:p>We investigate how the training curve of isotropic kernel methods depends on the symmetry of the task to be learned, in several settings. (i) We consider a regression task, where the target function is a Gaussian random field that depends only on <jats:inline-formula>\n                     <jats:tex-math\/>\n                     <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" overflow=\"scroll\">\n                        <mml:msub>\n                           <mml:mi>d<\/mml:mi>\n                           <mml:mo>\u2225<\/mml:mo>\n                        <\/mml:msub>\n                     <\/mml:math>\n                     <jats:inline-graphic xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"mlstabd485ieqn1.gif\" xlink:type=\"simple\"\/>\n                  <\/jats:inline-formula> variables, fewer than the input dimension <jats:italic>d<\/jats:italic>. We compute the expected test error <jats:italic>\u03f5<\/jats:italic> that follows <jats:inline-formula>\n                     <jats:tex-math\/>\n                     <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" overflow=\"scroll\">\n                        <mml:mi>\u03f5<\/mml:mi>\n                        <mml:mo>\u223c<\/mml:mo>\n                        <mml:msup>\n                           <mml:mi>p<\/mml:mi>\n                           <mml:mrow>\n                              <mml:mo>\u2212<\/mml:mo>\n                              <mml:mi>\u03b2<\/mml:mi>\n                           <\/mml:mrow>\n                        <\/mml:msup>\n                     <\/mml:math>\n                     <jats:inline-graphic xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"mlstabd485ieqn2.gif\" xlink:type=\"simple\"\/>\n                  <\/jats:inline-formula> where <jats:italic>p<\/jats:italic> is the size of the training set. 
We find that <jats:italic>\u03b2<\/jats:italic>\u2009\u223c\u20091\/<jats:italic>d<\/jats:italic> independently of <jats:inline-formula>\n                     <jats:tex-math\/>\n                     <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" overflow=\"scroll\">\n                        <mml:msub>\n                           <mml:mi>d<\/mml:mi>\n                           <mml:mo>\u2225<\/mml:mo>\n                        <\/mml:msub>\n                     <\/mml:math>\n                     <jats:inline-graphic xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"mlstabd485ieqn3.gif\" xlink:type=\"simple\"\/>\n                  <\/jats:inline-formula>, supporting previous findings that the presence of invariants does not resolve the curse of dimensionality for kernel regression. (ii) Next we consider support-vector binary classification and introduce the <jats:italic>stripe model<\/jats:italic>, where the data label depends on a single coordinate <jats:inline-formula>\n                     <jats:tex-math\/>\n                     <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" overflow=\"scroll\">\n                        <mml:mi>y<\/mml:mi>\n                        <mml:mo stretchy=\"false\">(<\/mml:mo>\n                        <mml:munder>\n                           <mml:mi>x<\/mml:mi>\n                           <mml:mo accent=\"true\" stretchy=\"true\">_<\/mml:mo>\n                        <\/mml:munder>\n                        <mml:mo stretchy=\"false\">)<\/mml:mo>\n                        <mml:mo>=<\/mml:mo>\n                        <mml:mi>y<\/mml:mi>\n                        <mml:mo stretchy=\"false\">(<\/mml:mo>\n                        <mml:msub>\n                           <mml:mi>x<\/mml:mi>\n                           <mml:mn>1<\/mml:mn>\n                        <\/mml:msub>\n                        <mml:mo stretchy=\"false\">)<\/mml:mo>\n                     <\/mml:math>\n                     <jats:inline-graphic xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"mlstabd485ieqn4.gif\" xlink:type=\"simple\"\/>\n                  <\/jats:inline-formula>, corresponding to parallel decision boundaries separating labels of different signs, and consider that there is no margin at these interfaces. 
We argue and confirm numerically that, for large bandwidth, <jats:inline-formula>\n                     <jats:tex-math\/>\n                     <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" overflow=\"scroll\">\n                        <mml:mi>\u03b2<\/mml:mi>\n                        <mml:mo>=<\/mml:mo>\n                        <mml:mfrac>\n                           <mml:mrow>\n                              <mml:mi>d<\/mml:mi>\n                              <mml:mo>\u2212<\/mml:mo>\n                              <mml:mn>1<\/mml:mn>\n                              <mml:mo>+<\/mml:mo>\n                              <mml:mi>\u03be<\/mml:mi>\n                           <\/mml:mrow>\n                           <mml:mrow>\n                              <mml:mn>3<\/mml:mn>\n                              <mml:mi>d<\/mml:mi>\n                              <mml:mo>\u2212<\/mml:mo>\n                              <mml:mn>3<\/mml:mn>\n                              <mml:mo>+<\/mml:mo>\n                              <mml:mi>\u03be<\/mml:mi>\n                           <\/mml:mrow>\n                        <\/mml:mfrac>\n                     <\/mml:math>\n                     <jats:inline-graphic xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"mlstabd485ieqn5.gif\" xlink:type=\"simple\"\/>\n                  <\/jats:inline-formula>, where <jats:italic>\u03be<\/jats:italic>\u2009\u2208\u2009(0,\u20092) is the exponent characterizing the singularity of the kernel at the origin. This estimation improves classical bounds obtainable from Rademacher complexity. In this setting there is no curse of dimensionality since <jats:inline-formula>\n                     <jats:tex-math\/>\n                     <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" overflow=\"scroll\">\n                        <mml:mi>\u03b2<\/mml:mi>\n                        <mml:mo stretchy=\"false\">\u2192<\/mml:mo>\n                        <mml:mn>1<\/mml:mn>\n                        <mml:mrow>\n                           <mml:mo>\/<\/mml:mo>\n                        <\/mml:mrow>\n                        <mml:mn>3<\/mml:mn>\n                     <\/mml:math>\n                     <jats:inline-graphic xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"mlstabd485ieqn6.gif\" xlink:type=\"simple\"\/>\n                  <\/jats:inline-formula> as <jats:inline-formula>\n                     <jats:tex-math\/>\n                     <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" overflow=\"scroll\">\n                        <mml:mi>d<\/mml:mi>\n                        <mml:mo stretchy=\"false\">\u2192<\/mml:mo>\n                        <mml:mi mathvariant=\"normal\">\u221e<\/mml:mi>\n                     <\/mml:math>\n                     <jats:inline-graphic xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"mlstabd485ieqn7.gif\" xlink:type=\"simple\"\/>\n                  <\/jats:inline-formula>. 
(iii) We confirm these findings for the <jats:italic>spherical model<\/jats:italic>, for which <jats:inline-formula>\n                     <jats:tex-math\/>\n                     <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" overflow=\"scroll\">\n                        <mml:mi>y<\/mml:mi>\n                        <mml:mo stretchy=\"false\">(<\/mml:mo>\n                        <mml:munder>\n                           <mml:mi>x<\/mml:mi>\n                           <mml:mo accent=\"true\" stretchy=\"true\">_<\/mml:mo>\n                        <\/mml:munder>\n                        <mml:mo stretchy=\"false\">)<\/mml:mo>\n                        <mml:mo>=<\/mml:mo>\n                        <mml:mi>y<\/mml:mi>\n                        <mml:mo stretchy=\"false\">(<\/mml:mo>\n                        <mml:mrow>\n                           <mml:mo>|<\/mml:mo>\n                        <\/mml:mrow>\n                        <mml:mrow>\n                           <mml:mo>|<\/mml:mo>\n                        <\/mml:mrow>\n                        <mml:munder>\n                           <mml:mi>x<\/mml:mi>\n                           <mml:mo accent=\"true\" stretchy=\"true\">_<\/mml:mo>\n                        <\/mml:munder>\n                        <mml:mrow>\n                           <mml:mo>|<\/mml:mo>\n                        <\/mml:mrow>\n                        <mml:mrow>\n                           <mml:mo>|<\/mml:mo>\n                        <\/mml:mrow>\n                        <mml:mo stretchy=\"false\">)<\/mml:mo>\n                     <\/mml:math>\n                     <jats:inline-graphic xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"mlstabd485ieqn8.gif\" xlink:type=\"simple\"\/>\n                  <\/jats:inline-formula>. 
(iv) In the stripe model, we show that, if the data are compressed along their invariants by some factor <jats:italic>\u03bb<\/jats:italic> (an operation believed to take place in deep networks), the test error is reduced by a factor <jats:inline-formula>\n                     <jats:tex-math\/>\n                     <mml:math xmlns:mml=\"http:\/\/www.w3.org\/1998\/Math\/MathML\" overflow=\"scroll\">\n                        <mml:msup>\n                           <mml:mi>\u03bb<\/mml:mi>\n                           <mml:mrow>\n                              <mml:mo>\u2212<\/mml:mo>\n                              <mml:mfrac>\n                                 <mml:mrow>\n                                    <mml:mn>2<\/mml:mn>\n                                    <mml:mo stretchy=\"false\">(<\/mml:mo>\n                                    <mml:mi>d<\/mml:mi>\n                                    <mml:mo>\u2212<\/mml:mo>\n                                    <mml:mn>1<\/mml:mn>\n                                    <mml:mo stretchy=\"false\">)<\/mml:mo>\n                                 <\/mml:mrow>\n                                 <mml:mrow>\n                                    <mml:mn>3<\/mml:mn>\n                                    <mml:mi>d<\/mml:mi>\n                                    <mml:mo>\u2212<\/mml:mo>\n                                    <mml:mn>3<\/mml:mn>\n                                    <mml:mo>+<\/mml:mo>\n                                    <mml:mi>\u03be<\/mml:mi>\n                                 <\/mml:mrow>\n                              <\/mml:mfrac>\n                           <\/mml:mrow>\n                        <\/mml:msup>\n                     <\/mml:math>\n                     <jats:inline-graphic xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" xlink:href=\"mlstabd485ieqn9.gif\" xlink:type=\"simple\"\/>\n                  <\/jats:inline-formula>.<\/jats:p>","DOI":"10.1088\/2632-2153\/abd485","type":"journal-article","created":{"date-parts":[[2020,12,17]],"date-time":"2020-12-17T22:15:32Z","timestamp":1608243332000},"page":"025020","update-policy":"https:\/\/doi.org\/10.1088\/crossmark-policy","source":"Crossref","is-referenced-by-count":5,"title":["How isotropic kernels perform on simple invariants"],"prefix":"10.1088","volume":"2","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0348-9986","authenticated-orcid":false,"given":"Jonas","family":"Paccolat","sequence":"first","affiliation":[]},{"given":"Stefano","family":"Spigler","sequence":"additional","affiliation":[]},{"given":"Matthieu","family":"Wyart","sequence":"additional","affiliation":[]}],"member":"266","published-online":{"date-parts":[[2021,3,2]]},"reference":[{"author":"Hestness","key":"mlstabd485bib1"},{"article-title":"Asymptotic learning curves of kernel methods: empirical data vs teacher-student paradigm","year":"2019","author":"Spigler","key":"mlstabd485bib2"},{"key":"mlstabd485bib3","first-page":"669","article-title":"Distance-based classification with Lipschitz functions","volume":"5","author":"von Luxburg","year":"2004","journal-title":"J. Mach. Learn. Res."},{"key":"mlstabd485bib4","first-page":"629","article-title":"Breaking the curse of dimensionality with convex neural networks","volume":"18","author":"Bach","year":"2017","journal-title":"J. Mach. Learn. Res."},{"key":"mlstabd485bib5","doi-asserted-by":"publisher","DOI":"10.1098\/rsta.2015.0203","article-title":"Understanding deep convolutional networks","volume":"374","author":"Mallat","year":"2016","journal-title":"Phil. Trans. R. Soc. 
A"},{"article-title":"Geometry of optimization and implicit regularization in deep learning","year":"2017","author":"Neyshabur","key":"mlstabd485bib6"},{"article-title":"Towards understanding the role of over-parametrization in generalization of neural networks","year":"2018","author":"Neyshabur","key":"mlstabd485bib7"},{"article-title":"Minnorm training: an algorithm for training over-parameterized deep neural networks","year":"2018","author":"Bansal","key":"mlstabd485bib8"},{"article-title":"High-dimensional dynamics of generalization error in neural networks","year":"2017","author":"Advani","key":"mlstabd485bib9"},{"key":"mlstabd485bib10","doi-asserted-by":"publisher","DOI":"10.1088\/1751-8121\/ab4c8b","article-title":"A jamming transition from under-to over-parametrization affects generalization in deep learning","volume":"52","author":"Spigler","year":"2019","journal-title":"J. Phys. A: Math. Theor."},{"key":"mlstabd485bib11","doi-asserted-by":"publisher","DOI":"10.1088\/1742-5468\/ab633c","article-title":"Scaling description of generalization with number of parameters in deep learning","volume":"2020","author":"Geiger","year":"2020","journal-title":"J. Stat. Mech.: Theory Exp."},{"key":"mlstabd485bib12","first-page":"pp 8571","article-title":"Neural tangent kernel: convergence and generalization in neural networks.","author":"Jacot","year":"2018"},{"key":"mlstabd485bib13","doi-asserted-by":"publisher","first-page":"1872","DOI":"10.1109\/TPAMI.2012.230","article-title":"Invariant scattering convolution networks","volume":"35","author":"Bruna","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"article-title":"On exact computation with an infinitely wide neural net","year":"2019","author":"Arora","key":"mlstabd485bib14"},{"key":"mlstabd485bib15","doi-asserted-by":"publisher","first-page":"102","DOI":"10.1016\/j.jco.2005.09.001","article-title":"Approximation by neural networks and learning theory","volume":"22","author":"Maiorov","year":"2006","journal-title":"J. Complexity"},{"key":"mlstabd485bib16","first-page":"pp 369","article-title":"Learning intrinsic dimension and intrinsic entropy of high-dimensional datasets","author":"Costa","year":"2004"},{"key":"mlstabd485bib17","first-page":"pp 289","article-title":"Intrinsic dimensionality estimation of submanifolds in R d","author":"Hein","year":"2005"},{"key":"mlstabd485bib18","doi-asserted-by":"publisher","first-page":"37","DOI":"10.1007\/s10994-012-5294-7","article-title":"Novel high intrinsic dimensionality estimators","volume":"89","author":"Rozza","year":"2012","journal-title":"Mach. Learn."},{"key":"mlstabd485bib19","doi-asserted-by":"publisher","DOI":"10.1038\/s41598-017-11873-y","article-title":"Estimating the intrinsic dimension of datasets by a minimal neighborhood information","volume":"7","author":"Facco","year":"2017","journal-title":"Sci. 
Rep."},{"article-title":"Modelling the influence of data structure on learning in neural networks","year":"2019","author":"Goldt","key":"mlstabd485bib20"},{"article-title":"Geometric compression of invariant manifolds in neural nets","year":"2020","author":"Paccolat","key":"mlstabd485bib21"},{"article-title":"Opening the black box of deep neural networks via information","year":"2017","author":"Shwartz-Ziv","key":"mlstabd485bib22"},{"key":"mlstabd485bib23","first-page":"pp 344","article-title":"Learning curves for Gaussian processes","author":"Sollich","year":"1999"},{"key":"mlstabd485bib24","first-page":"pp 519","article-title":"Gaussian process regression with mismatched models","author":"Sollich","year":"2002"},{"article-title":"Spectrum dependent learning curves in kernel regression and wide neural networks","year":"2020","author":"Bordelon","key":"mlstabd485bib25"},{"key":"mlstabd485bib26","doi-asserted-by":"publisher","first-page":"242","DOI":"10.1214\/aoap\/1029962604","article-title":"Predicting random fields with increasing dense observations","volume":"9","author":"Stein","year":"1999","journal-title":"Ann. Appl. Probab."},{"article-title":"Sobolev norm learning rates for regularized least-squares algorithm","year":"2017","author":"Fischer","key":"mlstabd485bib27"},{"key":"mlstabd485bib28","doi-asserted-by":"publisher","first-page":"331","DOI":"10.1007\/s10208-006-0196-8","article-title":"Optimal rates for the regularized least-squares algorithm","volume":"7","author":"Caponnetto","year":"2007","journal-title":"Found. Comput. Math."},{"key":"mlstabd485bib29","first-page":"pp 8114","article-title":"Statistical optimality of stochastic gradient descent on hard learning problems through multiple passes","author":"Pillaud-Vivien","year":"2018"},{"key":"mlstabd485bib30","doi-asserted-by":"publisher","first-page":"257","DOI":"10.1088\/0305-4470\/21\/1\/030","article-title":"The space of interactions in neural network models","volume":"21","author":"Gardner","year":"1988","journal-title":"J. Phys. A: Math. Gen."},{"key":"mlstabd485bib31","doi-asserted-by":"publisher","first-page":"2975","DOI":"10.1103\/PhysRevLett.82.2975","article-title":"Statistical mechanics of support vector networks","volume":"82","author":"Dietrich","year":"1999","journal-title":"Phys. Rev. Lett."},{"article-title":"The generalization error of max-margin linear classifiers: high-dimensional asymptotics in the overparametrized regime","year":"2019","author":"Montanari","key":"mlstabd485bib32"},{"article-title":"Generalisation error in learning with random features and the hidden manifold model","year":"2020","author":"Gerace","key":"mlstabd485bib33"},{"article-title":"Double trouble in double descent: bias and variance(s) in the lazy regime","year":"2020","author":"d\u2019Ascoli","key":"mlstabd485bib34"},{"year":"2001","author":"Engel","key":"mlstabd485bib35"},{"key":"mlstabd485bib36","first-page":"463","article-title":"Rademacher and gaussian complexities: risk bounds and structural results","volume":"3","author":"Bartlett","year":"2002","journal-title":"J. Mach. Learn. Res."},{"key":"mlstabd485bib37","doi-asserted-by":"publisher","first-page":"323","DOI":"10.1051\/ps:2005018","article-title":"Theory of classification: a survey of some recent advances","volume":"9","author":"Boucheron","year":"2005","journal-title":"ESAIM: Probab. 
Stat."},{"key":"mlstabd485bib38","doi-asserted-by":"publisher","first-page":"4225","DOI":"10.1103\/PhysRevE.52.4225","article-title":"On-line learning in soft committee machines","volume":"52","author":"Saad","year":"1995","journal-title":"Phys. Rev. E"},{"key":"mlstabd485bib39","doi-asserted-by":"publisher","first-page":"2432","DOI":"10.1103\/PhysRevLett.75.2432","article-title":"Weight space structure and internal representations: a direct approach to learning and generalization in multilayer neural networks","volume":"75","author":"Monasson","year":"1995","journal-title":"Phys. Rev. Lett."},{"year":"2001","author":"Opper","key":"mlstabd485bib40"},{"key":"mlstabd485bib41","first-page":"pp 3223","article-title":"The committee machine: computational to statistical gaps in learning a two-layers neural network","author":"Aubin","year":"2018"},{"article-title":"Jamming in multilayer supervised learning models","year":"2018","author":"Franz","key":"mlstabd485bib42"},{"year":"2001","author":"Scholkopf","key":"mlstabd485bib43"},{"article-title":"Gaussian processes and kernel methods: a review on connections and equivalences","year":"2018","author":"Kanagawa","key":"mlstabd485bib44"},{"key":"mlstabd485bib45","doi-asserted-by":"publisher","first-page":"637","DOI":"10.1016\/S0893-6080(98)00032-X","article-title":"The connection between regularization operators and support vector kernels","volume":"11","author":"Smola","year":"1998","journal-title":"Neural Netw."},{"article-title":"Compressing invariant manifolds in neural nets","year":"2020","author":"Paccolat","key":"mlstabd485bib46"},{"key":"mlstabd485bib47","doi-asserted-by":"publisher","first-page":"167","DOI":"10.1007\/s00041-012-9242-5","article-title":"On fourier transforms of radial functions and distributions","volume":"19","author":"Grafakos","year":"2013","journal-title":"J. Fourier Appl."},{"key":"mlstabd485bib48","doi-asserted-by":"publisher","first-page":"301","DOI":"10.1007\/s00041-013-9313-2","article-title":"On radial functions and distributions and their fourier transforms","volume":"20","author":"Estrada","year":"2014","journal-title":"J. Fourier Anal. Appl."},{"key":"mlstabd485bib49","doi-asserted-by":"publisher","first-page":"17","DOI":"10.1137\/0103002","article-title":"Asymptotic representations of fourier integrals and the method of stationary phase","volume":"3","author":"Erd\u00e9lyi","year":"1955","journal-title":"J. Soc. Ind. Appl. 
Math."},{"volume":"vol 17","year":"2004","author":"Wendland","key":"mlstabd485bib50"}],"container-title":["Machine Learning: Science and Technology"],"original-title":[],"link":[{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/abd485","content-type":"text\/html","content-version":"am","intended-application":"text-mining"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/abd485\/pdf","content-type":"application\/pdf","content-version":"am","intended-application":"text-mining"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/abd485","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/abd485\/pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/abd485\/pdf","content-type":"application\/pdf","content-version":"am","intended-application":"syndication"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/abd485\/pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/abd485\/pdf","content-type":"application\/pdf","content-version":"am","intended-application":"similarity-checking"},{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/abd485\/pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,1,30]],"date-time":"2022-01-30T01:51:55Z","timestamp":1643507515000},"score":1,"resource":{"primary":{"URL":"https:\/\/iopscience.iop.org\/article\/10.1088\/2632-2153\/abd485"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,3,2]]},"references-count":50,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2021,3,2]]},"published-print":{"date-parts":[[2021,6,1]]}},"URL":"https:\/\/doi.org\/10.1088\/2632-2153\/abd485","relation":{},"ISSN":["2632-2153"],"issn-type":[{"type":"electronic","value":"2632-2153"}],"subject":[],"published":{"date-parts":[[2021,3,2]]},"assertion":[{"value":"How isotropic kernels perform on simple invariants","name":"article_title","label":"Article Title"},{"value":"Machine Learning: Science and Technology","name":"journal_title","label":"Journal Title"},{"value":"paper","name":"article_type","label":"Article Type"},{"value":"\u00a9 2021 The Author(s). Published by IOP Publishing Ltd","name":"copyright_information","label":"Copyright Information"},{"value":"2020-11-16","name":"date_received","label":"Date Received","group":{"name":"publication_dates","label":"Publication dates"}},{"value":"2020-12-17","name":"date_accepted","label":"Date Accepted","group":{"name":"publication_dates","label":"Publication dates"}},{"value":"2021-03-02","name":"date_epub","label":"Online publication date","group":{"name":"publication_dates","label":"Publication dates"}}]}}