{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,3,25]],"date-time":"2025-03-25T14:10:14Z","timestamp":1742911814766,"version":"3.40.3"},"publisher-location":"Cham","reference-count":18,"publisher":"Springer Nature Switzerland","isbn-type":[{"type":"print","value":"9783031255984"},{"type":"electronic","value":"9783031255991"}],"license":[{"start":{"date-parts":[[2023,1,1]],"date-time":"2023-01-01T00:00:00Z","timestamp":1672531200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,3,9]],"date-time":"2023-03-09T00:00:00Z","timestamp":1678320000000},"content-version":"vor","delay-in-days":67,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2023]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>We study a relaxed version of the column-sampling problem for the Nystr\u00f6m approximation of kernel matrices, where approximations are defined from multisets of landmark points in the ambient space; such multisets are referred to as Nystr\u00f6m samples. We consider an unweighted variation of the radial squared-kernel discrepancy (SKD) criterion as a surrogate for the classical criteria used to assess the Nystr\u00f6m approximation accuracy; in this setting, we discuss how Nystr\u00f6m samples can be efficiently optimised through stochastic gradient descent. We perform numerical experiments which demonstrate that the local minimisation of the radial SKD yields Nystr\u00f6m samples with improved Nystr\u00f6m approximation accuracy in terms of trace, Frobenius and spectral norms.<\/jats:p>","DOI":"10.1007\/978-3-031-25599-1_10","type":"book-chapter","created":{"date-parts":[[2023,3,8]],"date-time":"2023-03-08T04:32:27Z","timestamp":1678249947000},"page":"123-140","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Local Optimisation of\u00a0Nystr\u00f6m Samples Through Stochastic Gradient Descent"],"prefix":"10.1007","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8749-5151","authenticated-orcid":false,"given":"Matthew","family":"Hutchings","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5469-814X","authenticated-orcid":false,"given":"Bertrand","family":"Gauthier","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,3,9]]},"reference":[{"key":"10_CR1","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4419-9096-9","volume-title":"Reproducing Kernel Hilbert Spaces in Probability and Statistics","author":"A Berlinet","year":"2004","unstructured":"Berlinet, A., Thomas-Agnan, C.: Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer, New York (2004). https:\/\/doi.org\/10.1007\/978-1-4419-9096-9"},{"issue":"2","key":"10_CR2","doi-asserted-by":"publisher","first-page":"223","DOI":"10.1137\/16M1080173","volume":"60","author":"L Bottou","year":"2018","unstructured":"Bottou, L., Curtis, F.E., Nocedal, J.: Optimization methods for large-scale machine learning. SIAM Rev. 60(2), 223\u2013311 (2018)","journal-title":"SIAM Rev."},{"key":"10_CR3","doi-asserted-by":"crossref","unstructured":"Cai, D., Chow, E., Erlandson, L., Saad, Y., Xi, Y.: SMASH: structured matrix approximation by separation and hierarchy. Numer. Linear Algebra Appl. 25 (2018)","DOI":"10.1002\/nla.2204"},{"key":"10_CR4","doi-asserted-by":"crossref","unstructured":"Derezinski, M., Khanna, R., Mahoney, M.W.: Improved guarantees and a multiple-descent curve for Column Subset Selection and the Nystr\u00f6m method. In: Advances in Neural Information Processing Systems (2020)","DOI":"10.24963\/ijcai.2021\/647"},{"key":"10_CR5","first-page":"2153","volume":"6","author":"P Drineas","year":"2005","unstructured":"Drineas, P., Mahoney, M.W.: On the Nystr\u00f6m method for approximating a Gram matrix for improved kernel-based learning. J. Mach. Learn. Res. 6, 2153\u20132175 (2005)","journal-title":"J. Mach. Learn. Res."},{"key":"10_CR6","unstructured":"Dua, D., Graff, C.: UCI Machine Learning Repository (2019). http:\/\/archive.ics.uci.edu\/ml"},{"key":"10_CR7","unstructured":"Gauthier, B.: Nystr\u00f6m approximation and reproducing kernels: embeddings, projections and squared-kernel discrepancy. Preprint (2021). https:\/\/hal.archives-ouvertes.fr\/hal-03207443"},{"key":"10_CR8","doi-asserted-by":"publisher","first-page":"375","DOI":"10.1016\/j.csda.2016.10.018","volume":"113","author":"B Gauthier","year":"2017","unstructured":"Gauthier, B., Pronzato, L.: Convex relaxation for IMSE optimal design in random-field models. Comput. Stat. Data Anal. 113, 375\u2013394 (2017)","journal-title":"Comput. Stat. Data Anal."},{"key":"10_CR9","doi-asserted-by":"publisher","first-page":"A3636","DOI":"10.1137\/17M1123614","volume":"40","author":"B Gauthier","year":"2018","unstructured":"Gauthier, B., Suykens, J.: Optimal quadrature-sparsification for integral operator approximation. SIAM J. Sci. Comput. 40, A3636\u2013A3674 (2018)","journal-title":"SIAM J. Sci. Comput."},{"key":"10_CR10","first-page":"1","volume":"17","author":"A Gittens","year":"2016","unstructured":"Gittens, A., Mahoney, M.W.: Revisiting the Nystr\u00f6m method for improved large-scale machine learning. J. Mach. Learn. Res. 17, 1\u201365 (2016)","journal-title":"J. Mach. Learn. Res."},{"key":"10_CR11","first-page":"981","volume":"13","author":"S Kumar","year":"2012","unstructured":"Kumar, S., Mohri, M., Talwalkar, A.: Sampling methods for the Nystr\u00f6m method. J. Mach. Learn. Res. 13, 981\u20131006 (2012)","journal-title":"J. Mach. Learn. Res."},{"key":"10_CR12","unstructured":"Lee, J.D., Simchowitz, M., Jordan, M.I., Recht, B.: Gradient descent only converges to minimizers. In: Conference on Learning Theory, pp. 1246\u20131257. PMLR (2016)"},{"key":"10_CR13","doi-asserted-by":"crossref","unstructured":"Niederreiter, H.: Random Number Generation and Quasi-Monte Carlo Methods. SIAM (1992)","DOI":"10.1137\/1.9781611970081"},{"key":"10_CR14","doi-asserted-by":"crossref","unstructured":"Paulsen, V.I., Raghupathi, M.: An Introduction to the Theory of Reproducing Kernel Hilbert Spaces. Cambridge University Press, Cambridge (2016)","DOI":"10.1017\/CBO9781316219232"},{"key":"10_CR15","volume-title":"Gaussian Processes for Machine Learning","author":"C Rasmussen","year":"2006","unstructured":"Rasmussen, C., Williams, C.: Gaussian Processes for Machine Learning. MIT Press, Cambridge (2006)"},{"key":"10_CR16","unstructured":"Ruder, S.: An overview of gradient descent optimization algorithms. arXiv preprint arXiv:1609.04747 (2016)"},{"key":"10_CR17","doi-asserted-by":"publisher","DOI":"10.1007\/978-1-4757-3799-8","volume-title":"The Design and Analysis of Computer Experiments","author":"TJ Santner","year":"2018","unstructured":"Santner, T.J., Williams, B.J., Notz, W.I.: The Design and Analysis of Computer Experiments. Springer, New York (2018). https:\/\/doi.org\/10.1007\/978-1-4757-3799-8"},{"key":"10_CR18","first-page":"7329","volume":"17","author":"S Wang","year":"2016","unstructured":"Wang, S., Zhang, Z., Zhang, T.: Towards more efficient SPSD matrix approximation and CUR matrix decomposition. J. Mach. Learn. Res. 17, 7329\u20137377 (2016)","journal-title":"J. Mach. Learn. Res."}],"container-title":["Lecture Notes in Computer Science","Machine Learning, Optimization, and Data Science"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/978-3-031-25599-1_10","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,4,5]],"date-time":"2023-04-05T17:18:29Z","timestamp":1680715109000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/978-3-031-25599-1_10"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023]]},"ISBN":["9783031255984","9783031255991"],"references-count":18,"URL":"https:\/\/doi.org\/10.1007\/978-3-031-25599-1_10","relation":{},"ISSN":["0302-9743","1611-3349"],"issn-type":[{"type":"print","value":"0302-9743"},{"type":"electronic","value":"1611-3349"}],"subject":[],"published":{"date-parts":[[2023]]},"assertion":[{"value":"9 March 2023","order":1,"name":"first_online","label":"First Online","group":{"name":"ChapterHistory","label":"Chapter History"}},{"value":"LOD","order":1,"name":"conference_acronym","label":"Conference Acronym","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"International Conference on Machine Learning, Optimization, and Data Science","order":2,"name":"conference_name","label":"Conference Name","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Certosa di Pontignano","order":3,"name":"conference_city","label":"Conference City","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Italy","order":4,"name":"conference_country","label":"Conference Country","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"2022","order":5,"name":"conference_year","label":"Conference Year","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"19 September 2022","order":7,"name":"conference_start_date","label":"Conference Start Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"22 September 2022","order":8,"name":"conference_end_date","label":"Conference End Date","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"8","order":9,"name":"conference_number","label":"Conference Number","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"lod2022","order":10,"name":"conference_id","label":"Conference ID","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"https:\/\/lod2022.icas.cc\/","order":11,"name":"conference_url","label":"Conference URL","group":{"name":"ConferenceInfo","label":"Conference Information"}},{"value":"Double-blind","order":1,"name":"type","label":"Type","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"Easychair","order":2,"name":"conference_management_system","label":"Conference Management System","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"226","order":3,"name":"number_of_submissions_sent_for_review","label":"Number of Submissions Sent for Review","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"85","order":4,"name":"number_of_full_papers_accepted","label":"Number of Full Papers Accepted","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"0","order":5,"name":"number_of_short_papers_accepted","label":"Number of Short Papers Accepted","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"38% - The value is computed by the equation \"Number of Full Papers Accepted \/ Number of Submissions Sent for Review * 100\" and then rounded to a whole number.","order":6,"name":"acceptance_rate_of_full_papers","label":"Acceptance Rate of Full Papers","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"5.6","order":7,"name":"average_number_of_reviews_per_paper","label":"Average Number of Reviews per Paper","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"1.5","order":8,"name":"average_number_of_papers_per_reviewer","label":"Average Number of Papers per Reviewer","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}},{"value":"Yes","order":9,"name":"external_reviewers_involved","label":"External Reviewers Involved","group":{"name":"ConfEventPeerReviewInformation","label":"Peer Review Information (provided by the conference organizers)"}}]}}