{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,7,30]],"date-time":"2025-07-30T14:21:52Z","timestamp":1753885312138,"version":"3.41.2"},"reference-count":36,"publisher":"Wiley","issue":"1","license":[{"start":{"date-parts":[[2021,6,14]],"date-time":"2021-06-14T00:00:00Z","timestamp":1623628800000},"content-version":"vor","delay-in-days":164,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61403397"],"award-info":[{"award-number":["61403397"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["Computational Intelligence and Neuroscience"],"published-print":{"date-parts":[[2021,1]]},"abstract":"<jats:p>Extensions of kernel methods for the class imbalance problems have been extensively studied. Although they work well in coping with nonlinear problems, the high computation and memory costs severely limit their application to real\u2010world imbalanced tasks. The Nystr\u00f6m method is an effective technique to scale kernel methods. However, the standard Nystr\u00f6m method needs to sample a sufficiently large number of landmark points to ensure an accurate approximation, which seriously affects its efficiency. In this study, we propose a multi\u2010Nystr\u00f6m method based on mixtures of Nystr\u00f6m approximations to avoid the explosion of subkernel matrix, whereas the optimization to mixture weights is embedded into the model training process by multiple kernel learning (MKL) algorithms to yield more accurate low\u2010rank approximation. Moreover, we select subsets of landmark points according to the imbalance distribution to reduce the model\u2019s sensitivity to skewness. 
We also provide a kernel stability analysis of our method and show that the model solution error is bounded by weighted approximation errors, which can help us improve the learning process. Extensive experiments on several large-scale datasets show that our method achieves higher classification accuracy and a dramatic speedup of MKL algorithms.<\/jats:p>","DOI":"10.1155\/2021\/9911871","type":"journal-article","created":{"date-parts":[[2021,6,14]],"date-time":"2021-06-14T19:35:14Z","timestamp":1623699314000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Multi\u2010Nystr\u00f6m Method Based on Multiple Kernel Learning for Large Scale Imbalanced Classification"],"prefix":"10.1155","volume":"2021","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2565-7095","authenticated-orcid":false,"given":"Ling","family":"Wang","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9355-8390","authenticated-orcid":false,"given":"Hongqiao","family":"Wang","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1960-0546","authenticated-orcid":false,"given":"Guangyuan","family":"Fu","sequence":"additional","affiliation":[]}],"member":"311","published-online":{"date-parts":[[2021,6,14]]},"reference":[{"key":"e_1_2_9_1_2","doi-asserted-by":"crossref","unstructured":"Huang C. Li Y. Change Loy C. and Tang X. Learning deep representation for imbalanced classification Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition June 2016 Las Vegas NV USA 5375\u20135384.","DOI":"10.1109\/CVPR.2016.580"},{"key":"e_1_2_9_2_2","unstructured":"Zhu J. and Hovy E. 
Active learning for word sense disambiguation with methods for addressing the class imbalance problem Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL) June 2007 Prague Czech Republic 783\u2013790."},{"key":"e_1_2_9_3_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2020.04.039"},{"key":"e_1_2_9_4_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-0-387-09823-4_45"},{"key":"e_1_2_9_5_2","doi-asserted-by":"publisher","DOI":"10.1109\/tii.2018.2854549"},{"key":"e_1_2_9_6_2","doi-asserted-by":"publisher","DOI":"10.1142\/s0218001409007326"},{"key":"e_1_2_9_7_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSMCC.2011.2161285"},{"key":"e_1_2_9_8_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.eswa.2016.12.035"},{"key":"e_1_2_9_9_2","doi-asserted-by":"publisher","DOI":"10.1613\/jair.953"},{"key":"e_1_2_9_10_2","unstructured":"He H. Bai Y. Edwardo Garcia A. and Li S. Adasyn: adaptive synthetic sampling approach for imbalanced learning Proceedings of the 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence) June 2008 Hong Kong China IEEE 1322\u20131328."},{"key":"e_1_2_9_11_2","doi-asserted-by":"publisher","DOI":"10.1109\/tfuzz.2010.2042721"},{"key":"e_1_2_9_12_2","doi-asserted-by":"publisher","DOI":"10.1109\/tfuzz.2019.2898371"},{"key":"e_1_2_9_13_2","doi-asserted-by":"publisher","DOI":"10.1109\/tnn.2006.882812"},{"key":"e_1_2_9_14_2","doi-asserted-by":"publisher","DOI":"10.1109\/TSMCB.2008.2002909"},{"key":"e_1_2_9_15_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.ins.2013.04.016"},{"key":"e_1_2_9_16_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2017.2751612"},{"key":"e_1_2_9_17_2","doi-asserted-by":"publisher","DOI":"10.1109\/tkde.2005.95"},{"key":"e_1_2_9_18_2","doi-asserted-by":"crossref","unstructured":"Tang Bo and He H. 
Kerneladasyn: kernel based adaptive synthetic data generation for imbalanced learning Proceedings of the 2015 IEEE Congress on Evolutionary Computation (CEC) May 2015 Sendai Japan IEEE 664\u2013671.","DOI":"10.1109\/CEC.2015.7256954"},{"key":"e_1_2_9_19_2","first-page":"682","article-title":"Using the Nystr\u00f6m method to speed up kernel machines","volume":"13","author":"Williams C.","year":"2000","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_9_20_2","first-page":"2153","article-title":"On the Nystr\u00f6m method for approximating a gram matrix for improved kernel-based learning","volume":"6","author":"Drineas P.","year":"2005","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_2_9_21_2","unstructured":"Musco C. and Musco C. Recursive sampling for the Nystr\u00f6m method Proceedings of the 31st International Conference on Neural Information Processing Systems December 2017 Long Beach CA USA 3836\u20133848."},{"key":"e_1_2_9_22_2","first-page":"1060","article-title":"Ensemble Nystr\u00f6m method","volume":"22","author":"Kumar S.","year":"2009","journal-title":"Advances in Neural Information Processing Systems"},{"key":"e_1_2_9_23_2","doi-asserted-by":"crossref","unstructured":"Li Z. Yang T. Zhang L. and Jin R. Fast and accurate refined Nystr\u00f6m-based kernel SVM 30 Proceedings of the AAAI Conference on Artificial Intelligence February 2016 Phoenix AZ USA.","DOI":"10.1609\/aaai.v30i1.10244"},{"key":"e_1_2_9_24_2","doi-asserted-by":"publisher","DOI":"10.1214\/009053607000000677"},{"volume-title":"The Nature of Statistical Learning Theory","year":"2013","author":"Vapnik V.","key":"e_1_2_9_25_2"},{"key":"e_1_2_9_26_2","first-page":"27","article-title":"Learning the kernel matrix with semidefinite programming","volume":"5","author":"Lanckriet G. R. 
G.","year":"2004","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_2_9_27_2","first-page":"2491","article-title":"Simplemkl","volume":"9","author":"Rakotomamonjy A.","year":"2008","journal-title":"Journal of Machine Learning Research"},{"key":"e_1_2_9_28_2","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2019.2922123"},{"key":"e_1_2_9_29_2","unstructured":"KumarS. MohriM. andTalwalkarA. Sampling techniques for the Nystr\u00f6m method Proceedings of the Artificial Intelligence and Statistics April 2009 Clearwater Beach FL USA 304\u2013311."},{"key":"e_1_2_9_30_2","doi-asserted-by":"publisher","DOI":"10.1109\/access.2020.3046604"},{"key":"e_1_2_9_31_2","first-page":"2211","article-title":"Multiple kernel learning algorithms","volume":"12","author":"G\u00f6nen M.","year":"2011","journal-title":"The Journal of Machine Learning Research"},{"key":"e_1_2_9_32_2","first-page":"981","article-title":"Sampling methods for the Nystr\u00f6m method","volume":"13","author":"Kumar S.","year":"2012","journal-title":"The Journal of Machine Learning Research"},{"key":"e_1_2_9_33_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.aml.2011.03.046"},{"key":"e_1_2_9_34_2","first-page":"1523","article-title":"Iteration complexity of feasible descent methods for convex optimization","volume":"15","author":"Wang P.-W.","year":"2014","journal-title":"The Journal of Machine Learning Research"},{"key":"e_1_2_9_35_2","unstructured":"HsiehC.-J. SiSi andDhillonI. S. 
Fast prediction for large-scale kernel machines Proceedings of the Neural Information Processing Systems December 2014 Montreal Quebec Canada Citeseer 3689\u20133697."},{"key":"e_1_2_9_36_2","doi-asserted-by":"publisher","DOI":"10.1109\/TKDE.2012.232"}],"container-title":["Computational Intelligence and Neuroscience"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/cin\/2021\/9911871.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/cin\/2021\/9911871.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1155\/2021\/9911871","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,8,6]],"date-time":"2024-08-06T11:43:29Z","timestamp":1722944609000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1155\/2021\/9911871"}},"subtitle":[],"editor":[{"given":"Cornelio","family":"Y\u00e1\u00f1ez-M\u00e1rquez","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2021,1]]},"references-count":36,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2021,1]]}},"alternative-id":["10.1155\/2021\/9911871"],"URL":"https:\/\/doi.org\/10.1155\/2021\/9911871","archive":["Portico"],"relation":{},"ISSN":["1687-5265","1687-5273"],"issn-type":[{"type":"print","value":"1687-5265"},{"type":"electronic","value":"1687-5273"}],"subject":[],"published":{"date-parts":[[2021,1]]},"assertion":[{"value":"2021-03-09","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2021-05-27","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication 
History"}},{"value":"2021-06-14","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"9911871"}}