{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,19]],"date-time":"2025-09-19T07:21:10Z","timestamp":1758266470017,"version":"3.41.0"},"reference-count":32,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2014,8,29]],"date-time":"2014-08-29T00:00:00Z","timestamp":1409270400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/100000143","name":"Division of Computing and Communication Foundations","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100000143","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000185","name":"Defense Advanced Research Projects Agency","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100000185","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100000121","name":"Division of Mathematical Sciences","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100000121","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/100006602","name":"Air Force Research Laboratory","doi-asserted-by":"publisher","id":[{"id":"10.13039\/100006602","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Knowl. Discov. Data"],"published-print":{"date-parts":[[2014,10,7]]},"abstract":"<jats:p>\n            Let\n            <jats:bold>X<\/jats:bold>\n            be a data matrix of rank \u03c1, whose rows represent\n            <jats:italic>n<\/jats:italic>\n            points in\n            <jats:italic>d<\/jats:italic>\n            -dimensional space. The linear support vector machine constructs a hyperplane separator that maximizes the 1-norm soft margin. 
We develop a new oblivious dimension reduction technique that is precomputed and can be applied to any input matrix\n            <jats:bold>X<\/jats:bold>\n            . We prove that, with high probability, the margin and minimum enclosing ball in the feature space are preserved to within \u03f5-relative error, ensuring generalization comparable to that in the original space in the case of classification. For regression, we show that the margin is preserved to \u03f5-relative error with high probability. We present extensive experiments with real and synthetic data to support our theory.\n          <\/jats:p>","DOI":"10.1145\/2641760","type":"journal-article","created":{"date-parts":[[2014,10,7]],"date-time":"2014-10-07T12:57:47Z","timestamp":1412686667000},"page":"1-25","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":23,"title":["Random Projections for Linear Support Vector Machines"],"prefix":"10.1145","volume":"8","author":[{"given":"Saurabh","family":"Paul","sequence":"first","affiliation":[{"name":"Rensselaer Polytechnic Institute, USA"}]},{"given":"Christos","family":"Boutsidis","sequence":"additional","affiliation":[{"name":"Yahoo! Labs, New York, NY"}]},{"given":"Malik","family":"Magdon-Ismail","sequence":"additional","affiliation":[{"name":"Rensselaer Polytechnic Institute, USA"}]},{"given":"Petros","family":"Drineas","sequence":"additional","affiliation":[{"name":"Rensselaer Polytechnic Institute, USA"}]}],"member":"320","published-online":{"date-parts":[[2014,8,29]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0022-0000(03)00025-4"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/1132516.1132597"},{"volume-title":"Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms. 1--9.","author":"Ailon N.","key":"e_1_2_1_3_1","unstructured":"N. Ailon and E. Liberty . 2008. 
Fast dimension reduction using Rademacher series on dual BCH codes . In Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms. 1--9. N. Ailon and E. Liberty. 2008. Fast dimension reduction using Rademacher series on dual BCH codes. In Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms. 1--9."},{"volume-title":"Proceedings of the 12th International Conference on Algorithmic Learning Theory. 119--134","author":"Balcazar J. L.","key":"e_1_2_1_4_1","unstructured":"J. L. Balcazar , Y. Dai , and O. Watanabe . 2001. A random sampling technique for training support vector machines . In Proceedings of the 12th International Conference on Algorithmic Learning Theory. 119--134 . J. L. Balcazar, Y. Dai, and O. Watanabe. 2001. A random sampling technique for training support vector machines. In Proceedings of the 12th International Conference on Algorithmic Learning Theory. 119--134."},{"volume-title":"Proceedings of SIAM Workshop in Discrete Mathematics and Data Mining.","author":"Balcazar J. L.","key":"e_1_2_1_5_1","unstructured":"J. L. Balcazar , Y. Dai , and O. Watanabe . 2002. Provably fast support vector regression using random sampling . In Proceedings of SIAM Workshop in Discrete Mathematics and Data Mining. J. L. Balcazar, Y. Dai, and O. Watanabe. 2002. Provably fast support vector regression using random sampling. In Proceedings of SIAM Workshop in Discrete Mathematics and Data Mining."},{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1007\/11752790_3"},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2006.881945"},{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/1961189.1961199"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/2488608.2488620"},{"key":"e_1_2_1_10_1","unstructured":"K. Crammer and Y. Singer. 2000. On the learnability and design of output codes for multi-class problems. In Computational Learning Theory. 35--46.   K. Crammer and Y. Singer. 2000. 
On the learnability and design of output codes for multi-class problems. In Computational Learning Theory. 35--46."},{"key":"e_1_2_1_11_1","doi-asserted-by":"crossref","unstructured":"N. Cristianini and J. Shawe-Taylor. 2000. Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press.   N. Cristianini and J. Shawe-Taylor. 2000. Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press.","DOI":"10.1017\/CBO9780511801389"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1002\/rsa.10073"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/1008992.1009036"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00211-010-0331-6"},{"key":"e_1_2_1_15_1","volume-title":"Journal of Machine Learning Research (2008)","author":"Fan R.-E.","year":"2008","unstructured":"R.-E. Fan , K.-W. Chang , C.-J. Hsieh , X.-R. Wang , and C.-J. Lin . 2008 . LIBLINEAR: A library for large linear classification . Journal of Machine Learning Research (2008) , 1871--1874. R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research (2008), 1871--1874."},{"key":"e_1_2_1_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/276698.276876"},{"key":"e_1_2_1_17_1","unstructured":"V. Jethava K. Suresh C. Bhattacharyya and R. Hariharan. 2009. Randomized algorithms for large scale SVMs. CoRR abs\/0909.3609 (2009). http:\/\/arxiv.org\/abs\/0909.3609.  V. Jethava K. Suresh C. Bhattacharyya and R. Hariharan. 2009. Randomized algorithms for large scale SVMs. CoRR abs\/0909.3609 (2009). http:\/\/arxiv.org\/abs\/0909.3609."},{"key":"e_1_2_1_18_1","unstructured":"S. Krishnan C. Bhattacharyya and R. Hariharan. 2008. A randomized algorithm for large scale support vector learning. In Advances in 20th Neural Information Processing Systems. 793--800.  S. Krishnan C. Bhattacharyya and R. Hariharan. 2008. 
A randomized algorithm for large scale support vector learning. In Advances in 20th Neural Information Processing Systems. 793--800."},{"key":"e_1_2_1_19_1","unstructured":"D. D. Lewis Y. Yang T. G. Rose and F. Li. 2004. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research (2004) 361--397.   D. D. Lewis Y. Yang T. G. Rose and F. Li. 2004. RCV1: A new benchmark collection for text categorization research. Journal of Machine Learning Research (2004) 361--397."},{"key":"e_1_2_1_20_1","doi-asserted-by":"crossref","unstructured":"J. Z. Li D. M. Absher H. Tang A. M. Southwick A. M. Casto S. Ramachandran H. M. Cann G. S. Barsh M. Feldman L. L. Cavalli-Sforza and R. M. Myers. 2008. Worldwide human relationships inferred from genome-wide patterns of variation. Science 319 5866 (2008) 1100--1104.  J. Z. Li D. M. Absher H. Tang A. M. Southwick A. M. Casto S. Ramachandran H. M. Cann G. S. Barsh M. Feldman L. L. Cavalli-Sforza and R. M. Myers. 2008. Worldwide human relationships inferred from genome-wide patterns of variation. Science 319 5866 (2008) 1100--1104.","DOI":"10.1126\/science.1153717"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1145\/1150402.1150436"},{"volume-title":"Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms. 1422--1436","author":"Magen A.","key":"e_1_2_1_22_1","unstructured":"A. Magen and A. Zouzias . 2011. Low rank matrix-valued Chernoff bounds and approximate matrix multiplication . In Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms. 1422--1436 . A. Magen and A. Zouzias. 2011. Low rank matrix-valued Chernoff bounds and approximate matrix multiplication. In Proceedings of the 22nd Annual ACM-SIAM Symposium on Discrete Algorithms. 
1422--1436."},{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/2488608.2488621"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/FOCS.2013.21"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1136\/jmg.2010.078212"},{"volume-title":"Proceedings of the 16th International Conference on Artificial Intelligence & Statistics, JMLR W&CP. 498--506","author":"Paul S.","key":"e_1_2_1_26_1","unstructured":"S. Paul , C. Boutsidis , M. Magdon-Ismail , and P. Drineas . 2013. Random projections for support vector machines . In Proceedings of the 16th International Conference on Artificial Intelligence & Statistics, JMLR W&CP. 498--506 . S. Paul, C. Boutsidis, M. Magdon-Ismail, and P. Drineas. 2013. Random projections for support vector machines. In Proceedings of the 16th International Conference on Artificial Intelligence & Statistics, JMLR W&CP. 498--506."},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1038\/73432"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.5555\/1577069.1755873"},{"key":"e_1_2_1_29_1","volume-title":"Proceedings of 29th International Conference on Machine Learning. 591--598","author":"Shi Q.","year":"2012","unstructured":"Q. Shi , C. Shen , R. Hill , and A. V. D. Hengel . 2012 . Is margin preserved after random projection? . In Proceedings of 29th International Conference on Machine Learning. 591--598 . Q. Shi, C. Shen, R. Hill, and A. V. D. Hengel. 2012. Is margin preserved after random projection?. In Proceedings of 29th International Conference on Machine Learning. 591--598."},{"key":"e_1_2_1_30_1","doi-asserted-by":"crossref","unstructured":"V. N. Vapnik and A. Chervonenkis. 1971. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications 16 (1971) 264--280.  V. N. Vapnik and A. Chervonenkis. 1971. On the uniform convergence of relative frequencies of events to their probabilities. 
Theory of Probability and its Applications 16 (1971) 264--280.","DOI":"10.1137\/1116025"},{"key":"e_1_2_1_31_1","volume-title":"Statistical Learning Theory","author":"Vapnik V. N.","year":"1998","unstructured":"V. N. Vapnik. 1998. Statistical Learning Theory. Wiley, New York."},{"key":"e_1_2_1_32_1","volume-title":"Proceedings of the Conference on Learning Theory (COLT'13)","volume":"30","author":"Zhang L.","unstructured":"L. Zhang , M. Mahdavi , R. Jin , and T. Yang . 2013. Recovering optimal solution by dual random projection . In Proceedings of the Conference on Learning Theory (COLT'13) JMLR W&CP, Vol. 30 . 135--157. L. Zhang, M. Mahdavi, R. Jin, and T. Yang. 2013. Recovering optimal solution by dual random projection. In Proceedings of the Conference on Learning Theory (COLT'13) JMLR W&CP, Vol. 30. 
135--157."}],"container-title":["ACM Transactions on Knowledge Discovery from Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/2641760","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/2641760","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T06:56:19Z","timestamp":1750229779000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/2641760"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2014,8,29]]},"references-count":32,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2014,10,7]]}},"alternative-id":["10.1145\/2641760"],"URL":"https:\/\/doi.org\/10.1145\/2641760","relation":{},"ISSN":["1556-4681","1556-472X"],"issn-type":[{"type":"print","value":"1556-4681"},{"type":"electronic","value":"1556-472X"}],"subject":[],"published":{"date-parts":[[2014,8,29]]},"assertion":[{"value":"2013-04-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2013-12-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2014-08-29","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}