{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,29]],"date-time":"2025-09-29T11:54:27Z","timestamp":1759146867057},"reference-count":36,"publisher":"Institute of Electronics, Information and Communications Engineers (IEICE)","issue":"7","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IEICE Trans. Inf. &amp; Syst."],"published-print":{"date-parts":[[2020,7,1]]},"DOI":"10.1587\/transinf.2019edp7274","type":"journal-article","created":{"date-parts":[[2020,6,30]],"date-time":"2020-06-30T22:13:46Z","timestamp":1593555226000},"page":"1693-1702","source":"Crossref","is-referenced-by-count":6,"title":["Stochastic Discrete First-Order Algorithm for Feature Subset Selection"],"prefix":"10.1587","volume":"E103.D","author":[{"given":"Kota","family":"KUDO","sequence":"first","affiliation":[{"name":"Graduate School of Science and Technology, University of Tsukuba"}]},{"given":"Yuichi","family":"TAKANO","sequence":"additional","affiliation":[{"name":"Faculty of Engineering, Information and Systems, University of Tsukuba"}]},{"given":"Ryo","family":"NOMURA","sequence":"additional","affiliation":[{"name":"Center for Data Science, Waseda University"}]}],"member":"532","reference":[{"key":"1","doi-asserted-by":"publisher","unstructured":"[1] O. Al Rasheed, P. Ivani\u0161, and B. Vasi\u0107, \u201cFault-tolerant probabilistic gradient-descent bit flipping decoder,\u201d IEEE Communications Letters, vol.18, no.9, pp.1487-1490, 2014. 10.1109\/lcomm.2014.2344031","DOI":"10.1109\/LCOMM.2014.2344031"},{"key":"2","unstructured":"[2] T.S. Arthanari and Y. Dodge, Mathematical Programming in Statistics, John Wiley &amp; Sons, New York, 1993."},{"key":"3","doi-asserted-by":"publisher","unstructured":"[3] F. Bach, R. Jenatton, J. Mairal, and G. Obozinski, \u201cOptimization with sparsity-inducing penalties,\u201d Foundations and Trends\u00ae in Machine Learning, vol.4, no.1, pp.1-106, 2012. 10.1561\/2200000015","DOI":"10.1561\/2200000015"},{"key":"4","doi-asserted-by":"publisher","unstructured":"[4] D. Bertsimas and A. King, \u201cLogistic regression: From art to science,\u201d Statistical Science, vol.32, no.3, pp.367-384, 2017. 10.1214\/16-sts602","DOI":"10.1214\/16-STS602"},{"key":"5","doi-asserted-by":"publisher","unstructured":"[5] D. Bertsimas, A. King, and R. Mazumder, \u201cBest subset selection via a modern optimization lens,\u201d The Annals of Statistics, vol.44, no.2, pp.813-852, 2016. 10.1214\/15-aos1388","DOI":"10.1214\/15-AOS1388"},{"key":"6","unstructured":"[6] D. Bertsimas, J. Pauphilet, and B. Van Parys, Sparse regression: Scalable algorithms and empirical performance. Statistical Science, in press."},{"key":"7","doi-asserted-by":"publisher","unstructured":"[7] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, \u201cDistributed optimization and statistical learning via the alternating direction method of multipliers,\u201d Foundations and Trends\u00ae in Machine Learning, vol.3, no.1, pp.1-122, 2011. 10.1561\/2200000016","DOI":"10.1561\/2200000016"},{"key":"8","unstructured":"[8] D. Dua and C. Graff, UCI Machine Learning Repository [ http:\/\/archive.ics.uci.edu\/ml ]. Irvine, CA: University of California, School of Information and Computer Science, 2019."},{"key":"9","doi-asserted-by":"publisher","unstructured":"[9] J. Fan and R. Li, \u201cVariable selection via nonconcave penalized likelihood and its oracle properties,\u201d Journal of the American Statistical Association, vol.96, no.456, pp.1348-1360, 2001. 10.1198\/016214501753382273","DOI":"10.1198\/016214501753382273"},{"key":"10","doi-asserted-by":"publisher","unstructured":"[10] B. Efron, T. Hastie, I. Johnstone, and R. Tibshirani, \u201cLeast angle regression,\u201d The Annals of Statistics, vol.32, no.2, pp.407-499, 2004. 10.1214\/009053604000000067","DOI":"10.1214\/009053604000000067"},{"key":"11","unstructured":"[11] M.A. Efroymson, \u201cMultiple regression analysis,\u201d In: Ralston, A., Wilf, H.S. (eds.) Mathematical Methods for Digital Computers, pp.191-203. Wiley, New York, 1960."},{"key":"12","doi-asserted-by":"publisher","unstructured":"[12] L.E. Frank and J.H. Friedman, \u201cA statistical view of some chemometrics regression tools,\u201d Technometrics, vol.35, no.2, pp.109-135, 1993. 10.1080\/00401706.1993.10485033","DOI":"10.1080\/00401706.1993.10485033"},{"key":"13","doi-asserted-by":"publisher","unstructured":"[13] J. Friedman, T. Hastie, and R. Tibshirani, \u201cRegularization paths for generalized linear models via coordinate descent,\u201d Journal of Statistical Software, vol.33, no.1, pp.1-22, 2010. 10.18637\/jss.v033.i01","DOI":"10.18637\/jss.v033.i01"},{"key":"14","doi-asserted-by":"publisher","unstructured":"[14] G.M. Furnival and R.W. Wilson, \u201cRegressions by leaps and bounds,\u201d Technometrics, vol.16, no.4, pp.499-511, 1974. 10.1080\/00401706.1974.10489231","DOI":"10.1080\/00401706.1974.10489231"},{"key":"15","unstructured":"[15] I. Guyon and A. Elisseeff, \u201cAn introduction to variable and feature selection,\u201d Journal of Machine Learning Research, 3(Mar), pp.1157-1182, 2003."},{"key":"16","unstructured":"[16] T. Hastie, R. Tibshirani, and R.J. Tibshirani, Extended comparisons of best subset selection, forward stepwise selection, and the lasso, 2017. arXiv preprint arXiv:1707.08692. https:\/\/arxiv.org\/abs\/1707.08692"},{"key":"17","doi-asserted-by":"crossref","unstructured":"[17] T. Hastie, R. Tibshirani, and M. Wainwright, \u201cStatistical learning with sparsity: the lasso and generalizations,\u201d Chapman and Hall\/CRC, 2015. 10.1201\/b18401","DOI":"10.1201\/b18401"},{"key":"18","doi-asserted-by":"publisher","unstructured":"[18] A.E. Hoerl and R.W. Kennard, \u201cRidge regression: Biased estimation for nonorthogonal problems,\u201d Technometrics, vol.12, no.1, pp.55-67, 1970. 10.1080\/00401706.1970.10488634","DOI":"10.1080\/00401706.1970.10488634"},{"key":"19","unstructured":"[19] S. Kamiya, R. Miyashiro, and Y. Takano, \u201cFeature subset selection for the multinomial logit model via mixed-integer optimization,\u201d In the 22nd International Conference on Artificial Intelligence and Statistics, pp.1254-1263, 2019."},{"key":"20","doi-asserted-by":"publisher","unstructured":"[20] K. Kimura, \u201cApplication of a mixed integer nonlinear programming approach to variable selection in logistic regression,\u201d Journal of the Operations Research Society of Japan, vol.62, no.1, pp.15-36, 2019. 10.15807\/jorsj.62.15","DOI":"10.15807\/jorsj.62.15"},{"key":"21","doi-asserted-by":"publisher","unstructured":"[21] H. Konno and R. Yamamoto, \u201cChoosing the best set of variables in regression analysis using integer programming,\u201d Journal of Global Optimization, vol.44, no.2, pp.273-282, 2009. 10.1007\/s10898-008-9323-9","DOI":"10.1007\/s10898-008-9323-9"},{"key":"22","doi-asserted-by":"crossref","unstructured":"[22] A. Miller, \u201cSubset selection in regression,\u201d Chapman and Hall\/CRC, 2002. 10.1201\/9781420035933","DOI":"10.1201\/9781420035933"},{"key":"23","doi-asserted-by":"crossref","unstructured":"[23] R. Miyashiro and Y. Takano, \u201cSubset selection by Mallows&apos; <i>C<sub>p<\/sub><\/i>: A mixed integer programming approach,\u201d Expert Systems with Applications, vol.42, no.1, pp.325-331, 2015.","DOI":"10.1016\/j.eswa.2014.07.056"},{"key":"24","doi-asserted-by":"publisher","unstructured":"[24] R. Miyashiro and Y. Takano, \u201cMixed integer second-order cone programming formulations for variable selection in linear regression,\u201d European Journal of Operational Research, vol.247, no.3, pp.721-731, 2015. 10.1016\/j.ejor.2015.06.081","DOI":"10.1016\/j.ejor.2015.06.081"},{"key":"25","doi-asserted-by":"publisher","unstructured":"[25] M. Naganuma, Y. Takano, and R. Miyashiro, \u201cFeature subset selection for ordered logit model via tangent-plane-based approximation,\u201d IEICE Transactions on Information and Systems, vol.E102-D, no.5, pp.1046-1053, 2019. 10.1587\/transinf.2018edp7188","DOI":"10.1587\/transinf.2018EDP7188"},{"key":"26","doi-asserted-by":"publisher","unstructured":"[26] B.K. Natarajan, \u201cSparse approximate solutions to linear systems,\u201d SIAM Journal on Computing, vol.24, no.2, pp.227-234, 1995. 10.1137\/s0097539792240406","DOI":"10.1137\/S0097539792240406"},{"key":"27","unstructured":"[27] C.J. van Rijsbergen, Information Retrieval, 2nd Edition. Butterworth-Heinemann, Oxford, 1979."},{"key":"28","doi-asserted-by":"publisher","unstructured":"[28] T. Sato, Y. Takano, and R. Miyashiro, \u201cPiecewise-linear approximation for feature subset selection in a sequential logit model,\u201d Journal of the Operations Research Society of Japan, vol.60, no.1, pp.1-14, 2017. 10.15807\/jorsj.60.1","DOI":"10.15807\/jorsj.60.1"},{"key":"29","doi-asserted-by":"publisher","unstructured":"[29] T. Sato, Y. Takano, R. Miyashiro, and A. Yoshise, \u201cFeature subset selection for logistic regression via mixed integer optimization,\u201d Computational Optimization and Applications, vol.64, no.3, pp.865-880, 2016. 10.1007\/s10589-016-9832-2","DOI":"10.1007\/s10589-016-9832-2"},{"key":"30","doi-asserted-by":"publisher","unstructured":"[30] G. Sundararajan, C. Winstead, and E. Boutillon, \u201cNoisy gradient descent bit-flip decoding for LDPC codes,\u201d IEEE Transactions on Communications, vol.62, no.10, pp.3385-3400, 2014. 10.1109\/tcomm.2014.2356458","DOI":"10.1109\/TCOMM.2014.2356458"},{"key":"31","doi-asserted-by":"publisher","unstructured":"[31] Y. Takano and R. Miyashiro, \u201cBest subset selection via cross-validation criterion,\u201d TOP, 2020. 10.1007\/s11750-020-00538-1","DOI":"10.1007\/s11750-020-00538-1"},{"key":"32","doi-asserted-by":"publisher","unstructured":"[32] R. Tibshirani, \u201cRegression shrinkage and selection via the lasso,\u201d Journal of the Royal Statistical Society: Series B (Methodological), vol.58, no.1, pp.267-288, 1996. 10.1111\/j.2517-6161.1996.tb02080.x","DOI":"10.1111\/j.2517-6161.1996.tb02080.x"},{"key":"33","doi-asserted-by":"crossref","unstructured":"[33] C. Winstead, T. Tithi, E. Boutillon, and F. Ghaffari, \u201cRecent advances on stochastic and noise enhanced methods in error correction decoders,\u201d 10th International Symposium on Turbo Codes &amp; Iterative Information Processing, pp.1-5, 2018. 10.1109\/istc.2018.8625277","DOI":"10.1109\/ISTC.2018.8625277"},{"key":"34","doi-asserted-by":"publisher","unstructured":"[34] C.-H. Zhang, \u201cNearly unbiased variable selection under minimax concave penalty,\u201d The Annals of Statistics, vol.38, no.2, pp.894-942, 2010. 10.1214\/09-aos729","DOI":"10.1214\/09-AOS729"},{"key":"35","doi-asserted-by":"publisher","unstructured":"[35] H. Zou, \u201cThe adaptive lasso and its oracle properties,\u201d Journal of the American Statistical Association, vol.101, no.476, pp.1418-1429, 2006. 10.1198\/016214506000000735","DOI":"10.1198\/016214506000000735"},{"key":"36","doi-asserted-by":"crossref","unstructured":"[36] H. Zou and T. Hastie, \u201cRegularization and variable selection via the elastic net,\u201d Journal of the Royal Statistical Society: Series B (Statistical Methodology), vol.67, no.2, pp.301-320, 2005. 10.1111\/j.1467-9868.2005.00503.x","DOI":"10.1111\/j.1467-9868.2005.00503.x"}],"container-title":["IEICE Transactions on Information and Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.jstage.jst.go.jp\/article\/transinf\/E103.D\/7\/E103.D_2019EDP7274\/_pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,10,31]],"date-time":"2022-10-31T11:22:10Z","timestamp":1667215330000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.jstage.jst.go.jp\/article\/transinf\/E103.D\/7\/E103.D_2019EDP7274\/_article"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,7,1]]},"references-count":36,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2020]]}},"URL":"https:\/\/doi.org\/10.1587\/transinf.2019edp7274","relation":{},"ISSN":["0916-8532","1745-1361"],"issn-type":[{"value":"0916-8532","type":"print"},{"value":"1745-1361","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,7,1]]}}}