{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2022,4,2]],"date-time":"2022-04-02T02:11:43Z","timestamp":1648865503725},"reference-count":0,"publisher":"IOS Press","license":[{"start":{"date-parts":[[2021,10,14]],"date-time":"2021-10-14T00:00:00Z","timestamp":1634169600000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2021,10,14]]},"abstract":"<jats:p>To address the interpretability of deep learning, this paper proposes a feature back-tracking (FBT) approach based on a sparse deep learning architecture. First, for a deep belief network (DBN), both a Kullback-Leibler divergence penalty on the hidden neurons and an L1-norm penalty on the connection weights are introduced, so that the sparse response mechanism and the sparse connectivity of brain neurons can be simulated directly. The DBN can thus learn a sparse framework and an effective sparse data representation. On this basis, the feature back-tracking technique is put forward. On both single nucleotide polymorphism (SNP) data and MNIST data, FBT performs well at locating the risk loci on the genes and the important sites of the digit data. This shows that the proposed FBT method can pick out the essential features through a deep learning architecture while maintaining high classification accuracy and data storage ability. Utilizing sparse layer-wise feature learning to extract key features from the original data is an effective attempt to explore the profound mechanisms of the human brain and the interpretability of deep learning.<\/jats:p>","DOI":"10.3233\/faia210211","type":"book-chapter","created":{"date-parts":[[2021,10,20]],"date-time":"2021-10-20T21:54:26Z","timestamp":1634766866000},"source":"Crossref","is-referenced-by-count":0,"title":["Feature Back-Tracking with Sparse Deep Belief Networks"],"prefix":"10.3233","author":[{"given":"Chen","family":"Qiao","sequence":"first","affiliation":[{"name":"School of Mathematics and Statistics, Xi\u2019an Jiaotong University, China"}]},{"given":"Jiajia","family":"Li","sequence":"additional","affiliation":[{"name":"School of Mathematics and Statistics, Xi\u2019an Jiaotong University, China"}]},{"given":"Xuewu","family":"Zhang","sequence":"additional","affiliation":[{"name":"China Railway First Survey and Design Institute Group Co., Ltd, China"}]},{"given":"Cheng","family":"Zhang","sequence":"additional","affiliation":[{"name":"China Railway First Survey and Design Institute Group Co., Ltd, China"}]},{"given":"Wenfeng","family":"Jing","sequence":"additional","affiliation":[{"name":"School of Mathematics and Statistics, Xi\u2019an Jiaotong University, China"}]},{"given":"Danglin","family":"Yang","sequence":"additional","affiliation":[{"name":"Suzhou Hanlin Information Technology Development Co., Ltd, China"}]}],"member":"7437","container-title":["Frontiers in Artificial Intelligence and Applications","Fuzzy Systems and Data Mining VII"],"original-title":[],"link":[{"URL":"https:\/\/ebooks.iospress.nl\/pdf\/doi\/10.3233\/FAIA210211","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,10,25]],"date-time":"2021-10-25T13:40:46Z","timestamp":1635169246000},"score":1,"resource":{"primary":{"URL":"https:\/\/ebooks.iospress.nl\/doi\/10.3233\/FAIA210211"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,10,14]]},"references-count":0,"URL":"https:\/\/doi.org\/10.3233\/faia210211","relation":{},"ISSN":["0922-6389","1879-8314"],"issn-type":[{"value":"0922-6389","type":"print"},{"value":"1879-8314","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,10,14]]}}}