{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,7,7]],"date-time":"2024-07-07T23:10:45Z","timestamp":1720393845501},"reference-count":19,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2018,7,31]],"date-time":"2018-07-31T00:00:00Z","timestamp":1532995200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2018,7,31]],"date-time":"2018-07-31T00:00:00Z","timestamp":1532995200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["IPSJ T Comput Vis Appl"],"published-print":{"date-parts":[[2018,12]]},"abstract":"<jats:title>Abstract<\/jats:title>\n               <jats:p>We present a simple multi-scale learning network for image classification that is inspired by MobileNet. The proposed method has two advantages: (1) It uses a multi-scale block with depthwise separable convolutions, which forms multiple sub-networks by increasing the width of the network while keeping the computational resources constant. (2) It combines the multi-scale block with residual connections, which accelerates the training of the network significantly.
The experimental results show that the proposed method has strong performance compared to other popular models on different datasets.<\/jats:p>","DOI":"10.1186\/s41074-018-0047-6","type":"journal-article","created":{"date-parts":[[2018,7,31]],"date-time":"2018-07-31T11:46:20Z","timestamp":1533037580000},"update-policy":"http:\/\/dx.doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["An multi-scale learning network with depthwise separable convolutions"],"prefix":"10.1186","volume":"10","author":[{"given":"Gaihua","family":"Wang","sequence":"first","affiliation":[]},{"given":"Guoliang","family":"Yuan","sequence":"additional","affiliation":[]},{"given":"Tao","family":"Li","sequence":"additional","affiliation":[]},{"given":"Meng","family":"Lv","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2018,7,31]]},"reference":[{"key":"47_CR1","doi-asserted-by":"publisher","first-page":"193","DOI":"10.1007\/BF00344251","volume":"36","author":"K Fukushima","year":"1980","unstructured":"Fukushima K (1980) A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position. Biol Cybern 36:193\u2013202.","journal-title":"Biol Cybern"},{"issue":"11","key":"47_CR2","doi-asserted-by":"publisher","first-page":"2278","DOI":"10.1109\/5.726791","volume":"86","author":"Y LeCun","year":"1998","unstructured":"LeCun Y, Bottou L, Bengio Y, Haffner P (1998) Gradient-based learning applied to document recognition. Proc IEEE 86(11):2278\u20132324.","journal-title":"Proc IEEE"},{"key":"47_CR3","doi-asserted-by":"publisher","first-page":"60","DOI":"10.1016\/j.media.2017.07.005","volume":"42","author":"G Litjens","year":"2017","unstructured":"Litjens G, Kooi T, Bejnordi BE, Setio AAA (2017) A survey on deep learning in medical image analysis. 
Med Image Anal 42:60\u201388.","journal-title":"Med Image Anal"},{"issue":"7553","key":"47_CR4","doi-asserted-by":"publisher","first-page":"436","DOI":"10.1038\/nature14539","volume":"521","author":"Y LeCun","year":"2015","unstructured":"LeCun Y, Bengio Y, Hinton G (2015) Deep learning. Nature 521(7553):436.","journal-title":"Nature"},{"key":"47_CR5","first-page":"1106","volume":"25","author":"A Krizhevsky","year":"2012","unstructured":"Krizhevsky A, Sutskever I, Hinton G (2012) ImageNet classification with deep convolutional neural networks. Adv Neural Inf Process Syst 25:1106\u20131114.","journal-title":"Adv Neural Inf Process Syst"},{"key":"47_CR6","first-page":"1929","volume":"15","author":"N Srivastava","year":"2014","unstructured":"Srivastava N, Hinton G, Krizhevsky A (2014) Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res 15:1929\u20131958.","journal-title":"J Mach Learn Res"},{"key":"47_CR7","unstructured":"Simonyan K, Zisserman A (2015) Very deep convolutional networks for large-scale image recognition. Computer Vision and Pattern Recognition 6. https:\/\/arxiv.org\/abs\/1409.1556v6."},{"key":"47_CR8","doi-asserted-by":"publisher","first-page":"446","DOI":"10.1016\/j.patcog.2017.06.037","volume":"72","author":"L Ren","year":"2017","unstructured":"Ren L, Lu J, Feng J, Zhou J (2017) Multi-modal uniform deep learning for RGB-D person re-identification. Pattern Recogn 72:446\u2013457.","journal-title":"Pattern Recogn"},{"key":"47_CR9","doi-asserted-by":"crossref","first-page":"94","DOI":"10.1016\/j.patcog.2017.05.024","volume":"71","author":"Z Liao","year":"2017","unstructured":"Liao Z, Carneiro G (2017) A deep convolutional neural network module that promotes competition of multiple-size filters. 
Pattern Recogn 71:94\u2013105.","journal-title":"Pattern Recogn"},{"key":"47_CR10","first-page":"1","volume":"000","author":"MJ Afridi","year":"2017","unstructured":"Afridi MJ, Ross A, Shapiro EM (2017) On automated source selection for transfer learning in convolutional neural networks. Pattern Recogn 000:1\u201311.","journal-title":"Pattern Recogn"},{"key":"47_CR11","first-page":"770","volume-title":"Deep residual learning for image recognition","author":"K He","year":"2015","unstructured":"He K, Zhang X, Ren S, Sun J (2015) Deep residual learning for image recognition, pp 770\u2013778."},{"key":"47_CR12","first-page":"4842v1","volume":"1409","author":"C Szegedy","year":"2014","unstructured":"Szegedy C, Liu W, Jia Y (2014) Going deeper with convolutions. arXiv 1409:4842v1 1.","journal-title":"arXiv"},{"key":"47_CR13","first-page":"05431v2","volume":"1611","author":"S Xie","year":"2017","unstructured":"Xie S, Girshick R, Doll\u00e1r P, Tu Z, He K (2017) Aggregated residual transformations for deep neural networks. arXiv 1611:05431v2 2.","journal-title":"arXiv"},{"key":"47_CR14","doi-asserted-by":"publisher","first-page":"1558","DOI":"10.1109\/TMM.2017.2659221","volume":"19","author":"Z Ma","year":"2017","unstructured":"Ma Z, Chang X, Yang Y, Sebe N (2017) The many shades of negativity. IEEE Trans Multimedia 19:1558\u20131568.","journal-title":"IEEE Trans Multimedia"},{"key":"47_CR15","doi-asserted-by":"publisher","first-page":"661","DOI":"10.1109\/TMM.2012.2237023","volume":"15","author":"Y Yang","year":"2013","unstructured":"Yang Y, Ma Z, Hauptmann GA (2013) Feature selection for multimedia analysis by sharing information among multiple tasks. IEEE Trans Multimedia 15:661\u2013669.","journal-title":"IEEE Trans Multimedia"},{"key":"47_CR16","first-page":"02357v3","volume":"1610","author":"F Chollet","year":"2017","unstructured":"Chollet F (2017) Xception: deep learning with depthwise separable convolutions. 
arXiv 1610:02357v3 3.","journal-title":"arXiv"},{"key":"47_CR17","first-page":"04337v1","volume":"1608","author":"M Wang","year":"2016","unstructured":"Wang M, Liu B, Foroosh H (2016) Factorized convolutional neural networks. arXiv 1608:04337v1 1.","journal-title":"arXiv"},{"key":"47_CR18","first-page":"04861v1","volume":"1704","author":"AG Howard","year":"2017","unstructured":"Howard AG, Zhu M, Dmitry BC, Kalenichenko D (2017) MobileNets: efficient convolutional neural networks for mobile vision applications. arXiv 1704:04861v1 1.","journal-title":"arXiv"},{"key":"47_CR19","first-page":"01083v1","volume":"1707","author":"X Zhang","year":"2017","unstructured":"Zhang X, Zhou X, Lin M, Sun J (2017) ShuffleNet: an extremely efficient convolutional neural network for mobile devices. arXiv 1707:01083v1 1.","journal-title":"arXiv"}],"container-title":["IPSJ Transactions on Computer Vision and Applications"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s41074-018-0047-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s41074-018-0047-6\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s41074-018-0047-6.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,7,30]],"date-time":"2021-07-30T12:08:51Z","timestamp":1627646931000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1186\/s41074-018-0047-6"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2018,7,31]]},"references-count":19,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2018,12]]}},"alternative-id":["47"],"URL":"https:\/\/doi.org\/10.1186\/s41074-018-0047-6","relation":{},"ISSN":["1882-6695"],"issn-type":[{"
value":"1882-6695","type":"electronic"}],"subject":[],"published":{"date-parts":[[2018,7,31]]},"assertion":[{"value":"20 March 2018","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"20 July 2018","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"31 July 2018","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare that they have no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}},{"value":"Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Publisher\u2019s Note"}}],"article-number":"11"}}
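The abstract in the record above claims that depthwise separable convolutions let the multi-scale block widen the network "while keeping the computational resources constant", following the MobileNet design it cites (reference 47_CR18). As a rough illustration of why that factorization is cheap, the sketch below compares multiply-accumulate counts for a standard convolution versus a depthwise separable one under the usual cost model (kernel size Dk, input channels M, output channels N, Df x Df output map). The function names and the example layer sizes are illustrative assumptions, not taken from the paper itself.

```python
import math

def standard_conv_macs(dk: int, m: int, n: int, df: int) -> int:
    """Multiply-accumulates for a standard Dk x Dk convolution
    with M input channels, N output channels, and a Df x Df output map."""
    return dk * dk * m * n * df * df

def separable_conv_macs(dk: int, m: int, n: int, df: int) -> int:
    """Depthwise (one Dk x Dk filter per input channel) plus pointwise
    (1 x 1) convolution, as factorized in MobileNet-style blocks."""
    depthwise = dk * dk * m * df * df
    pointwise = m * n * df * df
    return depthwise + pointwise

# Example: a typical 3x3 layer mapping 32 -> 64 channels on a 112x112 map.
dk, m, n, df = 3, 32, 64, 112
ratio = separable_conv_macs(dk, m, n, df) / standard_conv_macs(dk, m, n, df)
# The cost ratio reduces algebraically to 1/N + 1/Dk^2.
assert math.isclose(ratio, 1 / n + 1 / dk**2)
```

With Dk = 3 the separable form costs roughly an eighth to a ninth of the standard convolution, which is what allows a block to add width (multiple sub-networks) at approximately constant computational cost.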