{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2023,11,8]],"date-time":"2023-11-08T00:42:35Z","timestamp":1699404155310},"reference-count":0,"publisher":"MIT Press","issue":"12","content-domain":{"domain":["direct.mit.edu"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2023,11,7]]},"abstract":"<jats:title>Abstract<\/jats:title>\n               <jats:p>Deep convolutional neural networks (DCNNs) have demonstrated impressive robustness to recognize objects under transformations (e.g., blur or noise) when these transformations are included in the training set. A hypothesis to explain such robustness is that DCNNs develop invariant neural representations that remain unaltered when the image is transformed. However, to what extent this hypothesis holds true is an outstanding question, as robustness to transformations could be achieved with properties different from invariance; for example, parts of the network could be specialized to recognize either transformed or nontransformed images. This article investigates the conditions under which invariant neural representations emerge by leveraging that they facilitate robustness to transformations beyond the training distribution. Concretely, we analyze a training paradigm in which only some object categories are seen transformed during training and evaluate whether the DCNN is robust to transformations across categories not seen transformed. Our results with state-of-the-art DCNNs indicate that invariant neural representations do not always drive robustness to transformations, as networks show robustness for categories seen transformed during training even in the absence of invariant neural representations. Invariance emerges only as the number of transformed categories in the training set is increased. 
This phenomenon is much more prominent with local transformations such as blurring and high-pass filtering than with geometric transformations such as rotation and thinning, which entail changes in the spatial arrangement of the object. Our results contribute to a better understanding of invariant neural representations in deep learning and the conditions under which they spontaneously emerge.<\/jats:p>","DOI":"10.1162\/neco_a_01621","type":"journal-article","created":{"date-parts":[[2023,10,16]],"date-time":"2023-10-16T21:31:51Z","timestamp":1697491911000},"page":"1910-1937","update-policy":"http:\/\/dx.doi.org\/10.1162\/mitpressjournals.corrections.policy","source":"Crossref","is-referenced-by-count":0,"title":["Robustness to Transformations Across Categories: Is Robustness Driven by Invariant Neural Representations?"],"prefix":"10.1162","volume":"35","author":[{"given":"Hojin","family":"Jang","sequence":"first","affiliation":[{"name":"Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, U.S.A. jangh@mit.edu"}]},{"given":"Syed Suleman Abbas","family":"Zaidi","sequence":"additional","affiliation":[{"name":"Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, U.S.A."},{"name":"Department of Electrical and Computer Engineering, Technical University of Munich, 80333 Munich, Germany ssazaidi@mit.edu"}]},{"given":"Xavier","family":"Boix","sequence":"additional","affiliation":[{"name":"Department of Brain and Cognitive Sciences and Center for Brains, Minds and Machines, MIT, Cambridge, MA 02139, U.S.A."},{"name":"Fujitsu Research of America, Sunnyvale, CA 94085, U.S.A. xboix@fujitsu.com"}]},{"given":"Neeraj","family":"Prasad","sequence":"additional","affiliation":[{"name":"Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, U.S.A. nprasad@mit.edu"}]},{"given":"Sharon","family":"Gilad-Gutnick","sequence":"additional","affiliation":[{"name":"Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, U.S.A. 
sharongu@mit.edu"}]},{"given":"Shlomit","family":"Ben-Ami","sequence":"additional","affiliation":[{"name":"Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, U.S.A. shlomit@mit.edu"}]},{"given":"Pawan","family":"Sinha","sequence":"additional","affiliation":[{"name":"Department of Brain and Cognitive Sciences, MIT, Cambridge, MA 02139, U.S.A. psinha@mit.edu"}]}],"member":"281","published-online":{"date-parts":[[2023,11,7]]},"container-title":["Neural Computation"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/direct.mit.edu\/neco\/article-pdf\/35\/12\/1910\/2168836\/neco_a_01621.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/direct.mit.edu\/neco\/article-pdf\/35\/12\/1910\/2168836\/neco_a_01621.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,11,7]],"date-time":"2023-11-07T21:56:08Z","timestamp":1699394168000},"score":1,"resource":{"primary":{"URL":"https:\/\/direct.mit.edu\/neco\/article\/35\/12\/1910\/117834\/Robustness-to-Transformations-Across-Categories-Is"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,11,7]]},"references-count":0,"journal-issue":{"issue":"12","published-online":{"date-parts":[[2023,11,7]]},"published-print":{"date-parts":[[2023,11,7]]}},"URL":"https:\/\/doi.org\/10.1162\/neco_a_01621","relation":{},"ISSN":["0899-7667","1530-888X"],"issn-type":[{"value":"0899-7667","type":"print"},{"value":"1530-888X","type":"electronic"}],"subject":[],"published-other":{"date-parts":[[2023,12]]},"published":{"date-parts":[[2023,11,7]]}}}