{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"institution":[{"name":"bioRxiv"}],"indexed":{"date-parts":[[2026,3,19]],"date-time":"2026-03-19T12:58:03Z","timestamp":1773925083863,"version":"3.50.1"},"posted":{"date-parts":[[2026,3,17]]},"group-title":"Neuroscience","reference-count":18,"publisher":"openRxiv","license":[{"start":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T00:00:00Z","timestamp":1773705600000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by-nc\/4.0\/"}],"funder":[{"name":"EU Next Generation under the Recovery and Resilience Facility","award":["IASOMM2024007 (ENGRAMMER project)"],"award-info":[{"award-number":["IASOMM2024007 (ENGRAMMER project)"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"accepted":{"date-parts":[[2026,3,17]]},"abstract":"<jats:title>\n                  A\n                  <jats:sc>bstract<\/jats:sc>\n                <\/jats:title>\n                <jats:p>One of the most compelling ideas for bridging neuroscience and artificial neural networks is the establishment of a framework based on three main components: network architecture, optimization mechanism, and loss (or objective) function to be minimized. While the first two components have been extensively explored, the definition of a loss or objective function in neuroscience has been addressed less thoroughly, often from perspectives such as predictive coding. In this work, we propose an elementary loss function grounded in the comparison of neuronal responses to two signals: an external one, used for learning, and an internal one, reflecting the acquired knowledge. The loss function is thus simply the basic difference between the two, which, in terms of logical signals, corresponds to a well-known non-linearly separable function: the XOR function. 
We illustrate with a computational example how a binarized image recognition algorithm can be straightforwardly implemented in an autoencoder, and we show how a neuronal motif organized around an inhibitory neuron could implement such an XOR operation and provide a feedback signal that makes optimization possible.<\/jats:p>","DOI":"10.64898\/2026.03.16.712061","type":"posted-content","created":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T17:03:37Z","timestamp":1773767017000},"source":"Crossref","is-referenced-by-count":0,"title":["Toward defining loss functions in neuroscience: an XOR-based neuronal mechanism"],"prefix":"10.64898","author":[{"ORCID":"https:\/\/orcid.org\/0009-0001-6456-9832","authenticated-orcid":false,"given":"Mar\u00eda","family":"Pe\u00f1a Fern\u00e1ndez","sequence":"first","affiliation":[{"name":"Advanced Computing and e-Science Group, Instituto de F\u00edsica de Cantabria (IFCA) CSIC-Universidad de Cantabria Santander, ES 39005 SPAIN"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0157-4765","authenticated-orcid":false,"given":"Lara Lloret","family":"Iglesias","sequence":"additional","affiliation":[{"name":"Advanced Computing and e-Science Group, Instituto de F\u00edsica de Cantabria (IFCA) CSIC-Universidad de Cantabria Santander, ES 39005 SPAIN"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7914-8494","authenticated-orcid":false,"given":"Jes\u00fas Marco","family":"de Lucas","sequence":"additional","affiliation":[{"name":"Advanced Computing and e-Science Group, Instituto de F\u00edsica de Cantabria (IFCA) CSIC-Universidad de Cantabria Santander, ES 39005 SPAIN"}]}],"member":"54368","reference":[{"key":"2026031902151188000_2026.03.16.712061v1.1","doi-asserted-by":"crossref","unstructured":"Gilra, A. & Gerstner, W. 
(2017), \u2018Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network\u2019, eLife 6(e28295).","DOI":"10.7554\/eLife.28295"},{"key":"2026031902151188000_2026.03.16.712061v1.2","doi-asserted-by":"crossref","unstructured":"Lillicrap, T. P. , Cownden, D. , Tweed, D. B. & Akerman, C. J. (2016), \u2018Random synaptic feedback weights support error backpropagation for deep learning\u2019, Nature Communications 7(13276).","DOI":"10.1038\/ncomms13276"},{"key":"2026031902151188000_2026.03.16.712061v1.3","doi-asserted-by":"publisher","DOI":"10.1038\/s41583-020-0277-3"},{"key":"2026031902151188000_2026.03.16.712061v1.4","article-title":"\u2018Implementing engrams from a machine learning perspective: matching for prediction\u2019","year":"2023","journal-title":"arXiv preprint"},{"key":"2026031902151188000_2026.03.16.712061v1.5","article-title":"\u2018From worms to mice: homeostasis maybe all you need\u2019","year":"2024","journal-title":"arXiv preprint"},{"key":"2026031902151188000_2026.03.16.712061v1.6","article-title":"\u2018Implementing engrams from a machine learning perspective: the relevance of a latent space\u2019","year":"2024","journal-title":"arXiv preprint"},{"key":"2026031902151188000_2026.03.16.712061v1.7","doi-asserted-by":"publisher","DOI":"10.7554\/eLife.43299"},{"key":"2026031902151188000_2026.03.16.712061v1.8","doi-asserted-by":"publisher","DOI":"10.1007\/BF00275687"},{"key":"2026031902151188000_2026.03.16.712061v1.9","doi-asserted-by":"publisher","DOI":"10.1038\/s41593-021-00857-x"},{"key":"2026031902151188000_2026.03.16.712061v1.10","article-title":"\u2018Implementing engrams from a machine learning perspective: XOR as a basic motif\u2019","year":"2024","journal-title":"arXiv 
preprint"},{"key":"2026031902151188000_2026.03.16.712061v1.11","doi-asserted-by":"publisher","DOI":"10.1038\/4580"},{"key":"2026031902151188000_2026.03.16.712061v1.12","doi-asserted-by":"publisher","DOI":"10.1038\/s41593-019-0520-2"},{"key":"2026031902151188000_2026.03.16.712061v1.13","doi-asserted-by":"crossref","unstructured":"Rosenblatt, F. (1962), Principles of Neurodynamics: Perceptrons and the Theory of Brain Mechanisms, Spartan Books.","DOI":"10.21236\/AD0256582"},{"key":"2026031902151188000_2026.03.16.712061v1.14","doi-asserted-by":"publisher","DOI":"10.1038\/323533a0"},{"key":"2026031902151188000_2026.03.16.712061v1.15","doi-asserted-by":"publisher","DOI":"10.1073\/pnas.70.4.997"},{"key":"2026031902151188000_2026.03.16.712061v1.16","doi-asserted-by":"publisher","DOI":"10.1016\/j.tics.2018.12.005"},{"key":"2026031902151188000_2026.03.16.712061v1.17","doi-asserted-by":"crossref","unstructured":"Wu, Y. , Zhao, R. , Zhu, J. , Chen, F. , Xu, M. , Li, G. , Song, S. , Deng, L. , Wang, G. , Zheng, H. , Ma, S. , Pei, J. , Zhang, Y. , Zhao, M. & Shi, L. (2022), \u2018Brain-inspired global-local learning incorporated with neuromorphic computing\u2019, Nature Communications 13(65).","DOI":"10.1038\/s41467-021-27653-2"},{"key":"2026031902151188000_2026.03.16.712061v1.18","doi-asserted-by":"crossref","unstructured":"Zhang, R. , Isola, P. , Efros, A. A. , Shechtman, E. & Wang, O. (2018), The unreasonable effectiveness of deep features as a perceptual metric, in \u2018Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)\u2019, pp. 
586\u2013595.","DOI":"10.1109\/CVPR.2018.00068"}],"container-title":[],"original-title":[],"link":[{"URL":"https:\/\/syndication.highwire.org\/content\/doi\/10.64898\/2026.03.16.712061","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,3,19]],"date-time":"2026-03-19T09:15:26Z","timestamp":1773911726000},"score":1,"resource":{"primary":{"URL":"http:\/\/biorxiv.org\/lookup\/doi\/10.64898\/2026.03.16.712061"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,3,17]]},"references-count":18,"URL":"https:\/\/doi.org\/10.64898\/2026.03.16.712061","relation":{},"subject":[],"published":{"date-parts":[[2026,3,17]]},"subtype":"preprint"}}