{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,10]],"date-time":"2025-12-10T20:46:07Z","timestamp":1765399567477,"version":"3.46.0"},"posted":{"date-parts":[[2025,12,10]]},"group-title":"PsyArXiv","reference-count":0,"publisher":"Center for Open Science","license":[{"start":{"date-parts":[[2025,12,10]],"date-time":"2025-12-10T00:00:00Z","timestamp":1765324800000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/legalcode"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"abstract":"<p>The self-voice plays a fundamental role in communication and identity, yet remains a relatively neglected topic in psychological science. As AI-generated and digitally manipulated voices become more common, understanding how individuals perceive and process their own voice is increasingly important. Disruptions in self-voice processing are implicated in several clinical conditions, including psychosis, autism, and personality disorders, highlighting the need for integrative models to explain self-voice across contexts. However, research faces two major challenges: a methodological one \u2013 replicating the bone-conducted acoustics that shape natural self-voice perception, and a conceptual one \u2013 a persistent bias toward treating the self-voice as purely auditory. To address these gaps, we propose a framework decomposing the self-voice into five interacting components: auditory, motor, memory, multisensory integration, and self-concept. We review the functional and neural basis of each component and suggest how they converge within distributed brain networks to support coherent self-voice processing. This integrative framework aims to advance theoretical and translational work by bridging psychology, neuroscience, clinical research, and voice technology in the context of emerging digital voice environments.<\/p>","DOI":"10.31234\/osf.io\/kg4ns_v2","type":"posted-content","created":{"date-parts":[[2025,12,10]],"date-time":"2025-12-10T20:41:04Z","timestamp":1765399264000},"source":"Crossref","is-referenced-by-count":0,"title":["From Voice to Self: An Integrative Framework on Self-Voice Processing"],"prefix":"10.31234","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-6965-7578","authenticated-orcid":true,"given":"Pavo","family":"Orepic","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7981-3682","authenticated-orcid":true,"given":"Ana","family":"Pinheiro","sequence":"additional","affiliation":[]}],"member":"15934","container-title":[],"original-title":[],"deposited":{"date-parts":[[2025,12,10]],"date-time":"2025-12-10T20:41:05Z","timestamp":1765399265000},"score":1,"resource":{"primary":{"URL":"https:\/\/osf.io\/kg4ns_v2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,12,10]]},"references-count":0,"URL":"https:\/\/doi.org\/10.31234\/osf.io\/kg4ns_v2","relation":{"is-version-of":[{"id-type":"doi","id":"10.31234\/osf.io\/kg4ns_v1","asserted-by":"subject"}]},"subject":[],"published":{"date-parts":[[2025,12,10]]},"subtype":"preprint"}}