{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2023,7,22]],"date-time":"2023-07-22T07:10:40Z","timestamp":1690009840177},"reference-count":0,"publisher":"Acoustical Society of America (ASA)","issue":"5_Supplement","content-domain":{"domain":["pubs.aip.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2007,11,1]]},"abstract":"<jats:p>Clustering based on the normalized cut criterion, and more generally, spectral clustering methods, are techniques originally proposed to model perceptual grouping tasks, such as image segmentation in computer vision. In this work, it is shown how such techniques can be applied to the problem of dominant melodic source separation in polyphonic music audio signals. One of the main advantages of this approach is the ability to incorporate multiple perceptually-inspired grouping criteria into a single framework without requiring multiple processing stages, as many existing computational auditory scene analysis approaches do. Experimental results for several tasks, including dominant melody pitch detection, are presented. The system is based on a sinusoidal modeling analysis front-end. A novel similarity cue based on harmonicity (harmonically-wrapped peak similarity) is also introduced. The proposed system is data-driven (i.e., requires no prior knowledge about the extracted source), causal, robust, practical, and efficient (close to real-time on a fast computer). Although a specific implementation is presented, one of the main advantages of the proposed approach is its ability to utilize different analysis front-ends and grouping criteria in a straightforward manner.<\/jats:p>","DOI":"10.1121\/1.2942659","type":"journal-article","created":{"date-parts":[[2008,9,9]],"date-time":"2008-09-09T15:33:53Z","timestamp":1220974433000},"page":"2988-2989","update-policy":"http:\/\/dx.doi.org\/10.1063\/aip-crossmark-policy-page","source":"Crossref","is-referenced-by-count":0,"title":["A framework for sound source separation using spectral clustering"],"prefix":"10.1121","volume":"122","author":[{"given":"George","family":"Tzanetakis","sequence":"first","affiliation":[{"name":"Dept. of Comput. Sci., Univ. of Victoria, Canada"}]},{"given":"Mathieu","family":"Lagrange","sequence":"additional","affiliation":[{"name":"Dept. of Comput. Sci., Univ. of Victoria, Canada"}]},{"given":"Luis Gustavo","family":"Martins","sequence":"additional","affiliation":[{"name":"Dept. of Comput. Sci., Univ. of Victoria, Canada"}]},{"given":"Jennifer","family":"Murdoch","sequence":"additional","affiliation":[{"name":"Dept. of Comput. Sci., Univ. of Victoria, Canada"}]}],"member":"231","container-title":["The Journal of the Acoustical Society of America"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/pubs.aip.org\/jasa\/article\/122\/5_Supplement\/2988\/712147\/A-framework-for-sound-source-separation-using","content-type":"application\/pdf","content-version":"vor","intended-application":"syndication"},{"URL":"https:\/\/pubs.aip.org\/jasa\/article\/122\/5_Supplement\/2988\/712147\/A-framework-for-sound-source-separation-using","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,7,22]],"date-time":"2023-07-22T06:51:07Z","timestamp":1690008667000},"score":1,"resource":{"primary":{"URL":"https:\/\/pubs.aip.org\/jasa\/article\/122\/5_Supplement\/2988\/712147\/A-framework-for-sound-source-separation-using"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2007,11,1]]},"references-count":0,"journal-issue":{"issue":"5_Supplement","published-print":{"date-parts":[[2007,11,1]]}},"URL":"https:\/\/doi.org\/10.1121\/1.2942659","relation":{},"ISSN":["0001-4966","1520-8524"],"issn-type":[{"value":"0001-4966","type":"print"},{"value":"1520-8524","type":"electronic"}],"subject":[],"published-other":{"date-parts":[[2007,11]]},"published":{"date-parts":[[2007,11,1]]}}}