{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2022,6,2]],"date-time":"2022-06-02T02:41:11Z","timestamp":1654137671249},"reference-count":18,"publisher":"IGI Global","issue":"4","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2013,10,1]]},"abstract":"<p>Recorded speech signals convey information not only about the speakers' identity and the spoken language, but also about the acquisition devices used for their recording. Therefore, it is reasonable to perform acquisition device identification by analyzing the recorded speech signal. To this end, recording-level spectral, cepstral, and fused spectral-cepstral features are employed as suitable representations for device identification. The feature vectors extracted from the training speech recordings are used to form overcomplete dictionaries for the devices. Each test feature vector is represented as a linear combination of all the dictionary columns (i.e., atoms). Since the dimensionality of the feature vectors is much smaller than the number of training speech recordings, there are infinitely many representations of each test feature vector with respect to the dictionary. These representations are referred to as collaborative representations in the sense that all the dictionary atoms collaboratively represent any test feature vector. By constraining the representation either to be sparse (i.e., to have the minimum ℓ1 norm) or to have the minimum ℓ2 norm, unique collaborative representations are obtained. The classification is performed by assigning each test feature vector the device identity of the dictionary atoms yielding the minimum reconstruction error. This classification method is referred to as the sparse representation-based classifier (SRC) if the sparse collaborative representation is employed, and as the least squares collaborative representation-based classifier (LSCRC) if the minimum ℓ2 norm regularized collaborative representation is used for reconstructing the test sample. By employing the LSCRC, a state-of-the-art identification accuracy of 97.67% is obtained on a set of 8 telephone handsets from the Lincoln-Labs Handset Database.<\/p>","DOI":"10.4018\/ijdcf.2013100101","type":"journal-article","created":{"date-parts":[[2014,3,21]],"date-time":"2014-03-21T13:53:20Z","timestamp":1395410000000},"page":"1-14","source":"Crossref","is-referenced-by-count":1,"title":["Telephone Handset Identification by Collaborative Representations"],"prefix":"10.4018","volume":"5","author":[{"given":"Yannis","family":"Panagakis","sequence":"first","affiliation":[{"name":"Department of Informatics, Aristotle University of Thessaloniki, Thessaloniki, Greece"}]},{"given":"Constantine","family":"Kotropoulos","sequence":"additional","affiliation":[{"name":"Department of Informatics, Aristotle University of Thessaloniki, Thessaloniki, Greece"}]}],"member":"2432","reference":[{"key":"ijdcf.2013100101-0","doi-asserted-by":"crossref","unstructured":"Bingham, E., & Mannila, H. (2001). Random projection in dimensionality reduction: Applications to image and text data. In Proc. 7th ACM Int. Conf. Knowledge Discovery and Data Mining (pp. 245-250). San Francisco, CA.","DOI":"10.1145\/502512.502546"},{"key":"ijdcf.2013100101-1","doi-asserted-by":"publisher","DOI":"10.1109\/TIT.2005.858979"},{"key":"ijdcf.2013100101-2","doi-asserted-by":"publisher","DOI":"10.1145\/1961189.1961199"},{"key":"ijdcf.2013100101-3","unstructured":"Chi, Y., & Porikli, F. (2012). Connecting the dots in multi-class classification: From nearest subspace to collaborative representation. In Proc. 2012 IEEE Conf. Computer Vision and Pattern Recognition, Washington, DC (pp. 3602-3609)."},{"key":"ijdcf.2013100101-4","doi-asserted-by":"publisher","DOI":"10.1002\/cpa.20131"},{"key":"ijdcf.2013100101-5","doi-asserted-by":"publisher","DOI":"10.1038\/scientificamerican0608-66"},{"key":"ijdcf.2013100101-6","doi-asserted-by":"crossref","unstructured":"Garcia-Romero, D., & Espy-Wilson, C. Y. (2010). Automatic acquisition device identification from speech recordings. In Proc. 2010 IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Dallas, TX (pp. 1806-1809).","DOI":"10.1109\/ICASSP.2010.5495407"},{"key":"ijdcf.2013100101-7","doi-asserted-by":"publisher","DOI":"10.1109\/TIFS.2011.2178403"},{"key":"ijdcf.2013100101-8","doi-asserted-by":"crossref","unstructured":"Kraetzer, C., Oermann, A., Dittmann, J., & Lang, A. (2007). Digital audio forensics: A first practical evaluation on microphone and environment classification. In Proc. 9th ACM Workshop Multimedia and Security, Dallas, TX (pp. 63-74).","DOI":"10.1145\/1288869.1288879"},{"key":"ijdcf.2013100101-9","doi-asserted-by":"publisher","DOI":"10.1109\/MSP.2008.931080"},{"key":"ijdcf.2013100101-10","doi-asserted-by":"crossref","unstructured":"Malik, H., & Farid, H. (2010). Audio forensics from acoustic reverberation. In Proc. 2010 IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Dallas, TX (pp. 1710-1713).","DOI":"10.1109\/ICASSP.2010.5495479"},{"key":"ijdcf.2013100101-11","doi-asserted-by":"crossref","unstructured":"Oermann, A., Lang, A., & Dittmann, J. (2005). Verifier-tuple for audio forensics to determine speaker environment. In Proc. 7th ACM Workshop on Multimedia and Security, New York, NY (pp. 57-62).","DOI":"10.1145\/1073170.1073181"},{"key":"ijdcf.2013100101-12","doi-asserted-by":"crossref","unstructured":"Panagakis, Y., & Kotropoulos, C. (2012a). Automatic telephone handset identification by sparse representation of random spectral features. In Proc. 2012 ACM Multimedia and Security, Coventry, UK (pp. 91-96).","DOI":"10.1145\/2361407.2361422"},{"key":"ijdcf.2013100101-13","doi-asserted-by":"crossref","unstructured":"Panagakis, Y., & Kotropoulos, C. (2012b). Telephone handset identification by feature selection and sparse representations. In Proc. 2012 IEEE Int. Workshop Information Forensics and Security, Tenerife, Spain (pp. 73-78).","DOI":"10.1109\/WIFS.2012.6412628"},{"key":"ijdcf.2013100101-14","doi-asserted-by":"crossref","unstructured":"Reynolds, D. (1997). HTIMIT and LLHDB: Speech corpora for the study of handset transducer effects. In Proc. 1997 IEEE Int. Conf. Acoustics, Speech, and Signal Processing, Munich, Germany (pp. 1535-1538).","DOI":"10.1109\/ICASSP.1997.596243"},{"key":"ijdcf.2013100101-15","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2008.79"},{"key":"ijdcf.2013100101-16","doi-asserted-by":"crossref","unstructured":"Yang, R., Qu, Z., & Huang, J. (2008). Detecting digital audio forgeries by checking frame offsets. In Proc. 10th ACM Workshop on Multimedia and Security, New York, NY (pp. 21-26).","DOI":"10.1145\/1411328.1411334"},{"key":"ijdcf.2013100101-17","doi-asserted-by":"crossref","unstructured":"Zhang, L., Yang, M., & Xiangchu, F. (2011). Sparse representation or collaborative representation: Which helps face recognition? In Proc. 2011 Int. Conf. Computer Vision, Washington, DC (pp. 471-478).","DOI":"10.1109\/ICCV.2011.6126277"}],"container-title":["International Journal of Digital Crime and Forensics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.igi-global.com\/viewtitle.aspx?TitleId=103934","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,6,2]],"date-time":"2022-06-02T02:26:53Z","timestamp":1654136813000},"score":1,"resource":{"primary":{"URL":"https:\/\/services.igi-global.com\/resolvedoi\/resolve.aspx?doi=10.4018\/ijdcf.2013100101"}},"subtitle":[""],"short-title":[],"issued":{"date-parts":[[2013,10,1]]},"references-count":18,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2013,10]]}},"URL":"https:\/\/doi.org\/10.4018\/ijdcf.2013100101","relation":{},"ISSN":["1941-6210","1941-6229"],"issn-type":[{"value":"1941-6210","type":"print"},{"value":"1941-6229","type":"electronic"}],"subject":[],"published":{"date-parts":[[2013,10,1]]}}}