{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,6]],"date-time":"2026-04-06T21:19:10Z","timestamp":1775510350896,"version":"3.50.1"},"reference-count":35,"publisher":"Walter de Gruyter GmbH","issue":"1","license":[{"start":{"date-parts":[[2021,1,1]],"date-time":"2021-01-01T00:00:00Z","timestamp":1609459200000},"content-version":"unspecified","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2021,7,20]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>The dependency of a speech recognition system on the accent of a user leads to the variation in its performance, as the people from different backgrounds have different accents. Accent labeling and conversion have been reported as a prospective solution for the challenges faced in language learning and various other voice-based advents. In the English TTS system, the accent labeling of unregistered words is another very important link besides the phonetic conversion. Since the importance of the primary stress is much greater than that of the secondary stress, and the primary stress is easier to call than the secondary stress, the labeling of the primary stress is separated from the secondary stress. In this work, the labeling of primary accents uses a labeling algorithm that combines morphological rules and machine learning; the labeling of secondary accents is done entirely through machine learning algorithms. After 10 rounds of cross-validation, the average tagging accuracy rate of primary stress was 94%, the average tagging accuracy rate of secondary stress was 94%, and the total tagging accuracy rate was 83.6%. 
This perceptual study separates the labeling of primary and secondary accents, providing promising outcomes.<\/jats:p>","DOI":"10.1515\/jisys-2020-0144","type":"journal-article","created":{"date-parts":[[2021,7,20]],"date-time":"2021-07-20T17:10:53Z","timestamp":1626801053000},"page":"881-892","source":"Crossref","is-referenced-by-count":4,"title":["Accent labeling algorithm based on morphological rules and machine learning in English conversion system"],"prefix":"10.1515","volume":"30","author":[{"given":"Xiaofeng","family":"Liu","sequence":"first","affiliation":[{"name":"Department of Aircraft Maintenance, Sichuan Southwest Vocational College of Civil Aviation , Chengdu 610000 , Sichuan Province , China"}]},{"given":"Pradeep Kumar","family":"Singh","sequence":"additional","affiliation":[{"name":"Department of Computer Science, KIET Group of Institutions , Delhi-NCR , Ghaziabad, UP , India"}]},{"given":"Pljonkin Anton","family":"Pavlovich","sequence":"additional","affiliation":[{"name":"Institute of Computer Technologies and Information Security , Southern Federal University , Russia"}]}],"member":"374","published-online":{"date-parts":[[2021,7,20]]},"reference":[{"key":"2025120523322269545_j_jisys-2020-0144_ref_001","doi-asserted-by":"crossref","unstructured":"Vojir S, Zeman V, Kuchar J, Kliegr T. Easyminer.eu: web framework for interpretable machine learning based on rules and frequent itemsets. Knowl Based Syst. 2018;150(JUN 15):111\u20135.","DOI":"10.1016\/j.knosys.2018.03.006"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_002","doi-asserted-by":"crossref","unstructured":"Zhao F, Chen Y, Hou Y, He X. Segmentation of blood vessels using rule-based and machine-learning-based methods: a review. Multimed Syst. 2019;25(2):109\u201318.","DOI":"10.1007\/s00530-017-0580-7"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_003","doi-asserted-by":"crossref","unstructured":"Rodellar J, Alf\u00e9rez S, Acevedo A, Molina A, Merino A. 
Image processing and machine learning in the morphological analysis of blood cells. Int J Lab Hematol. 2018;40:46\u201353.","DOI":"10.1111\/ijlh.12818"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_004","doi-asserted-by":"crossref","unstructured":"Kaur G, Bhardwaj N, Singh PK. An analytic review on image enhancement techniques based on soft computing approach. In: Urooj S, Virmani J, editors. Sensors and image processing. Advances in intelligent systems and computing. Vol. 651, Singapore: Springer; 2018. 10.1007\/978-981-10-6614-6_26.","DOI":"10.1007\/978-981-10-6614-6_26"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_005","doi-asserted-by":"crossref","unstructured":"Sharma A, Tomar R, Chilamkurti N, Kim BG. Blockchain based smart contracts for internet of medical things in e-healthcare. Electronics. 2020;9(10):1609.","DOI":"10.3390\/electronics9101609"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_006","doi-asserted-by":"crossref","unstructured":"Rabbani A, Babaei M. Hybrid pore-network and lattice-boltzmann permeability modelling accelerated by machine learning. Adv Water Resour. 2019;126(APR):116\u201328.","DOI":"10.1016\/j.advwatres.2019.02.012"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_007","doi-asserted-by":"crossref","unstructured":"Ly C, Olsen AM, Schwerdt IJ, Porter R, Sentz K, Mcdonald LW, et al. A new approach for quantifying morphological features of u3o8 for nuclear forensics using a deep learning model. J Nucl Mater. 2019;517:128\u201337.","DOI":"10.1016\/j.jnucmat.2019.01.042"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_008","doi-asserted-by":"crossref","unstructured":"Tanwar S, Bhatia Q, Patel P, Kumari A, Singh PK, Hong W. Machine learning adoption in blockchain-based smart applications: the challenges, and a way forward. IEEE Access. 2020;8:474\u201388. 
10.1109\/ACCESS.2019.2961372.","DOI":"10.1109\/ACCESS.2019.2961372"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_009","doi-asserted-by":"crossref","unstructured":"Zhao Y, Ren W, Li Z. An accent marking algorithm of english conversion system based on morphological rules. Int J Emerg Technol Learn. 2021;16(1):234\u201346.","DOI":"10.3991\/ijet.v16i01.19717"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_010","doi-asserted-by":"crossref","unstructured":"Cruz-Benito J, V\u00e1zquez-Ingelmo A, S\u00e1nchez-Prieto JC, Ther\u00f3n R, Garc\u00eda-Pealvo FJ, Mart\u00edn-Gonz\u00e1lez M. Enabling adaptability in web forms based on user characteristics detection through a\/b testing and machine learning. IEEE Access. 2018;6:2251\u201365.","DOI":"10.1109\/ACCESS.2017.2782678"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_011","doi-asserted-by":"crossref","unstructured":"Fan G, Shi W, Guo L, Zeng J, Gui G. Machine learning based quantitative association rule mining method for evaluating cellular network performance. IEEE Access. 2019;7:1.","DOI":"10.1109\/ACCESS.2019.2953943"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_012","doi-asserted-by":"crossref","unstructured":"Tania MH, Lwin KT, Shabut AM, Najlah M, Chin J, Hossain MA. Intelligent image-based colourimetric tests using machine learning framework for lateral flow assays. Expert Syst Appl. 2020;139:112843.1\u201322.","DOI":"10.1016\/j.eswa.2019.112843"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_013","unstructured":"Stantic D, Jo J. Accent identification by clustering and scoring formants. Int J Comput Syst Eng. 2012;6(3):379\u201384."},{"key":"2025120523322269545_j_jisys-2020-0144_ref_014","unstructured":"Kumar P, Chandra M. Speaker identification using Gaussian mixture models. MIT IJECE. 2011;1(1):27\u201330."},{"key":"2025120523322269545_j_jisys-2020-0144_ref_015","doi-asserted-by":"crossref","unstructured":"Tang H, Ghorbani AA. 
Accent classification using support vector machine and hidden markov model. Conference of the Canadian Society for Computational Studies of Intelligence. Berlin, Heidelberg: Springer; 2003 June. p. 629\u201331.","DOI":"10.1007\/3-540-44886-1_65"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_016","unstructured":"Novich S, Trevino A. Introduction to accent classification with neural networks. Rice University; 2010."},{"key":"2025120523322269545_j_jisys-2020-0144_ref_017","doi-asserted-by":"crossref","unstructured":"Behravan H, Hautam\u00e4ki V, Kinnunen T. Foreign accent detection from spoken Finnish using i-vectors. Lyon, France: International Speech Communication Association (ISCA)-INTERSPEECH, Vol. 2013, 2013 Aug. p. 14.","DOI":"10.21437\/Interspeech.2013-42"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_018","doi-asserted-by":"crossref","unstructured":"Chen T, Huang C, Chang E, Wang J. On the use of Gaussian mixture model for speaker variability analysis. Seventh International Conference on Spoken Language Processing. Germany: Causal Productions Pty; 2002.","DOI":"10.21437\/ICSLP.2002-385"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_019","unstructured":"Tsai WH, Wang HM. Towards automatic identification of singing language in popular music recordings. Barcelona, Spain: International Society for Music Information Retrieval; 2004 Oct."},{"key":"2025120523322269545_j_jisys-2020-0144_ref_020","unstructured":"Kruspe AM, Fraunhofer IDMT. Keyword spotting in a-capella singing. Taipei, Taiwan: International Society for Music Information Retrieval, Vol. 14, 2014 Oct. p. 271\u20136."},{"key":"2025120523322269545_j_jisys-2020-0144_ref_021","unstructured":"Nichols E, Morris D, Basu S, Raphael C. Relationships between lyrics and melody in popular music. 
New York City, USA: International Society for Music Information Retrieval; 2016."},{"key":"2025120523322269545_j_jisys-2020-0144_ref_022","doi-asserted-by":"crossref","unstructured":"Jiao Y, Tu M, Berisha V, Liss JM. Accent identification by combining deep neural networks and recurrent neural networks trained on long and short term features. San Francisco, CA, USA: International Speech Communication Association; 2016 Sept. p. 2388\u201392.","DOI":"10.21437\/Interspeech.2016-1148"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_023","unstructured":"Kim S, Jung H. A study on the utilization of speech recognition technology in foreign language learning applications-focusing on English and French speech. J Digit Contents Soc. 2018;19(4):621\u201330."},{"key":"2025120523322269545_j_jisys-2020-0144_ref_024","doi-asserted-by":"crossref","unstructured":"Bird JJ, Wanner E, Ek\u00e1rt A, Faria DR. Accent classification in human speech biometrics for native and non-native english speakers. Proceedings of the 12th ACM International Conference on PErvasive Technologies Related to Assistive Environments. New York, NY, United States: Association for Computing Machinery; 2019 June. p. 554\u201360.","DOI":"10.1145\/3316782.3322780"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_025","doi-asserted-by":"crossref","unstructured":"Zhao G, Sonsaat S, Levis J, Chukharev-Hudilainen E, Gutierrez-Osuna R. Accent conversion using phonetic posteriorgrams. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). United States: Institute of Electrical and Electronics Engineers Inc.; 2018 April. p. 5314\u20138.","DOI":"10.1109\/ICASSP.2018.8462258"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_026","doi-asserted-by":"crossref","unstructured":"Sun L, Li K, Wang H, Kang S, Meng H. Phonetic posteriorgrams for many-to-one voice conversion without parallel data training. 2016 IEEE International Conference on Multimedia and Expo (ICME). 
Seattle, WA, USA: Institute of Electrical and Electronics Engineers; 2016 July. p. 1\u20136.","DOI":"10.1109\/ICME.2016.7552917"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_027","doi-asserted-by":"crossref","unstructured":"Chen X, Chu W, Guo J, Xu N. Singing voice conversion with non-parallel data. 2019 IEEE Conference on Multimedia Information Processing and Retrieval (MIPR). San Jose, California, USA: Brazilian Ministry of Education; 2019 March. p. 292\u20136.","DOI":"10.1109\/MIPR.2019.00059"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_028","doi-asserted-by":"crossref","unstructured":"Fang F, Yamagishi J, Echizen I, Lorenzo-Trueba J. High-quality nonparallel voice conversion based on cycle-consistent adversarial network. 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). United States: Institute of Electrical and Electronics Engineers Inc.; 2018 April. p. 5279\u201383.","DOI":"10.1109\/ICASSP.2018.8462342"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_029","doi-asserted-by":"crossref","unstructured":"Yeh CC, Hsu PC, Chou JC, Lee HY, Lee LS. Rhythm-flexible voice conversion without parallel data using cycle-gan over phoneme posteriorgram sequences. In 2018 IEEE Spoken Language Technology Workshop (SLT). United States: IEEE; 2018 Dec. p. 274\u201381.","DOI":"10.1109\/SLT.2018.8639647"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_030","doi-asserted-by":"crossref","unstructured":"Upadhyay R, Lui S. Foreign English accent classification using deep belief networks. 2018 IEEE 12th International Conference on Semantic Computing (ICSC). United States: IEEE; 2018 Jan. p. 290\u20133.","DOI":"10.1109\/ICSC.2018.00053"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_031","doi-asserted-by":"crossref","unstructured":"Parikh P, Velhal K, Potdar S, Sikligar A, Karani R. English language accent classification and conversion using machine learning. Singapore: Springer; 2020. 
Available at SSRN 3600748.","DOI":"10.2139\/ssrn.3600748"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_032","doi-asserted-by":"crossref","unstructured":"Zhang A, Ni C. Pitch accent prediction using ensemble machine learning. 2009 Second International Conference on Intelligent Computation Technology and Automation. Vol. 1, NW Washington, DC, United States: IEEE Computer Society 1730 Massachusetts Ave.; 2009, October. p. 444\u20137.","DOI":"10.1109\/ICICTA.2009.114"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_033","doi-asserted-by":"crossref","unstructured":"Feng N, Sun B. On simulating one-trial learning using morphological neural networks. Cognit Syst Res. 2018;53:61\u201370.","DOI":"10.1016\/j.cogsys.2018.05.003"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_034","doi-asserted-by":"crossref","unstructured":"Sutton J, Mahajan R, Akbilgic O, Kamaleswaran R. Physonline: an open source machine learning pipeline for real-time analysis of streaming physiological waveform. IEEE J Biomed Health Inform. 2018;99:1.","DOI":"10.1109\/JBHI.2018.2832610"},{"key":"2025120523322269545_j_jisys-2020-0144_ref_035","doi-asserted-by":"crossref","unstructured":"Lou Z, Ren Y. Investigating issues with machine learning for accent classification. J Phys Conf Ser. 2021;1738(1):012111. 
IOP Publishing.","DOI":"10.1088\/1742-6596\/1738\/1\/012111"}],"container-title":["Journal of Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.degruyterbrill.com\/document\/doi\/10.1515\/jisys-2020-0144\/xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.degruyterbrill.com\/document\/doi\/10.1515\/jisys-2020-0144\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,12,5]],"date-time":"2025-12-05T23:33:13Z","timestamp":1764977593000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.degruyterbrill.com\/document\/doi\/10.1515\/jisys-2020-0144\/html"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,1,1]]},"references-count":35,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2021,9,22]]},"published-print":{"date-parts":[[2021,9,22]]}},"alternative-id":["10.1515\/jisys-2020-0144"],"URL":"https:\/\/doi.org\/10.1515\/jisys-2020-0144","relation":{},"ISSN":["2191-026X"],"issn-type":[{"value":"2191-026X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,1,1]]}}}