{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,3]],"date-time":"2026-02-03T22:58:03Z","timestamp":1770159483992,"version":"3.49.0"},"reference-count":18,"publisher":"SAGE Publications","issue":"2","license":[{"start":{"date-parts":[[2020,7,3]],"date-time":"2020-07-03T00:00:00Z","timestamp":1593734400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/journals.sagepub.com\/page\/policies\/text-and-data-mining-license"}],"content-domain":{"domain":["journals.sagepub.com"],"crossmark-restriction":true},"short-container-title":["Journal of Intelligent &amp; Fuzzy Systems"],"published-print":{"date-parts":[[2020,8,31]]},"abstract":"<jats:p>If there are more external interference factors in the process of intelligent recognition in English, the recognition accuracy will be greatly reduced. It is of great academic value and application significance to deeply study feature recognition of English part-of-speech and realize automatic image processing of English recognition. Based on unsupervised machine learning and image recognition technology, this study combines the actual factors of English recognition to set the corresponding influencing factors and proposes a reliable method to identify multi-body rotating characters. This method utilizes the principle of the periodic characteristics of the trajectory rotation on the feature space. Moreover, this study conducts a comparative analysis of recognition accuracy by comparative experiments. In addition, this paper analyzes the recognition principles of 4 fonts in detail. 
The research results show that the proposed method has certain effects and can provide theoretical reference for subsequent related research.<\/jats:p>","DOI":"10.3233\/jifs-179960","type":"journal-article","created":{"date-parts":[[2020,7,3]],"date-time":"2020-07-03T13:46:51Z","timestamp":1593784011000},"page":"1891-1901","update-policy":"https:\/\/doi.org\/10.1177\/sage-journals-update-policy","source":"Crossref","is-referenced-by-count":1,"title":["Analysis of the characteristics of English part of speech based on unsupervised machine learning and image recognition model"],"prefix":"10.1177","volume":"39","author":[{"given":"Pengpeng","family":"Li","sequence":"first","affiliation":[{"name":"Cangzhou Normal University, Cangzhou, Hebei, China"}]},{"given":"Shuai","family":"Jiang","sequence":"additional","affiliation":[{"name":"Cangzhou Normal University, Cangzhou, Hebei, China"}]}],"member":"179","published-online":{"date-parts":[[2020,7,3]]},"reference":[{"issue":"99","key":"e_1_3_1_2_2","first-page":"1","article-title":"An Investigation on the Accuracy of Truncated DKLT Representation for Speaker Identification With Short Sequences of Speech Frames[J]","author":"Biagetti G.","year":"2016","unstructured":"BiagettiG., CrippaP., FalaschettiL., et al., An Investigation on the Accuracy of Truncated DKLT Representation for Speaker Identification With Short Sequences of Speech Frames[J], IEEE Transactions on CyberneticsPP(99) (2016), 1\u201315.","journal-title":"IEEE Transactions on Cybernetics"},{"key":"e_1_3_1_3_2","doi-asserted-by":"publisher","DOI":"10.1121\/1.4976056"},{"key":"e_1_3_1_4_2","unstructured":"BarrozoT.F. PagannevesL.O. PinheirodS.J. et al. Sensitivity and specificity of the Percentage of Consonants Correct-Revised in the identification of speech sound disorder.[J] Codas (2017) 29."},{"key":"e_1_3_1_5_2","doi-asserted-by":"crossref","unstructured":"MoctezumaL.A. Torres-Garc\u00edaA.A. Villase\u00f1or-PinedaL. et al. 
Subjects Identification using EEG-recorded Imagined Speech[J] Expert Systems with Applications 2018.","DOI":"10.1016\/j.eswa.2018.10.004"},{"key":"e_1_3_1_6_2","doi-asserted-by":"crossref","unstructured":"SardarV.M. and ShirbahadurkarS.D. Speaker identification of whispering speech: an investigation on selected timbrel features and KNN distance measures[J] International Journal of Speech Technology 2018.","DOI":"10.1007\/s10772-018-9527-4"},{"key":"e_1_3_1_7_2","doi-asserted-by":"crossref","unstructured":"PrasadS. and PrasadR. Fusion Multistyle Training for Speaker Identification of Disguised Speech[J] Wireless Personal Communications 2018.","DOI":"10.1007\/s11277-018-6057-y"},{"key":"e_1_3_1_8_2","doi-asserted-by":"crossref","unstructured":"PogrebnyakovN. Unsupervised domain-agnostic identification of product names in social media posts[J]. 2018.","DOI":"10.1109\/BigData.2018.8622119"},{"key":"e_1_3_1_9_2","doi-asserted-by":"publisher","DOI":"10.1037\/h0101810"},{"key":"e_1_3_1_10_2","doi-asserted-by":"crossref","unstructured":"KaminskiK.S. and SporerS.L. Observers\u2019 Judgments of Identification Accuracy are Affected by Non-Valid Cues: A Brunswikian Lens Model Analysis[J] European Journal of Social Psychology 2017.","DOI":"10.1002\/ejsp.2293"},{"key":"e_1_3_1_11_2","doi-asserted-by":"publisher","DOI":"10.1049\/iet-spr.2016.0731"},{"key":"e_1_3_1_12_2","unstructured":"XueT. and De-GenH. Mixed Model-based Method for Chinese Simple Noun Phrase Recognition[J] Journal of Chinese Computer Systems 2017."},{"issue":"52","key":"e_1_3_1_13_2","first-page":"32","article-title":"Identification Features Analysis in Speech Data Using Gmm-Ubm Speaker Verification System[J]","volume":"3","author":"Rakhmanenko I.A.","year":"2017","unstructured":"RakhmanenkoI.A. 
and MeshcheryakovR.V., Identification Features Analysis in Speech Data Using Gmm-Ubm Speaker Verification System[J], Tr Spiiran3(52) (2017), 32\u201350.","journal-title":"Tr Spiiran"},{"key":"e_1_3_1_14_2","doi-asserted-by":"publisher","DOI":"10.1142\/S0218194017500346"},{"key":"e_1_3_1_15_2","doi-asserted-by":"crossref","unstructured":"ChaudhariP.R. and AlexJ.S.R. Evaluation of Cepstral Features of Speech for Person Identification System Under Noisy Environment[J]. 2018.","DOI":"10.1007\/978-981-10-8354-9_17"},{"key":"e_1_3_1_16_2","doi-asserted-by":"publisher","DOI":"10.17116\/sudmed201760425-28"},{"key":"e_1_3_1_17_2","doi-asserted-by":"crossref","unstructured":"LiH. LiaoC. HuG. et al. A Cyclic Cascaded CRFs Model for Opinion Targets Identification Based on Rules and Statistics[J]. 2017.","DOI":"10.1007\/978-3-319-48390-0_27"},{"key":"e_1_3_1_18_2","unstructured":"BartzC. YangH. and MeinelC. STN-OCR: A single Neural Network for Text Detection and Text Recognition[J]. 2017."},{"key":"e_1_3_1_19_2","unstructured":"WuY.C. YinF. ZhangX.Y. et al. SCAN: Sliding Convolutional Attention Network for Scene Text Recognition[J]. 
2018."}],"container-title":["Journal of Intelligent &amp; Fuzzy Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.3233\/JIFS-179960","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/full-xml\/10.3233\/JIFS-179960","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/journals.sagepub.com\/doi\/pdf\/10.3233\/JIFS-179960","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,2,3]],"date-time":"2026-02-03T09:54:16Z","timestamp":1770112456000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/10.3233\/JIFS-179960"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,7,3]]},"references-count":18,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2020,8,31]]}},"alternative-id":["10.3233\/JIFS-179960"],"URL":"https:\/\/doi.org\/10.3233\/jifs-179960","relation":{},"ISSN":["1064-1246","1875-8967"],"issn-type":[{"value":"1064-1246","type":"print"},{"value":"1875-8967","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,7,3]]}}}