{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,2]],"date-time":"2026-02-02T15:15:11Z","timestamp":1770045311067,"version":"3.49.0"},"reference-count":5,"publisher":"SAGE Publications","issue":"2","content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IFS"],"published-print":{"date-parts":[[2021,9,15]]},"abstract":"<jats:p>In the task of person re-identification (reID), pedestrians often move across multiple camera areas, their motion direction and behavior cannot be constrained, and irrelevant people or objects in different scenes interfere with the collection of target pedestrian information. At the same time, surveillance systems have characteristics such as a fixed shooting angle for each camera, differing angles between cameras, and low image resolution. These characteristics make person re-identification difficult. This paper proposes a Multi-level Feature Extraction Network (MFEN) based on SEResNet-50. Extracting richer and more diverse pedestrian features from poor-quality images effectively improves the re-identification ability of the network, and MFEN obtains multi-stage key features in the image through the Feature Re-extraction Method (FRM) proposed in this paper. 
Experiments show that, compared with AANet-50, MFEN achieves improvements of 3.85%\/0.71% in mAP\/Rank-1 on the Market1501 dataset and 2.74%\/1.28% in mAP\/Rank-1 on the DukeMTMC-reID dataset.<\/jats:p>","DOI":"10.3233\/jifs-211456","type":"journal-article","created":{"date-parts":[[2021,6,26]],"date-time":"2021-06-26T05:21:52Z","timestamp":1624684912000},"page":"4187-4201","source":"Crossref","is-referenced-by-count":0,"title":["Multi-level feature extraction network for person re-identification"],"prefix":"10.1177","volume":"41","author":[{"given":"Yang","family":"Ge","sequence":"first","affiliation":[{"name":"Advanced Institute of Natural Sciences, Beijing Normal University at Zhuhai, China"},{"name":"Key Laboratory of Intelligent Multimedia Technology, Beijing Normal University (Zhuhai Campus), Zhuhai, China"},{"name":"Engineering Lab on Intelligent Perception for Internet of Things (ELIP), Shenzhen Graduate School, Peking University, Shenzhen, China"}]},{"given":"Ding","family":"Xin","sequence":"additional","affiliation":[{"name":"Advanced Institute of Natural Sciences, Beijing Normal University at Zhuhai, China"},{"name":"Key Laboratory of Intelligent Multimedia Technology, Beijing Normal University (Zhuhai Campus), Zhuhai, China"}]}],"member":"179","reference":[{"key":"10.3233\/JIFS-211456_ref6","doi-asserted-by":"crossref","first-page":"53","DOI":"10.1016\/j.patcog.2019.05.028","article-title":"AlignedReID++: Dynamically matching local information for person re-identification[J]","volume":"94","author":"Luo","year":"2019","journal-title":"Pattern Recognition"},{"key":"10.3233\/JIFS-211456_ref9","doi-asserted-by":"crossref","first-page":"51","DOI":"10.1016\/j.jvcir.2019.01.010","article-title":"Spherereid: Deep hypersphere manifold embedding for person re-identification[J]","volume":"60","author":"Fan","year":"2019","journal-title":"Journal of Visual Communication and Image Representation"},{"issue":"7","key":"10.3233\/JIFS-211456_ref15","doi-asserted-by":"crossref","first-page":"1655","DOI":"10.1109\/TPAMI.2018.2846566","article-title":"Fine-tuning CNN image retrieval with no human annotation[J]","volume":"41","author":"Radenovi\u0107","year":"2018","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"10.3233\/JIFS-211456_ref27","first-page":"1","article-title":"Semantic Part Constraint for Person Re-identification[J]","volume":"41","author":"Ying","year":"2020","journal-title":"Journal of Electronics & Information Technology"},{"key":"10.3233\/JIFS-211456_ref28","doi-asserted-by":"publisher","DOI":"10.13229\/j.cnki.jdxbgxb20210007"}],"container-title":["Journal of Intelligent &amp; Fuzzy Systems"],"original-title":[],"link":[{"URL":"https:\/\/content.iospress.com\/download?id=10.3233\/JIFS-211456","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,2,2]],"date-time":"2026-02-02T03:26:00Z","timestamp":1770002760000},"score":1,"resource":{"primary":{"URL":"https:\/\/journals.sagepub.com\/doi\/full\/10.3233\/JIFS-211456"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,9,15]]},"references-count":5,"journal-issue":{"issue":"2"},"URL":"https:\/\/doi.org\/10.3233\/jifs-211456","relation":{},"ISSN":["1064-1246","1875-8967"],"issn-type":[{"value":"1064-1246","type":"print"},{"value":"1875-8967","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,9,15]]}}}