{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,5,6]],"date-time":"2025-05-06T10:49:26Z","timestamp":1746528566557,"version":"3.37.3"},"reference-count":61,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2024,3,1]],"date-time":"2024-03-01T00:00:00Z","timestamp":1709251200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2024,3,1]],"date-time":"2024-03-01T00:00:00Z","timestamp":1709251200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61977013"],"award-info":[{"award-number":["61977013"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2024,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p><jats:bold>L<\/jats:bold>earning with <jats:bold>N<\/jats:bold>oisy <jats:bold>L<\/jats:bold>abels (LNL) methods aim to improve the accuracy of <jats:bold>D<\/jats:bold>eep <jats:bold>N<\/jats:bold>eural <jats:bold>N<\/jats:bold>etworks (DNNs) when the training set contains samples with noisy or incorrect labels, and have become popular in recent years. Existing popular LNL methods frequently regard samples with high learning difficulty (high-loss and low prediction probability) as noisy samples; however, irregular feature patterns from hard clean samples can also cause high learning difficulty, which can lead to the misclassification of hard clean samples as noisy samples. 
To address this insufficiency, we propose the <jats:bold>S<\/jats:bold>amples\u2019 <jats:bold>L<\/jats:bold>earning <jats:bold>R<\/jats:bold>isk-based <jats:bold>L<\/jats:bold>earning with <jats:bold>N<\/jats:bold>oisy <jats:bold>L<\/jats:bold>abels (SLRLNL) method. Specifically, we propose to separate noisy samples from hard clean samples using samples\u2019 learning risk, which represents samples\u2019 influence on DNN\u2019s accuracy. We show that samples\u2019 learning risk is comprehensively determined by samples\u2019 learning difficulty as well as samples\u2019 feature similarity to other samples, and thus, compared to existing LNL methods that solely rely on the learning difficulty, our method can better separate hard clean samples from noisy samples, since the former frequently possess irregular feature patterns. Moreover, to extract more useful information from samples with irregular feature patterns (i.e., hard samples), we further propose the <jats:bold>R<\/jats:bold>elabeling-based <jats:bold>L<\/jats:bold>abel <jats:bold>A<\/jats:bold>ugmentation (RLA) process to prevent the memorization of hard noisy samples and better learn the hard clean samples, thus enhancing the learning for hard samples. Empirical studies show that samples\u2019 learning risk can identify noisy samples more accurately, and the RLA process can enhance the learning for hard samples. To evaluate the effectiveness of our method, we compare it with popular existing LNL methods on CIFAR-10, CIFAR-100, Animal-10N, Clothing1M, and DocRED. The experimental results indicate that our method outperforms other existing methods. 
The source code for SLRLNL can be found at <jats:ext-link xmlns:xlink=\"http:\/\/www.w3.org\/1999\/xlink\" ext-link-type=\"uri\" xlink:href=\"https:\/\/github.com\/yangbo1973\/SLRLNL\">https:\/\/github.com\/yangbo1973\/SLRLNL<\/jats:ext-link>.<\/jats:p>","DOI":"10.1007\/s40747-024-01360-z","type":"journal-article","created":{"date-parts":[[2024,3,1]],"date-time":"2024-03-01T10:02:23Z","timestamp":1709287343000},"page":"4033-4054","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Separating hard clean samples from noisy samples with samples\u2019 learning risk for DNN when learning with noisy labels"],"prefix":"10.1007","volume":"10","author":[{"given":"Lihui","family":"Deng","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-0805-7928","authenticated-orcid":false,"given":"Bo","family":"Yang","sequence":"additional","affiliation":[]},{"given":"Zhongfeng","family":"Kang","sequence":"additional","affiliation":[]},{"given":"Jiajin","family":"Wu","sequence":"additional","affiliation":[]},{"given":"Shaosong","family":"Li","sequence":"additional","affiliation":[]},{"given":"Yanping","family":"Xiang","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,3,1]]},"reference":[{"key":"1360_CR1","doi-asserted-by":"publisher","unstructured":"He K, Zhang X, Ren S, Sun J ( 2016) Deep residual learning for image recognition. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR), pp 770\u2013 778 . https:\/\/doi.org\/10.48550\/arXiv.1512.03385","DOI":"10.48550\/arXiv.1512.03385"},{"key":"1360_CR2","doi-asserted-by":"publisher","unstructured":"Yao Y, Ye D, Li P, Han X, Lin Y, Liu Z, Liu Z, Huang L, Zhou J, Sun M (2019) DocRED: a large-scale document-level relation extraction dataset. In: Proceedings of the 57th annual meeting of the association for computational linguistics (ACL), pp 764\u2013777. 
https:\/\/doi.org\/10.18653\/v1\/P19-1074","DOI":"10.18653\/v1\/P19-1074"},{"key":"1360_CR3","doi-asserted-by":"publisher","unstructured":"Cheng P, Wang H, Stojanovic V, Liu F, He S, Shi K (2022) Dissipativity-based finite-time asynchronous output feedback control for wind turbine system via a hidden markov model. Int J Syst Sci 1\u201313 . https:\/\/doi.org\/10.1080\/00207721.2022.2076171","DOI":"10.1080\/00207721.2022.2076171"},{"key":"1360_CR4","doi-asserted-by":"publisher","DOI":"10.1007\/s11063-023-11189-1","author":"X Song","year":"2023","unstructured":"Song X, Wu N, Song S, Stojanovic V (2023) Switching-like event-triggered state estimation for reaction-diffusion neural networks against dos attacks. Neural Process Lett. https:\/\/doi.org\/10.1007\/s11063-023-11189-1","journal-title":"Neural Process Lett"},{"key":"1360_CR5","doi-asserted-by":"publisher","unstructured":"Zhuang Z, Tao H, Chen Y, Stojanovic V, Paszke W (2023) An optimal iterative learning control approach for linear systems with nonuniform trial lengths under input constraints. IEEE Trans Syst Man Cybern Syst 3461\u20133473 . https:\/\/doi.org\/10.1109\/TSMC.2022.3225381","DOI":"10.1109\/TSMC.2022.3225381"},{"key":"1360_CR6","doi-asserted-by":"publisher","DOI":"10.1016\/j.energy.2023.128677","volume":"284","author":"S Wang","year":"2023","unstructured":"Wang S, Wu F, Takyi-Aninakwa P, Fernandez C, Stroe D-I, Huang Q (2023) Improved singular filtering-gaussian process regression-long short-term memory model for whole-life-cycle remaining capacity estimation of lithium-ion batteries adaptive to fast aging and multi-current variations. Energy 284:128677. 
https:\/\/doi.org\/10.1016\/j.energy.2023.128677","journal-title":"Energy"},{"key":"1360_CR7","doi-asserted-by":"publisher","first-page":"108920","DOI":"10.1016\/j.ress.2022.108920","volume":"230","author":"S Wang","year":"2023","unstructured":"Wang S, Fan Y, Jin S, Takyi-Aninakwa P, Fernandez C (2023) Improved anti-noise adaptive long short-term memory neural network modeling for the robust remaining useful life prediction of lithium-ion batteries. Reliab Eng Syst Saf 230:108920. https:\/\/doi.org\/10.1016\/j.ress.2022.108920","journal-title":"Reliab Eng Syst Saf"},{"key":"1360_CR8","doi-asserted-by":"publisher","first-page":"93","DOI":"10.1016\/j.neucom.2014.09.081","volume":"160","author":"A Ghosh","year":"2015","unstructured":"Ghosh A, Manwani N, Sastry PS (2015) Making risk minimization tolerant to label noise. Neurocomputing 160:93\u2013107. https:\/\/doi.org\/10.1016\/j.neucom.2014.09.081","journal-title":"Neurocomputing"},{"key":"1360_CR9","doi-asserted-by":"publisher","unstructured":"Zhang Z, Sabuncu MR ( 2018) Generalized cross entropy loss for training deep neural networks with noisy labels. In: Proceedings of the 32nd conference on neural information processing systems (NeurIPS), pp. 8792\u2013 8802 . https:\/\/doi.org\/10.48550\/arXiv.1805.07836","DOI":"10.48550\/arXiv.1805.07836"},{"key":"1360_CR10","doi-asserted-by":"publisher","unstructured":"Zhang Y, Zheng S, Wu P, Goswami M, Chen C ( 2021) Learning with feature-dependent label noise: a progressive approach. In: International conference on learning representations (ICLR) . https:\/\/doi.org\/10.48550\/arXiv.2103.07756","DOI":"10.48550\/arXiv.2103.07756"},{"key":"1360_CR11","doi-asserted-by":"publisher","first-page":"358","DOI":"10.1016\/j.neunet.2021.03.030","volume":"139","author":"L Deng","year":"2021","unstructured":"Deng L, Yang B, Kang Z, Yang S, Wu S (2021) A noisy label and negative sample robust loss function for dnn-based distant supervised relation extraction. 
Neural Netw 139:358\u2013370. https:\/\/doi.org\/10.1016\/j.neunet.2021.03.030","journal-title":"Neural Netw"},{"key":"1360_CR12","doi-asserted-by":"publisher","unstructured":"Yingbin B, Tongliang L ( 2021) Me-momentum: extracting hard confident examples from noisily labeled data. In: Proceedings of the IEEE international conference on computer vision (ICCV), pp 9292\u20139301 . https:\/\/doi.org\/10.1109\/ICCV48922.2021.00918","DOI":"10.1109\/ICCV48922.2021.00918"},{"key":"1360_CR13","doi-asserted-by":"publisher","first-page":"112","DOI":"10.1016\/j.neucom.2022.02.030","volume":"489","author":"K Kong","year":"2022","unstructured":"Kong K, Lee J, Kwak Y, Cho Y-R, Kim S-E, Song W-J (2022) Penalty based robust learning with noisy labels. Neurocomputing 489:112\u2013127. https:\/\/doi.org\/10.1016\/j.neucom.2022.02.030","journal-title":"Neurocomputing"},{"key":"1360_CR14","doi-asserted-by":"publisher","unstructured":"Xia X, Liu T, Han B, Gong M, Yu J, Niu G, Sugiyama M (2022) Sample selection with uncertainty of losses for learning with noisy labels. In: International conference on learning representations (ICLR). https:\/\/doi.org\/10.48550\/arXiv.2106.00445","DOI":"10.48550\/arXiv.2106.00445"},{"key":"1360_CR15","unstructured":"Cheng D, Ning Y, Wang N, Gao X, Yang H, Du Y, Han B, Liu T(2022) Class-dependent label-noise learning with cycle-consistency regularization. In: Advances in neural information processing systems (NeurIPS)"},{"key":"1360_CR16","doi-asserted-by":"publisher","first-page":"881","DOI":"10.1109\/TMI.2021.3125459","volume":"41","author":"C Zhu","year":"2022","unstructured":"Zhu C, Chen W, Peng T, Wang Y, Jin M (2022) Hard sample aware noise robust learning for histopathology image classification. IEEE Trans Med Imaging 41:881\u2013894. 
https:\/\/doi.org\/10.1109\/TMI.2021.3125459","journal-title":"IEEE Trans Med Imaging"},{"key":"1360_CR17","doi-asserted-by":"publisher","unstructured":"Huang J, Qu L, Jia R, Zhao B (2019) O2u-net: a simple noisy label detection approach for deep neural networks. In: Proceedings of the IEEE\/CVF international conference on computer vision (CVPR), pp 3326\u20133334. https:\/\/doi.org\/10.1109\/ICCV.2019.00342","DOI":"10.1109\/ICCV.2019.00342"},{"key":"1360_CR18","doi-asserted-by":"publisher","unstructured":"Zheng S, Wu P, Goswami A, Goswami M, Metaxas D, Chen C (2020) Error-bounded correction of noisy labels. In: Proceedings of machine learning research (PMLR), pp 11447\u201311457 . https:\/\/doi.org\/10.48550\/arXiv.2011.10077","DOI":"10.48550\/arXiv.2011.10077"},{"key":"1360_CR19","doi-asserted-by":"publisher","first-page":"17044","DOI":"10.48550\/arXiv.2001.10528","volume":"33","author":"G Pleiss","year":"2020","unstructured":"Pleiss G, Zhang T, Elenberg ER, Weinberger KQ (2020) Identifying mislabeled data using the area under the margin ranking. Adv Neural Inf Process Syst (NeurIPS) 33:17044\u201317056. https:\/\/doi.org\/10.48550\/arXiv.2001.10528","journal-title":"Adv Neural Inf Process Syst (NeurIPS)"},{"key":"1360_CR20","doi-asserted-by":"publisher","unstructured":"Wang Q, Han B, Liu T, Niu G, Yang J, Gong C (2021) Tackling instance-dependent label noise via a universal probabilistic model. In: Proceedings of the 35th AAAI conference on artificial intelligence. https:\/\/doi.org\/10.48550\/arXiv.2101.05467","DOI":"10.48550\/arXiv.2101.05467"},{"key":"1360_CR21","doi-asserted-by":"publisher","first-page":"447","DOI":"10.1109\/TPAMI.2015.2456899","volume":"38","author":"T Liu","year":"2016","unstructured":"Liu T, Tao D (2016) Classification with noisy labels by importance reweighting. IEEE Trans Pattern Anal Mach Intell 38:447\u2013461. 
https:\/\/doi.org\/10.1109\/TPAMI.2015.2456899","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1360_CR22","doi-asserted-by":"publisher","unstructured":"Han B, Yao Q, Yu X, Niu G, Xu M, Hu W, Tsang I, Sugiyama M (2018) Co-teaching: robust training of deep neural networks with extremely noisy labels. In: Advances in neural information processing systems (NeurIPS), pp 8535\u20138545. https:\/\/doi.org\/10.48550\/arXiv.1804.06872","DOI":"10.48550\/arXiv.1804.06872"},{"key":"1360_CR23","doi-asserted-by":"publisher","unstructured":"Arazo E, Ortego D, Albert P, O\u2019Connor N, McGuinness K (2019) Unsupervised label noise modeling and loss correction. In: Proceedings of the 36th international conference on machine learning (ICML), pp 312\u2013321. https:\/\/doi.org\/10.48550\/arXiv.1904.11238","DOI":"10.48550\/arXiv.1904.11238"},{"key":"1360_CR24","doi-asserted-by":"publisher","unstructured":"Wang Y, Ma X, Chen Z, Luo Y, Yi J, Bailey J (2019) symmetric cross entropy for robust learning with noisy labels. In: Proceedings of the IEEE\/CVF international conference on computer vision (ICCV), pp 322\u2013330. https:\/\/doi.org\/10.1109\/ICCV.2019.00041","DOI":"10.1109\/ICCV.2019.00041"},{"key":"1360_CR25","doi-asserted-by":"publisher","unstructured":"Yu X, Han B, Yao J, Niu G, Tsang I, Sugiyama M (2019) How does disagreement help generalization against label corruption? In: Proceedings of machine learning research (PMLR), pp 7164\u20137173 . https:\/\/doi.org\/10.48550\/arXiv.1901.04215","DOI":"10.48550\/arXiv.1901.04215"},{"key":"1360_CR26","doi-asserted-by":"publisher","first-page":"21382","DOI":"10.48550\/arXiv.2012.04835","volume":"33","author":"P Wu","year":"2020","unstructured":"Wu P, Zheng S, Goswami M, Metaxas D, Chen C (2020) A topological filter for learning with label noise. Adv Neural Inf Process Syst (NeurIPS) 33:21382\u201321393. 
https:\/\/doi.org\/10.48550\/arXiv.2012.04835","journal-title":"Adv Neural Inf Process Syst (NeurIPS)"},{"key":"1360_CR27","unstructured":"Song H, Kim M, Lee J-G (2019) Selfie: refurbishing unclean samples for robust deep learning. In: Proceedings of machine learning research (PMLR), pp 5907\u20135915"},{"key":"1360_CR28","doi-asserted-by":"publisher","unstructured":"Lee K, Yun S, Lee K, Lee H, Li B, Shin J (2019) Robust inference via generative classifiers for handling noisy labels. In: Proceedings of the 36th international conference on machine learning (ICML), Vol. 97, pp 3763\u20133772. https:\/\/doi.org\/10.48550\/arXiv.1901.11300","DOI":"10.48550\/arXiv.1901.11300"},{"key":"1360_CR29","doi-asserted-by":"publisher","unstructured":"Yi K, Wu J (2019) Probabilistic end-to-end noise correction for learning with noisy labels. In: 2019 IEEE\/CVF conference on computer vision and pattern recognition (CVPR), pp 7010\u20137018. https:\/\/doi.org\/10.1109\/CVPR.2019.00718","DOI":"10.1109\/CVPR.2019.00718"},{"key":"1360_CR30","doi-asserted-by":"publisher","unstructured":"Cheng J, Liu T, Ramamohanarao K, Tao D (2020) Learning with bounded instance- and label-dependent label noise. In: Proceedings of the 37th international conference on machine learning (ICML). https:\/\/doi.org\/10.48550\/arXiv.1709.03768","DOI":"10.48550\/arXiv.1709.03768"},{"key":"1360_CR31","doi-asserted-by":"publisher","unstructured":"Lukasik M, Bhojanapalli S, Menon A, Kumar S (2020) Does label smoothing mitigate label noise? In: Proceedings of the 37th international conference on machine learning (ICML), 6448\u20136458. https:\/\/doi.org\/10.48550\/arXiv.2003.02819","DOI":"10.48550\/arXiv.2003.02819"},{"key":"1360_CR32","doi-asserted-by":"publisher","unstructured":"Berthon A, Han B, Niu G, Liu T, Sugiyama M (2021) Confidence scores make instance-dependent label-noise learning possible. In: Proceedings of the 38th international conference on machine learning (ICML), pp 825\u2013836. 
https:\/\/doi.org\/10.48550\/arXiv.2001.03772","DOI":"10.48550\/arXiv.2001.03772"},{"key":"1360_CR33","doi-asserted-by":"publisher","unstructured":"Li J, Xiong C, Hoi SCH ( 2021) MoPro: webly supervised learning with momentum prototypes. In: International conference on learning representations (ICLR) . https:\/\/doi.org\/10.48550\/arXiv.2009.07995","DOI":"10.48550\/arXiv.2009.07995"},{"key":"1360_CR34","doi-asserted-by":"publisher","unstructured":"Zhang C, Bengio S, Hardt M, Recht B, Vinyals O (2017) Understanding deep learning requires rethinking generalization. In: International conference on learning representations (ICLR) . https:\/\/doi.org\/10.48550\/arXiv.1611.03530","DOI":"10.48550\/arXiv.1611.03530"},{"key":"1360_CR35","doi-asserted-by":"publisher","unstructured":"Arpit D, Jastrzundefinedbski S, Ballas N, Krueger D, Bengio E, Kanwal MS, Maharaj T, Fischer A, Courville A, Bengio Y, Lacoste-Julien S (2017) A closer look at memorization in deep networks. In: Proceedings of the 34th international conference on machine learning (ICML), pp 233\u2013242. https:\/\/doi.org\/10.48550\/arXiv.1706.05394","DOI":"10.48550\/arXiv.1706.05394"},{"key":"1360_CR36","doi-asserted-by":"publisher","first-page":"313","DOI":"10.1002\/widm.1132","volume":"4","author":"J Kremer","year":"2014","unstructured":"Kremer J, Steenstrup Pedersen K, Igel C (2014) Active learning with support vector machines. Data Min Knowl Disc 4:313\u2013326. https:\/\/doi.org\/10.1002\/widm.1132","journal-title":"Data Min Knowl Disc"},{"key":"1360_CR37","doi-asserted-by":"publisher","unstructured":"Harutyunyan H, Achille A, Paolini G, Majumder O, Ravichandran A, Bhotika R, Soatto S (2021) Estimating informativeness of samples with smooth unique information. In: International conference on learning representations (ICLR). 
https:\/\/doi.org\/10.48550\/arXiv.2101.06640","DOI":"10.48550\/arXiv.2101.06640"},{"key":"1360_CR38","doi-asserted-by":"crossref","unstructured":"Bengio Y, Louradour J, Collobert R, Weston J (2009) Curriculum learning. In: Proceedings of the 26th international conference on machine learning (ICML), pp 41\u2013 48","DOI":"10.1145\/1553374.1553380"},{"key":"1360_CR39","unstructured":"Settles B (2009) Active learning literature survey"},{"key":"1360_CR40","unstructured":"Krizhevsky A, Hinton G (2009) Learning multiple layers of features from tiny images. Master\u2019s thesis, Department of Computer Science, University of Toronto 1(4)"},{"key":"1360_CR41","doi-asserted-by":"publisher","unstructured":"Xiao T, Xia T, Yang Y, Huang C, Wang X (2015) Learning from massive noisy labeled data for image classification. In: Proceedings of the IEEE Conference on computer vision and pattern recognition (CVPR), pp 2691\u20132699. https:\/\/doi.org\/10.1109\/CVPR.2015.7298885","DOI":"10.1109\/CVPR.2015.7298885"},{"key":"1360_CR42","doi-asserted-by":"publisher","unstructured":"Jiang L, Zhou Z, Leung T, Li L-J, Fei-Fei L (2018) Mentornet: learning data-driven curriculum for very deep neural networks on corrupted labels. In: Proceedings of the 35th international conference on machine learning (ICML), pp 2304\u20132313. https:\/\/doi.org\/10.48550\/arXiv.1712.05055","DOI":"10.48550\/arXiv.1712.05055"},{"key":"1360_CR43","doi-asserted-by":"publisher","unstructured":"Nguyen DT, Mummadi CK, Ngo TPN, Nguyen THP, Beggel L, Brox T (2020): SELF: learning to filter noisy labels with self-ensembling. In: International conference on learning representations (ICLR) . https:\/\/doi.org\/10.48550\/arXiv.1910.01842","DOI":"10.48550\/arXiv.1910.01842"},{"key":"1360_CR44","doi-asserted-by":"publisher","unstructured":"Lee J, Chung S-Y (2020) Robust training with ensemble consensus. In: International conference on learning representations (ICLR) . 
https:\/\/doi.org\/10.48550\/arXiv.1910.09792","DOI":"10.48550\/arXiv.1910.09792"},{"key":"1360_CR45","doi-asserted-by":"publisher","first-page":"209","DOI":"10.1016\/j.neunet.2021.06.012","volume":"143","author":"D Ji","year":"2021","unstructured":"Ji D, Oh D, Hyun Y, Kwon O-M, Park M-J (2021) How to handle noisy labels for robust learning from uncertainty. Neural Netw 143:209\u2013217. https:\/\/doi.org\/10.1016\/j.neunet.2021.06.012","journal-title":"Neural Netw"},{"key":"1360_CR46","doi-asserted-by":"publisher","unstructured":"Ghosh A, Kumar H, Sastry PS (2017) Robust loss functions under label noise for deep neural networks. In: Proceedings of the 31th AAAI conference on artificial intelligence, pp 1919\u20131925. https:\/\/doi.org\/10.48550\/arXiv.1712.09482","DOI":"10.48550\/arXiv.1712.09482"},{"key":"1360_CR47","doi-asserted-by":"publisher","unstructured":"Toneva M, Sordoni A, Combes RT, Trischler A, Bengio Y, Gordon GJ (2019) An empirical study of example forgetting during deep neural network learning. In: International conference on learning representations (ICLR). https:\/\/doi.org\/10.48550\/arXiv.1812.05159","DOI":"10.48550\/arXiv.1812.05159"},{"key":"1360_CR48","doi-asserted-by":"publisher","unstructured":"Lin T, Goyal P, Girshick R, He K, Doll\u00e1r P (2017) Focal loss for dense object detection. In: 2017 IEEE international conference on computer vision (ICCV), pp 2999\u20133007. https:\/\/doi.org\/10.1109\/TPAMI.2018.2858826","DOI":"10.1109\/TPAMI.2018.2858826"},{"key":"1360_CR49","doi-asserted-by":"publisher","first-page":"892","DOI":"10.1109\/TPAMI.2014.2307881","volume":"23","author":"S-J Huang","year":"2010","unstructured":"Huang S-J, Jin R, Zhou Z-H (2010) Active learning by querying informative and representative examples. Adv Neural Inf Process Syst (NeurIPS) 23:892\u2013900. 
https:\/\/doi.org\/10.1109\/TPAMI.2014.2307881","journal-title":"Adv Neural Inf Process Syst (NeurIPS)"},{"key":"1360_CR50","doi-asserted-by":"publisher","unstructured":"Koh PW, Liang P (2017) Understanding black-box predictions via influence functions. In: Proceedings of the 34th international conference on machine learning (ICML), pp 1885\u20131894. https:\/\/doi.org\/10.48550\/arXiv.1703.04730","DOI":"10.48550\/arXiv.1703.04730"},{"key":"1360_CR51","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1186\/s40537-019-0197-0","volume":"6","author":"C Shorten","year":"2019","unstructured":"Shorten C, Khoshgoftaar TM (2019) A survey on image data augmentation for deep learning. J Big Data 6:1\u201348","journal-title":"J Big Data"},{"key":"1360_CR52","doi-asserted-by":"publisher","unstructured":"Wei, J., Zou, K(2019) EDA: easy data augmentation techniques for boosting performance on text classification tasks. In: Proceedings of the 2019 conference on empirical methods in natural language processing (EMNLP), pp 6382\u20136388. https:\/\/doi.org\/10.18653\/v1\/D19-1670","DOI":"10.18653\/v1\/D19-1670"},{"key":"1360_CR53","doi-asserted-by":"publisher","unstructured":"Lee H, Hwang SJ, Shin J (2020) Self-supervised label augmentation via input transformations. In: Proceedings of the 37th international conference on machine learning (ICML), pp 5714\u20135724. https:\/\/doi.org\/10.48550\/arXiv.1910.05872","DOI":"10.48550\/arXiv.1910.05872"},{"key":"1360_CR54","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2021.107605","volume":"235","author":"W Gao","year":"2022","unstructured":"Gao W, Wu M, Lam S-K, Xia Q, Zou J (2022) Decoupled self-supervised label augmentation for fully-supervised image classification. Knowl-Based Syst 235:107605. 
https:\/\/doi.org\/10.1016\/j.knosys.2021.107605","journal-title":"Knowl-Based Syst"},{"key":"1360_CR55","doi-asserted-by":"publisher","unstructured":"Gui X, Wang W, Tian Z (2021) Towards understanding deep learning from noisy labels with small-loss criterion. In: Proceedings of the 30th international joint conference on artificial intelligence (IJCAI), pp 2469\u20132475 https:\/\/doi.org\/10.48550\/arXiv.2106.09291","DOI":"10.48550\/arXiv.2106.09291"},{"key":"1360_CR56","doi-asserted-by":"publisher","unstructured":"Chang H-S, Learned-Miller E, McCallum A (2017) Active bias: training more accurate neural networks by emphasizing high variance samples. Adv Neural Inf Process Syst (NeurIPS) 30:1002\u20131012. https:\/\/doi.org\/10.48550\/arXiv.1704.07433","DOI":"10.48550\/arXiv.1704.07433"},{"key":"1360_CR57","doi-asserted-by":"publisher","unstructured":"Li Y, Long G, Shen T, Zhou T, Jiang J (2020) Self-attention enhanced selective gate with entity-aware embedding for distantly supervised relation extraction. In: Proceedings of the AAAI conference on artificial intelligence, Vol. 34, pp 8269\u20138276. https:\/\/doi.org\/10.48550\/arXiv.1911.11899","DOI":"10.48550\/arXiv.1911.11899"},{"key":"1360_CR58","doi-asserted-by":"publisher","unstructured":"Nayak T, Ng HT (2020) Effective modeling of encoder-decoder architecture for joint entity and relation extraction. In: Proceedings of the AAAI conference on artificial intelligence, Vol. 34, pp 8528\u20138535. https:\/\/doi.org\/10.48550\/arXiv.1911.09886","DOI":"10.48550\/arXiv.1911.09886"},{"key":"1360_CR59","doi-asserted-by":"publisher","first-page":"183","DOI":"10.1016\/j.ins.2019.09.006","volume":"509","author":"Z Geng","year":"2020","unstructured":"Geng Z, Chen G, Han Y, Lu G, Li F (2020) Semantic relation extraction using sequential and tree-structured lstm with attention. Inf Sci 509:183\u2013192. 
https:\/\/doi.org\/10.1016\/j.ins.2019.09.006","journal-title":"Inf Sci"},{"key":"1360_CR60","doi-asserted-by":"publisher","unstructured":"Simonyan K, Zisserman A(2015) Very deep convolutional networks for large-scale image recognition. In: International conference on learning representations (ICLR), pp 2691\u20132699. https:\/\/doi.org\/10.48550\/arXiv.1409.1556","DOI":"10.48550\/arXiv.1409.1556"},{"key":"1360_CR61","doi-asserted-by":"publisher","unstructured":"Pennington J, Socher R, Manning C (2014) GloVe: global vectors for word representation. In: Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pp 1532\u20131543. https:\/\/doi.org\/10.3115\/v1\/D14-1162","DOI":"10.3115\/v1\/D14-1162"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01360-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-024-01360-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-024-01360-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,5,16]],"date-time":"2024-05-16T18:22:55Z","timestamp":1715883775000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-024-01360-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,3,1]]},"references-count":61,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,6]]}},"alternative-id":["1360"],"URL":"https:\/\/doi.org\/10.1007\/s40747-024-01360-z","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"type":"print","value":"2199-4536"},{"type":"electronic","value":"2198-6053"}],"subject":[],"pu
blished":{"date-parts":[[2024,3,1]]},"assertion":[{"value":"29 May 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"21 January 2024","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"1 March 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}