{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,26]],"date-time":"2026-01-26T00:58:29Z","timestamp":1769389109771,"version":"3.49.0"},"reference-count":30,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2025,6,1]],"date-time":"2025-06-01T00:00:00Z","timestamp":1748736000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,6,9]],"date-time":"2025-06-09T00:00:00Z","timestamp":1749427200000},"content-version":"vor","delay-in-days":8,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100003451","name":"Universidad del Pa\u00eds Vasco","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100003451","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Cogn Comput"],"published-print":{"date-parts":[[2025,6]]},"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>In machine learning, deep metric learning from original data is essential, with supervised contrastive learning being a notable approach. This method aims to form a deep feature space where similar samples from the same class are clustered together, while dissimilar samples from different classes are separated. However, a common limitation of contrastive learning methods is that they utilize the entire feature space for data embedding and often neglect the within-class variability. To overcome this limitation, we propose a novel supervised contrastive learning method that decomposes deep features into two distinct components: common features, which encapsulate the essential, class-defining characteristics, and style features, which capture the within-class variability and nuanced differences. 
Additionally, we enhance this approach by introducing an overlapping field that synergistically integrates elements from both feature spaces, enabling a more comprehensive and robust feature representation. Our experiments with different image datasets and deep encoders, including CNNs and transformers, show that our approach outperforms traditional single-feature contrastive methods. On the CIFAR100 and PASCAL VOC databases, traditional supervised contrastive learning achieved accuracy rates of 75.5% and 51.41%, respectively, while our method improved them to 77.81% and 59.38%, respectively. We present an algorithm for deep contrastive learning that utilizes two feature spaces: one for encoding common class features and another for capturing within-class variability. This is achieved by partitioning the features of the last layer of the encoder into (i) a common field and (ii) a style field. Our loss function contrasts the common features while summarizing the style features within the same class so that the style field can capture the intra-class variability.<\/jats:p>","DOI":"10.1007\/s12559-025-10430-4","type":"journal-article","created":{"date-parts":[[2025,6,9]],"date-time":"2025-06-09T09:49:04Z","timestamp":1749462544000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["Deep Feature Disentanglement for Supervised Contrastive Learning: Application to Image Classification"],"prefix":"10.1007","volume":"17","author":[{"given":"F.","family":"Dornaika","sequence":"first","affiliation":[]},{"given":"B.","family":"Wang","sequence":"additional","affiliation":[]},{"given":"J.","family":"Charafeddine","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,6,9]]},"reference":[{"key":"10430_CR1","doi-asserted-by":"crossref","unstructured":"Alirezazadeh P, Dornaika F, Moujahid A. 
A deep learning loss based on additive cosine margin: application to fashion style and face recognition. Appl Soft Comput. 2022;131.","DOI":"10.1016\/j.asoc.2022.109776"},{"key":"10430_CR2","doi-asserted-by":"crossref","unstructured":"Alirezazadeh P, Dornaika F, Moujahid A. Deep learning with discriminative margin loss for cross-domain consumer-to-shop clothes retrieval. Sensors. 2022;22(7).","DOI":"10.3390\/s22072660"},{"key":"10430_CR3","unstructured":"Bachman P, Hjelm RD, Buchwalter W. Learning representations by maximizing mutual information across views. CoRR. arXiv:1906.00910, 2019."},{"key":"10430_CR4","doi-asserted-by":"crossref","unstructured":"Bao S, Xu Q, Yang Z, Cao X, Huang Q. Rethinking collaborative metric learning: toward an efficient alternative without negative sampling. IEEE Trans Pattern Anal Mach Intell. 2023;45:1017\u20131035.","DOI":"10.1109\/TPAMI.2022.3141095"},{"key":"10430_CR5","doi-asserted-by":"crossref","unstructured":"Chen D, Chen Y, Li Y, Mao F, He Y, Xue H. Self-supervised learning for few-shot image classification. In: IEEE international conference on acoustics, speech and signal processing (ICASSP), pp 1745\u20131749, 2021.","DOI":"10.1109\/ICASSP39728.2021.9413783"},{"key":"10430_CR6","unstructured":"Chen T, Kornblith S, Norouzi M, Hinton G. A simple framework for contrastive learning of visual representations. In: Proceedings of the 37th international conference on machine learning, ICML\u201920, 2020."},{"key":"10430_CR7","doi-asserted-by":"crossref","unstructured":"Deng J, Guo J, Xue N, Zafeiriou S. Arcface: Additive angular margin loss for deep face recognition. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition, pp 4690\u20134699, 2019.","DOI":"10.1109\/CVPR.2019.00482"},{"key":"10430_CR8","unstructured":"Deng X, Zhang Z. Deep causal metric learning. 
In: Proceedings of the 39th international conference on machine learning, volume 162 of Proceedings of Machine Learning Research, pp 4993\u20135006, 17\u201323 Jul 2022."},{"key":"10430_CR9","doi-asserted-by":"crossref","unstructured":"Gonzalez-Zapata J, Reyes-Amezcua I, Flores-Araiza D, Mendez-Ruiz M, Ochoa-Ruiz G, Mendez-Vazquez A. Guided deep metric learning. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR) Workshops, pp 1481\u20131489, 2022.","DOI":"10.1109\/CVPRW56347.2022.00154"},{"key":"10430_CR10","unstructured":"Grill J-B, Strub F, Altch\u00e9 F, Tallec C, Richemond PH, Buchatskaya E, Doersch C, Pires BA, Guo ZD, Azar MG, Piot B, Kavukcuoglu K, Munos R, Valko M. Bootstrap your own latent: a new approach to self-supervised learning. In: Proceedings of the 34th international conference on neural information processing systems, NIPS\u201920, 2020."},{"key":"10430_CR11","doi-asserted-by":"crossref","unstructured":"Hadsell R, Chopra S, LeCun Y. Dimensionality reduction by learning an invariant mapping. In: 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR\u201906), vol 2, pp 1735\u20131742, 2006.","DOI":"10.1109\/CVPR.2006.100"},{"key":"10430_CR12","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. CoRR. arXiv:1512.03385, 2015.","DOI":"10.1109\/CVPR.2016.90"},{"key":"10430_CR13","doi-asserted-by":"crossref","unstructured":"Hoffer E, Ailon N. Deep metric learning using triplet network. In A. Feragen, M. Pelillo, and M. Loog, editors, Similarity-based pattern recognition, pp 84\u201392. Springer International Publishing, 2015.","DOI":"10.1007\/978-3-319-24261-3_7"},{"key":"10430_CR14","unstructured":"H\u00e9naff OJ, Srinivas A, De Fauw J, Razavi A, Doersch C, Eslami SMA, Oord Avd. Data-efficient image recognition with contrastive predictive coding. 
In: International conference on machine learning, 2020."},{"key":"10430_CR15","unstructured":"Khosla P, Teterwak P, Wang C, Sarna A, Tian Y, Isola P, Maschinot A, Liu C, Krishnan D. Supervised contrastive learning. In: Advances in neural information processing systems, vol 33, 2020."},{"key":"10430_CR16","doi-asserted-by":"crossref","unstructured":"Kim S, Kim D, Cho M, Kwak S. Self-taught metric learning without labels. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR), pp 7431\u20137441, 2022.","DOI":"10.1109\/CVPR52688.2022.00728"},{"key":"10430_CR17","doi-asserted-by":"crossref","unstructured":"Kobs K, Steininger M, Dulny A, Hotho A. Do different deep metric learning losses lead to similar learned features? In: Proceedings of the IEEE\/CVF international conference on computer vision (ICCV), pp 10644\u201310654, 2021.","DOI":"10.1109\/ICCV48922.2021.01047"},{"key":"10430_CR18","doi-asserted-by":"crossref","unstructured":"Liu Z, Mao H, Wu C-Y, Feichtenhofer C, Darrell T, Xie S. A convnet for the 2020s. In: Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (CVPR), 2022.","DOI":"10.1109\/CVPR52688.2022.01167"},{"key":"10430_CR19","doi-asserted-by":"crossref","unstructured":"Moon W, Kim J-H, Heo J-P. Tailoring self-supervision for supervised learning, 2022.","DOI":"10.1007\/978-3-031-19806-9_20"},{"key":"10430_CR20","doi-asserted-by":"crossref","unstructured":"Qian Q, Shang L, Sun B, Hu J, Li H, Jin R. Softtriple loss: deep metric learning without triplet sampling. CoRR, arXiv:1909.05235, 2019.","DOI":"10.1109\/ICCV.2019.00655"},{"key":"10430_CR21","unstructured":"Robinson J, Chuang C, Sra S, Jegelka S. Contrastive learning with hard negative samples. ICLR. arXiv:2010.04592, 2021."},{"key":"10430_CR22","unstructured":"Roth K, Milbich T, Ommer B, Cohen JP, Ghassemi M. Simultaneous similarity-based self-distillation for deep metric learning. In: M. Meila and T. 
Zhang, editors, Proceedings of the 38th international conference on machine learning, volume 139 of Proceedings of Machine Learning Research, pp 9095\u20139106, 2021."},{"key":"10430_CR23","unstructured":"Sohn K. Improved deep metric learning with multi-class n-pair loss objective. In: Advances in neural information processing systems, pp 1857\u20131865, Red Hook, NY, USA, 2016."},{"key":"10430_CR24","doi-asserted-by":"crossref","unstructured":"Tan C, Xia J, Wu L, Li SZ. Co-learning: learning from noisy labels with self-supervision. In: Proceedings of the 29th ACM International Conference on Multimedia, MM \u201921, pp 1405\u20131413, 2021.","DOI":"10.1145\/3474085.3475622"},{"key":"10430_CR25","doi-asserted-by":"crossref","unstructured":"Wang F, Cheng J, Liu W, Liu H. Additive margin softmax for face verification. IEEE Signal Process Lett. 2018;25(7):926\u201330.","DOI":"10.1109\/LSP.2018.2822810"},{"key":"10430_CR26","doi-asserted-by":"crossref","unstructured":"Wang X, Han X, Huang W, Dong D, Scott MR. Multi-similarity loss with general pair weighting for deep metric learning. In: Proceedings of the IEEE conference on computer vision and pattern recognition, pp 5022\u20135030, 2019.","DOI":"10.1109\/CVPR.2019.00516"},{"key":"10430_CR27","doi-asserted-by":"crossref","unstructured":"Wei X-S, Song Y-Z, Mac Aodha O, Wu J, Peng Y, Tang J, Yang J, Belongie S. Fine-grained image analysis with deep learning: a survey. IEEE Trans Pattern Anal Mach Intell. 2021;44(12):8927\u20138948.","DOI":"10.1109\/TPAMI.2021.3126648"},{"key":"10430_CR28","doi-asserted-by":"publisher","first-page":"108","DOI":"10.1016\/j.neucom.2018.02.040","volume":"290","author":"B Wu","year":"2018","unstructured":"Wu B, Chen Z, Wang J, Wu H. Exponential discriminative metric embedding in deep learning. Neurocomputing. 2018;290:108\u201320.","journal-title":"Neurocomputing."},{"key":"10430_CR29","unstructured":"Zbontar J, Jing L, Misra I, LeCun Y, Deny S. 
Barlow twins: self-supervised learning via redundancy reduction, 2021."},{"key":"10430_CR30","doi-asserted-by":"crossref","unstructured":"Zhu S, Yang T, Chen C. Visual explanation for deep metric learning. IEEE Transactions on Image Processing, 2021.","DOI":"10.1109\/TIP.2021.3107214"}],"container-title":["Cognitive Computation"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s12559-025-10430-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s12559-025-10430-4\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s12559-025-10430-4.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,26]],"date-time":"2025-06-26T02:41:24Z","timestamp":1750905684000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s12559-025-10430-4"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6]]},"references-count":30,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2025,6]]}},"alternative-id":["10430"],"URL":"https:\/\/doi.org\/10.1007\/s12559-025-10430-4","relation":{},"ISSN":["1866-9956","1866-9964"],"issn-type":[{"value":"1866-9956","type":"print"},{"value":"1866-9964","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6]]},"assertion":[{"value":"15 January 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 February 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"9 June 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article 
History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of Interest"}}],"article-number":"117"}}