{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T02:14:16Z","timestamp":1760235256235,"version":"build-2065373602"},"reference-count":23,"publisher":"MDPI AG","issue":"15","license":[{"start":{"date-parts":[[2021,8,3]],"date-time":"2021-08-03T00:00:00Z","timestamp":1627948800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Electronics"],"abstract":"<jats:p>Interpretability has made significant strides in recent years, enabling the formerly black-box models to reach new levels of transparency. These kinds of models can be particularly useful to broaden the applicability of machine learning-based systems to domains where\u2014apart from the predictions\u2014appropriate justifications are also required (e.g., forensics and medical image analysis). In this context, techniques that focus on visual explanations are of particular interest here, due to their ability to directly portray the reasons that support a given prediction. Therefore, in this document, we focus on presenting the core principles of interpretability and describing the main methods that deliver visual cues (including one that we designed for periocular recognition in particular). Based on these intuitions, the experiments performed show explanations that attempt to highlight the most important periocular components towards a non-match decision. Then, some particularly challenging scenarios are presented to naturally sustain our conclusions and thoughts regarding future directions.<\/jats:p>","DOI":"10.3390\/electronics10151861","type":"journal-article","created":{"date-parts":[[2021,8,3]],"date-time":"2021-08-03T08:16:39Z","timestamp":1627978599000},"page":"1861","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["A Short Survey on Machine Learning Explainability: An Application to Periocular Recognition"],"prefix":"10.3390","volume":"10","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-0360-7049","authenticated-orcid":false,"given":"Jo\u00e3o","family":"Brito","sequence":"first","affiliation":[{"name":"Department of Computer Science, Faculty of Engineering, Universidade da Beira Interior, 6201-001 Covilh\u00e3, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2551-8570","authenticated-orcid":false,"given":"Hugo","family":"Proen\u00e7a","sequence":"additional","affiliation":[{"name":"IT: Instituto de Telecomunica\u00e7\u00f5es, Department of Computer Science, Faculty of Engineering, Universidade da Beira Interior, 6201-001 Covilh\u00e3, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2021,8,3]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Carvalho, D.V., Pereira, E.M., and Cardoso, J.S. (2019). Machine Learning Interpretability: A Survey on Methods and Metrics. Electronics, 8.","DOI":"10.3390\/electronics8080832"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Huang, Z., and Li, Y. (2020, January 14\u201319). Interpretable and Accurate Fine-grained Recognition via Region Grouping. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00869"},{"key":"ref_3","unstructured":"Minaee, S., Abdolrashidi, A., Su, H., Bennamoun, M., and Zhang, D. (2019). Biometric Recognition Using Deep Learning: A Survey. arXiv, Available online: https:\/\/arxiv.org\/abs\/1912.00271."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"012017","DOI":"10.1088\/1742-6596\/1098\/1\/012017","article-title":"Application of Machine Learning to Biometric Systems- A Survey","volume":"1098","author":"Chato","year":"2018","journal-title":"J. Phys. Conf. Ser."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Damousis, Y., and Argyropoulos, S. (2012). Four Machine Learning Algorithms for Biometrics Fusion: A Comparative Study. Appl. Comput. Intell. Soft Comput., 2012.","DOI":"10.1155\/2012\/242401"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"888","DOI":"10.1109\/TIFS.2017.2771230","article-title":"Deep-PRWIS: Periocular Recognition Without the Iris and Sclera Using Deep Learning Frameworks","volume":"13","author":"Neves","year":"2018","journal-title":"IEEE Trans. Inf. Forens. Secur."},{"key":"ref_7","unstructured":"Molnar, C. (2021, July 30). Interpretable Machine Learning. Available online: https:\/\/christophm.github.io\/interpretable-ml-book\/."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"1189","DOI":"10.1214\/aos\/1013203451","article-title":"Greedy function approximation: A gradient boosting machine","volume":"29","author":"Friedman","year":"2001","journal-title":"Ann. Stat."},{"key":"ref_9","unstructured":"Apley, D. (2016). Visualizing the Effects of Predictor Variables in Black Box Supervised Learning Models. arXiv."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Ribeiro, M.T., Singh, S., and Guestrin, C. (2016, January 13\u201317). \u201cWhy Should I Trust You?\u201d: Explaining the Predictions of Any Classifier. Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA.","DOI":"10.1145\/2939672.2939778"},{"key":"ref_11","unstructured":"Lundberg, S., and Lee, S. (2017, January 4). A Unified Approach to Interpreting Model Predictions. Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Ribeiro, M.T., Singh, S., and Guestrin, C. (2018, January 2\u20137). Anchors: High-Precision Model-Agnostic Explanations. Proceedings of the AAAI Conference on Artificial Intelligence, Hilton New Orleans Riverside, New Orleans, LA, USA.","DOI":"10.1609\/aaai.v32i1.11491"},{"key":"ref_13","unstructured":"Simonyan, K., Vedaldi, A., and Zisserman, A. (2014). Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Nassih, R., and Berrado, A. (2020, January 23\u201324). State of the Art of Fairness, Interpretability and Explainability in Machine Learning: Case of PRIM. Proceedings of the 13th International Conference on Intelligent Systems: Theories and Applications, SITA\u201920, Association for Computing Machinery, New York, NY, USA.","DOI":"10.1145\/3419604.3419776"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Brito, J., and Proenca, H. (2021, January 19\u201325). A Deep Adversarial Framework for Visually Explainable Periocular Recognition. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, Virtual.","DOI":"10.1109\/CVPRW53098.2021.00161"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"He, K., Gkioxari, G., Doll\u00e1r, P., and Girshick, R. (2017, January 22\u201329). Mask R-CNN. Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.322"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Karras, T., Laine, S., Aittala, M., Hellsten, J., Lehtinen, J., and Aila, T. (2020, January 14\u201319). Analyzing and Improving the Image Quality of StyleGAN. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00813"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Padole, C., and Proen\u00e7a, H. (April, January 29). Periocular Recognition: Analysis of Performance Degradation Factors. Proceedings of the Fifth IAPR\/IEEE International Conference on Biometrics\u2013ICB 2012, New Delhi, India.","DOI":"10.1109\/ICB.2012.6199790"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Karras, T., Laine, S., and Aila, T. (2019, January 16\u201320). A Style-Based Generator Architecture for Generative Adversarial Networks. Proceedings of the 2019 IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00453"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"1529","DOI":"10.1109\/TPAMI.2009.66","article-title":"The UBIRIS.v2: A Database of Visible Wavelength Iris Images Captured On-The-Move and At-A-Distance","volume":"32","author":"Filipe","year":"2010","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"1017","DOI":"10.1109\/TIFS.2016.2636093","article-title":"Accurate Periocular Recognition Under Less Constrained Environment Using Semantics-Assisted Convolutional Neural Network","volume":"12","author":"Zhao","year":"2017","journal-title":"IEEE Trans. Inf. Forens. Secur."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"2937","DOI":"10.1109\/TIFS.2018.2833018","article-title":"Improving Periocular Recognition by Explicit Attention to Critical Regions in Deep Neural Network","volume":"13","author":"Zhao","year":"2018","journal-title":"IEEE Trans. Inf. Forens. Secur."},{"key":"ref_23","unstructured":"Commission, E. (2021, July 30). General Data Protection Regulation. Available online: https:\/\/gdpr-info.eu."}],"container-title":["Electronics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2079-9292\/10\/15\/1861\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T06:39:25Z","timestamp":1760164765000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2079-9292\/10\/15\/1861"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,8,3]]},"references-count":23,"journal-issue":{"issue":"15","published-online":{"date-parts":[[2021,8]]}},"alternative-id":["electronics10151861"],"URL":"https:\/\/doi.org\/10.3390\/electronics10151861","relation":{},"ISSN":["2079-9292"],"issn-type":[{"type":"electronic","value":"2079-9292"}],"subject":[],"published":{"date-parts":[[2021,8,3]]}}}