{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,24]],"date-time":"2026-02-24T23:23:26Z","timestamp":1771975406102,"version":"3.50.1"},"reference-count":57,"publisher":"MDPI AG","issue":"2","license":[{"start":{"date-parts":[[2025,1,22]],"date-time":"2025-01-22T00:00:00Z","timestamp":1737504000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Computers"],"abstract":"<jats:p>Machine learning applied to image-based number recognition has made significant strides in recent years. Recent use of Large Language Models (LLMs) in natural language search and generation of text have improved performance for general images, yet performance limitations still exist for data subsets related to color blindness. In this paper, we replicated the training of six distinct neural networks (MNIST, LeNet5, VGG16, AlexNet, and two AlexNet modifications) using deep learning techniques with the MNIST dataset and the Ishihara-Like MNIST dataset. While many prior works have dealt with MNIST, the Ishihara adaption addresses red-green combinations of color blindness, allowing for further research in color distortion. Through this research, we applied pre-processing to accentuate the effects of red-green and monochrome colorblindness and hyper-parameterized the existing architectures, ultimately achieving better overall performance than currently published in known works.<\/jats:p>","DOI":"10.3390\/computers14020034","type":"journal-article","created":{"date-parts":[[2025,1,22]],"date-time":"2025-01-22T07:55:32Z","timestamp":1737532532000},"page":"34","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Number Recognition Through Color Distortion Using Convolutional Neural Networks"],"prefix":"10.3390","volume":"14","author":[{"ORCID":"https:\/\/orcid.org\/0009-0009-5769-2590","authenticated-orcid":false,"given":"Christopher","family":"Henshaw","sequence":"first","affiliation":[{"name":"Virginia Tech National Security Institute, Blacksburg, VA 24060, USA"},{"name":"Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA 24061, USA"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-3408-2833","authenticated-orcid":false,"given":"Jacob","family":"Dennis","sequence":"additional","affiliation":[{"name":"Virginia Tech National Security Institute, Blacksburg, VA 24060, USA"}]},{"given":"Jonathan","family":"Nadzam","sequence":"additional","affiliation":[{"name":"Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA 24061, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2437-3410","authenticated-orcid":false,"given":"Alan J.","family":"Michaels","sequence":"additional","affiliation":[{"name":"Virginia Tech National Security Institute, Blacksburg, VA 24060, USA"},{"name":"Bradley Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA 24061, USA"}]}],"member":"1968","published-online":{"date-parts":[[2025,1,22]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"142642","DOI":"10.1109\/ACCESS.2020.3012542","article-title":"Handwritten Optical Character Recognition (OCR): A Comprehensive Systematic Literature Review (SLR)","volume":"8","author":"Memon","year":"2020","journal-title":"IEEE Access"},{"key":"ref_2","unstructured":"Tseng, Y.C., and Pan, H.K. (2001, January 22\u201326). Secure and invisible data hiding in 2-color images. Proceedings of the Proceedings IEEE INFOCOM 2001. Conference on Computer Communications. Twentieth Annual Joint Conference of the IEEE Computer and Communications Society (Cat. No.01CH37213), Anchorage, AK, USA."},{"key":"ref_3","unstructured":"LeCun, Y., Cortes, C., and Burges, C. (2024, December 12). MNIST Handwritten Digit Database. ATT Labs [Online]. Available online: http:\/\/yann.lecun.com\/exdb\/mnist."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Baldominos, A., Saez, Y., and Isasi, P. (2019). A Survey of Handwritten Character Recognition with MNIST and EMNIST. Appl. Sci., 9.","DOI":"10.3390\/app9153169"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Cohen, G., Afshar, S., Tapson, J., and van Schaik, A. (2017, January 14\u201319). EMNIST: Extending MNIST to handwritten letters. Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA.","DOI":"10.1109\/IJCNN.2017.7966217"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Nocentini, O., Kim, J., Bashir, M.Z., and Cavallo, F. (2022). Image Classification Using Multiple Convolutional Neural Networks on the Fashion-MNIST Dataset. Sensors, 22.","DOI":"10.3390\/s22239544"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"364","DOI":"10.1109\/TE.2013.2239997","article-title":"Traffic Sign Recognition for Computer Vision Project-Based Learning","volume":"56","author":"Serrat","year":"2013","journal-title":"IEEE Trans. Educ."},{"key":"ref_8","unstructured":"Shaker, A., Saralajew, S., Gashteovski, K., Faust, I., Xu, Z., Kotnis, B., Ben-Rim, W., and Lawrence, C. (2024, December 12). Ishihara Like MNIST. Available online: https:\/\/www.kaggle.com\/datasets\/ammarshaker\/ishihara-mnist."},{"key":"ref_9","unstructured":"Ishihara, S. Tests for colour-blindness, 1951."},{"key":"ref_10","unstructured":"(2024, December 12). Picryl. Available online: https:\/\/picryl.com\/media\/eight-ishihara-charts-for-testing-colour-blindness-europe-wellcome-l0059155-cf3385."},{"key":"ref_11","unstructured":"(2024, December 12). We Are Colorblind. Available online: https:\/\/wearecolorblind.com\/articles\/a-quick-introduction-to-color-blindness\/."},{"key":"ref_12","unstructured":"(2024, December 12). American Academy of Ophthalmology. Available online: https:\/\/www.aao.org\/eye-health\/anatomy\/cones#:~:text=There%20are%20three%20types%20of,%2Dsensing%20cones%20(10%20percent)."},{"key":"ref_13","unstructured":"National Eye Institute (2024, December 12). Color Blindness, Available online: https:\/\/www.nei.nih.gov\/learn-about-eye-health\/eye-conditions-and-diseases\/color-blindness."},{"key":"ref_14","unstructured":"Mayo Clinic (2024, December 12). Color Blindness. Available online: https:\/\/www.mayoclinic.org\/diseases-conditions\/poor-color-vision\/symptoms-causes\/syc-20354988."},{"key":"ref_15","unstructured":"National Eye Institute (2024, December 12). Types of Color Vision Deficiency, Available online: https:\/\/www.nei.nih.gov\/learn-about-eye-health\/eye-conditions-and-diseases\/color-blindness\/types-color-vision-deficiency."},{"key":"ref_16","unstructured":"(2024, December 12). GavinAdmin. Available online: https:\/\/doctorofeye.com\/colour-blindness\/."},{"key":"ref_17","unstructured":"(2024, December 12). MedlinePlus, Available online: https:\/\/medlineplus.gov\/genetics\/condition\/achromatopsia\/#frequency."},{"key":"ref_18","unstructured":"(2024, December 12). PickPik. Available online: https:\/\/www.pickpik.com\/fruit-mixed-color-food-assorted-variety-62464."},{"key":"ref_19","unstructured":"Pilestone Inc. (2024, December 12). Color Blind Vision Simulator. Available online: https:\/\/pilestone.com\/pages\/color-blindness-simulator-1."},{"key":"ref_20","unstructured":"Petrovic, G., and Fujita, H. (2017). Deep Correct: Deep Learning Color Correction for Color Blindness, IOS Press."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Lin, H.Y., Chen, L.Q., and Wang, M.L. (2019). Improving Discrimination in Color Vision Deficiency by Image Re-Coloring. Sensors, 19.","DOI":"10.3390\/s19102250"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Jefferson, L., and Harvey, R. (2006, January 23\u201325). Accommodating color blind computer users. Proceedings of the 8th International ACM SIGACCESS Conference on Computers and Accessibility, Portland, OR, USA.","DOI":"10.1145\/1168987.1168996"},{"key":"ref_23","unstructured":"Jefferson, L., and Harvey, R. (May, January 28). An interface to support color blind computer users. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, San Jose, CA, USA."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Tsekouras, G.E., Rigos, A., Chatzistamatis, S., Tsimikas, J., Kotis, K., Caridakis, G., and Anagnostopoulos, C.N. (2021). A Novel Approach to Image Recoloring for Color Vision Deficiency. Sensors, 21.","DOI":"10.3390\/s21082740"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"848","DOI":"10.1109\/41.649946","article-title":"Road traffic sign detection and classification","volume":"44","author":"Moreno","year":"1997","journal-title":"IEEE Trans. Ind. Electron."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Bahlmann, C., Zhu, Y., Ramesh, V., Pellkofer, M., and Koehler, T. (2005, January 6\u20138). A system for traffic sign detection, tracking, and recognition using color, shape, and motion information. Proceedings of the IEEE Proceedings. Intelligent Vehicles Symposium, Las Vegas, NV, USA.","DOI":"10.1109\/IVS.2005.1505111"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Creusen, I., Hazelhoff, L., and de With, P. (October, September 30). Color transformation for improved traffic sign detection. Proceedings of the 2012 19th IEEE International Conference on Image Processing, Orlando, FL, USA.","DOI":"10.1109\/ICIP.2012.6466896"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Xie, Z., and Lyu, R. (2024). Whether pattern memory can be truly realized in deep neural network?. Research Square.","DOI":"10.21203\/rs.3.rs-4632836\/v1"},{"key":"ref_29","unstructured":"Solonko, M. (2024, December 12). Reading Color Blindness Charts: Deep Learning and Computer Vision. Available online: https:\/\/towardsdatascience.com\/reading-color-blindness-charts-deep-learning-and-computer-vision-a8c824dd71cd."},{"key":"ref_30","unstructured":"Bottou, L., Cortes, C., Denker, J., Drucker, H., Guyon, I., Jackel, L., LeCun, Y., Muller, U., Sackinger, E., and Simard, P. (1994, January 9\u201313). Comparison of classifier methods: A case study in handwritten digit recognition. Proceedings of the 12th IAPR International Conference on Pattern Recognition, Vol. 3\u2014Conference C: Signal Processing (Cat. No.94CH3440-5), Jerusalem, Israel."},{"key":"ref_31","unstructured":"GeeksforGeeks (2024, December 12). MNIST Dataset: Practical Applications Using Keras and PyTorch. Available online: https:\/\/www.geeksforgeeks.org\/mnist-dataset\/."},{"key":"ref_32","unstructured":"Clanuwat, T., Bober-Irizar, M., Kitamoto, A., Lamb, A., Yamamoto, K., and Ha, D. (2018). Deep Learning for Classical Japanese Literature. arXiv."},{"key":"ref_33","unstructured":"Al-Noori, A.H., Talib, M., and Harbi S., J. (2023, January 26\u201327). The Classification of Ancient Sumerian Characters using Convolutional Neural Network. Proceedings of the 1st International Conference on Computing and Emerging Sciences, Lahore, Pakistan."},{"key":"ref_34","unstructured":"Zalando Research, and Crawford Company (2024, December 12). Fashion Mnist. Available online: https:\/\/www.kaggle.com\/datasets\/zalando-research\/fashionmnist."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Xhaferra, E., Cina, E., and Toti, L. (2022, January 20\u201322). Classification of Standard FASHION MNIST Dataset Using Deep Learning Based CNN Algorithms. Proceedings of the 2022 International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT), Ankara, Turkey.","DOI":"10.1109\/ISMSIT56059.2022.9932737"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Rim, W.B., Shaker, A., Xu, Z., Gashteovski, K., Kotnis, B., Lawrence, C., Quittek, J., and Saralajew, S. (2024, January 9\u201313). A Human-Centric Assessment of the Usefulness of Attribution Methods in Computer Vision. Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Vilnius, Lithuania.","DOI":"10.1007\/978-3-031-70362-1_2"},{"key":"ref_37","unstructured":"Potjewyd, G. (2024, December 12). The Color Code. Available online: https:\/\/theophthalmologist.com\/business-profession\/the-color-code."},{"key":"ref_38","unstructured":"(2024, December 12). Welcome Collection. Available online: https:\/\/wellcomecollection.org\/search\/works."},{"key":"ref_39","unstructured":"Ishihara, S. (2024, December 12). Ishihara Instructions. Available online: https:\/\/web.stanford.edu\/group\/vista\/wikiupload\/0\/0a\/Ishihara.14.Plate.Instructions.pdf."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Dhawale, K., Vohra, A.S., Jain, P., and Kumar, T. (2021). A Framework to Identify Color Blindness Charts Using Image Processing and CNN. Communication, Networks and Computing: Second International Conference (CNC 2020), Gwalior, India, 29\u201331 December 2020, Springer. Revised Selected Papers 2.","DOI":"10.1007\/978-981-16-8896-6_8"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"1480","DOI":"10.1109\/TPAMI.2014.2366765","article-title":"Text Detection and Recognition in Imagery: A Survey","volume":"37","author":"Ye","year":"2015","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Imran, F., Hossain, D.M.A., and Mamun, M.A. (2020, January 5\u20137). Identification and Recognition of Printed Distorted Characters Using Proposed DCR Method. Proceedings of the 2020 IEEE Region 10 Symposium (TENSYMP), Dhaka, Bangladesh.","DOI":"10.1109\/TENSYMP50017.2020.9230646"},{"key":"ref_43","unstructured":"Paravisionlab.co.in (2024, December 12). LeNet-5: A Simple Yet Powerful CNN for Image Classification. Available online: https:\/\/paravisionlab.co.in\/lenet-5-architecture\/."},{"key":"ref_44","unstructured":"Boesch, G. (2024, December 12). Very Deep Convolutional Networks (VGG) Essential Guide. Available online: https:\/\/viso.ai\/deep-learning\/vgg-very-deep-convolutional-networks\/."},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"84","DOI":"10.1145\/3065386","article-title":"ImageNet Classification with Deep Convolutional Neural Networks","volume":"60","author":"Krizhevsky","year":"2017","journal-title":"Commun. ACM"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"012097","DOI":"10.1088\/1755-1315\/428\/1\/012097","article-title":"Improvement of MNIST Image Recognition Based on CNN","volume":"428","author":"Wang","year":"2020","journal-title":"IOP Conf. Ser. Earth Environ. Sci."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Cheng, S., Shang, G., and Zhang, L. (2018, January 12\u201314). Handwritten digit recognition based on improved VGG16 network. Proceedings of the Tenth International Conference on Graphics and Image Processing (ICGIP 2018), Chengdu, China.","DOI":"10.1117\/12.2524281"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). You Only Look Once: Unified, Real-Time Object Detection. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.91"},{"key":"ref_49","unstructured":"(2024, December 12). Kaggle. Available online: https:\/\/www.kaggle.com\/."},{"key":"ref_50","unstructured":"GeeksforGeeks (2024, December 12). How to Choose Batch Size and Number of Epochs When Fitting a Model?. Available online: https:\/\/www.geeksforgeeks.org\/how-to-choose-batch-size-and-number-of-epochs-when-fitting-a-model\/."},{"key":"ref_51","unstructured":"Kaggle, A.J. (2024, December 12). Available online: https:\/\/www.kaggle.com\/code\/amyjang\/tensorflow-mnist-cnn-tutorial\/."},{"key":"ref_52","unstructured":"Thakur, A. (2024, December 12). ReLU vs. Sigmoid Function in Deep Neural Networks. Available online: https:\/\/wandb.ai\/ayush-thakur\/dl-question-bank\/reports\/ReLU-vs-Sigmoid-Function-in-Deep-Neural-Networks\u2013VmlldzoyMDk0MzI#:~:text=The%20model%20trained%20with%20ReLU,better%20when%20trained%20with%20ReLU."},{"key":"ref_53","doi-asserted-by":"crossref","first-page":"2278","DOI":"10.1109\/5.726791","article-title":"Gradient-based learning applied to document recognition","volume":"86","author":"Lecun","year":"1998","journal-title":"Proc. IEEE"},{"key":"ref_54","unstructured":"Kumar, S. (2024, December 12). Comparison of Sigmoid, Tanh and Relu Activation Functions. Available online: https:\/\/www.aitude.com\/comparison-of-sigmoid-tanh-and-relu-activation-functions\/."},{"key":"ref_55","unstructured":"Melanie (2024, December 12). Unveiling the Secrets of the VGG Model: A Deep Dive with Daniel. Available online: https:\/\/datascientest.com\/en\/unveiling-the-secrets-of-the-vgg-model-a-deep-dive-with-daniel#:~:text=A%20little%20history,Recognition%20Challenge)%20competition%20in%202014."},{"key":"ref_56","unstructured":"Wei, J. (2024, December 12). AlexNet: The Architecture That Challenged CNNs. Available online: https:\/\/towardsdatascience.com\/alexnet-the-architecture-that-challenged-cnns-e406d5297951."},{"key":"ref_57","doi-asserted-by":"crossref","unstructured":"Taheri, R., Arabikhan, F., Gegov, A., and Akbari, N. (2023). Robust Aggregation Function in Federated Learning. International Conference on Information and Knowledge Systems, Springer.","DOI":"10.1007\/978-3-031-51664-1_12"}],"container-title":["Computers"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2073-431X\/14\/2\/34\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,8]],"date-time":"2025-10-08T10:33:40Z","timestamp":1759919620000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2073-431X\/14\/2\/34"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,1,22]]},"references-count":57,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2025,2]]}},"alternative-id":["computers14020034"],"URL":"https:\/\/doi.org\/10.3390\/computers14020034","relation":{},"ISSN":["2073-431X"],"issn-type":[{"value":"2073-431X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,1,22]]}}}