{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,24]],"date-time":"2026-02-24T09:13:10Z","timestamp":1771924390245,"version":"3.50.1"},"reference-count":20,"publisher":"MDPI AG","issue":"4","license":[{"start":{"date-parts":[[2025,10,15]],"date-time":"2025-10-15T00:00:00Z","timestamp":1760486400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["MAKE"],"abstract":"<jats:p>Classifying brain tumour transcriptomic data is crucial for precision medicine but remains challenging due to high dimensionality and limited interpretability of conventional models. This study benchmarks three image-based deep learning approaches, DeepInsight, Fotomics, and a novel saliency-guided convolutional neural network (CNN), for transcriptomic classification. DeepInsight utilises dimensionality reduction to spatially arrange gene features, while Fotomics applies Fourier transforms to encode expression patterns into structured images. The proposed method transforms each single-cell gene expression profile into an RGB image using PCA, UMAP, or t-SNE, enabling CNNs such as ResNet to learn spatially organised molecular features. Gradient-based saliency maps are employed to highlight gene regions most influential in model predictions. Evaluation is conducted on two biologically and technologically different datasets: single-cell RNA-seq from glioblastoma GSM3828672 and bulk microarray data from medulloblastoma GSE85217. Outcomes demonstrate that image-based deep learning methods, particularly those incorporating saliency guidance, provide a robust and interpretable framework for uncovering biologically meaningful patterns in complex high-dimensional omics data. 
For instance, ResNet-18 achieved the highest accuracies of 97.25% on the GSE85217 dataset and 91.02% on GSM3828672, outperforming other baseline models across multiple metrics.<\/jats:p>","DOI":"10.3390\/make7040119","type":"journal-article","created":{"date-parts":[[2025,10,15]],"date-time":"2025-10-15T14:04:02Z","timestamp":1760537042000},"page":"119","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Image-Based Deep Learning for Brain Tumour Transcriptomics: A Benchmark of DeepInsight, Fotomics, and Saliency-Guided CNNs"],"prefix":"10.3390","volume":"7","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3167-8418","authenticated-orcid":false,"given":"Ali","family":"Alyatimi","sequence":"first","affiliation":[{"name":"Faculty of Engineering, School of Computer Science, The University of Sydney, Sydney, NSW 2008, Australia"},{"name":"Department of Computer and Information Technology, Jazan College of Technology, Technical and Vocational Training Corporation (TVTC), Riyadh 12613, Saudi Arabia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3158-9650","authenticated-orcid":false,"given":"Vera","family":"Chung","sequence":"additional","affiliation":[{"name":"Faculty of Engineering, School of Computer Science, The University of Sydney, Sydney, NSW 2008, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5491-4981","authenticated-orcid":false,"given":"Muhammad Atif","family":"Iqbal","sequence":"additional","affiliation":[{"name":"Faculty of Engineering, School of Computer Science, The University of Sydney, Sydney, NSW 2008, Australia"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-8864-0314","authenticated-orcid":false,"given":"Ali","family":"Anaissi","sequence":"additional","affiliation":[{"name":"Faculty of Engineering, School of Computer Science, The University of Sydney, Sydney, NSW 2008, 
Australia"}]}],"member":"1968","published-online":{"date-parts":[[2025,10,15]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"599","DOI":"10.1038\/nprot.2017.149","article-title":"Exponential scaling of single-cell RNA-seq in the past decade","volume":"13","author":"Svensson","year":"2018","journal-title":"Nat. Protoc."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"987","DOI":"10.1056\/NEJMoa043330","article-title":"Radiotherapy plus Concomitant and Adjuvant Temozolomide for Glioblastoma","volume":"352","author":"Stupp","year":"2005","journal-title":"N. Engl. J. Med."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"98","DOI":"10.1016\/j.ccr.2009.12.020","article-title":"Integrated Genomic Analysis Identifies Clinically Relevant Subtypes of Glioblastoma Characterized by Abnormalities in PDGFRA, IDH1, EGFR, and NF1","volume":"17","author":"Verhaak","year":"2010","journal-title":"Cancer Cell"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"737","DOI":"10.1016\/j.ccell.2017.05.005","article-title":"Intertumoral Heterogeneity within Medulloblastoma Subgroups","volume":"31","author":"Cavalli","year":"2017","journal-title":"Cancer Cell"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Angermueller, C., P\u00e4rnamaa, T., Parts, L., and Stegle, O. (2016). Deep learning for computational biology. Mol. Syst. Biol., 12.","DOI":"10.15252\/msb.20156651"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"60","DOI":"10.1016\/j.media.2017.07.005","article-title":"A survey on deep learning in medical image analysis","volume":"42","author":"Litjens","year":"2017","journal-title":"Med. Image Anal."},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Sharma, A., Vans, E., Shigemizu, D., Boroevich, K.A., and Tsunoda, T. (2019). DeepInsight: A methodology to transform a non-image data to an image for convolution neural network architecture. Sci. 
Rep., 9.","DOI":"10.1038\/s41598-019-47765-6"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"7263","DOI":"10.1007\/s10462-022-10357-4","article-title":"Fotomics: Fourier transform-based omics imagification for deep learning-based cell-identity mapping using single-cell omics profiles","volume":"56","author":"Zandavi","year":"2023","journal-title":"Artif. Intell. Rev."},{"key":"ref_9","unstructured":"Simonyan, K., Vedaldi, A., and Zisserman, A. (2013). Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv, Version Number: 2."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"723","DOI":"10.32604\/iasc.2022.022179","article-title":"Combining CNN and Grad-Cam for COVID-19 Disease Prediction and Visual Explanation","volume":"32","author":"Moujahid","year":"2022","journal-title":"Intell. Autom. Soft Comput."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Gokhale, M., Mohanty, S.K., and Ojha, A. (2023). GeneViT: Gene Vision Transformer with Improved DeepInsight for cancer classification. Comput. Biol. Med., 155.","DOI":"10.1016\/j.compbiomed.2023.106643"},{"key":"ref_12","unstructured":"Ma, S., and Zhang, Z. (2018). OmicsMapNet: Transforming omics data to take advantage of Deep Convolutional Neural Network for discovery. arXiv, Version Number: 2."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Kobak, D., and Berens, P. (2019). The art of using t-SNE for single-cell transcriptomics. Nat. Commun., 10.","DOI":"10.1038\/s41467-019-13056-x"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"McInnes, L., Healy, J., and Melville, J. (2018). UMAP: Uniform Manifold Approximation and Projection for Dimension Reduction. arXiv.","DOI":"10.21105\/joss.00861"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Ge, S., Sun, S., Xu, H., Cheng, Q., and Ren, Z. (2025). 
Deep learning in single-cell and spatial transcriptomics data analysis: Advances and challenges from a data science perspective. Briefings Bioinform., 26.","DOI":"10.1093\/bib\/bbaf136"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Brocki, L., and Chung, N.C. (2019, January 16\u201319). Concept Saliency Maps to Visualize Relevant Features in Deep Generative Models. Proceedings of the 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA), Boca Raton, FL, USA.","DOI":"10.1109\/ICMLA.2019.00287"},{"key":"ref_17","first-page":"293","article-title":"Wilcoxon Signed Rank Based Feature Selection for Sentiment Classification","volume":"Volume 712","author":"Bhateja","year":"2018","journal-title":"Proceedings of the Second International Conference on Computational Intelligence and Informatics"},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Wilcoxon, F. (1945). Individual Comparisons by Ranking Methods. Biom. Bull., 1.","DOI":"10.2307\/3001968"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_20","unstructured":"Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., and Gelly, S. (2021). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. 
arXiv."}],"container-title":["Machine Learning and Knowledge Extraction"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2504-4990\/7\/4\/119\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,15]],"date-time":"2025-10-15T14:30:55Z","timestamp":1760538655000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2504-4990\/7\/4\/119"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,10,15]]},"references-count":20,"journal-issue":{"issue":"4","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["make7040119"],"URL":"https:\/\/doi.org\/10.3390\/make7040119","relation":{},"ISSN":["2504-4990"],"issn-type":[{"value":"2504-4990","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,10,15]]}}}