{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,21]],"date-time":"2026-04-21T06:24:10Z","timestamp":1776752650504,"version":"3.51.2"},"reference-count":39,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2026,2,16]],"date-time":"2026-02-16T00:00:00Z","timestamp":1771200000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2026,2,16]],"date-time":"2026-02-16T00:00:00Z","timestamp":1771200000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Korea Advanced Institute of Science and Technology"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Cluster Comput"],"published-print":{"date-parts":[[2026,6]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>Single-image super-resolution (SISR) has become a major focus in the field of computer vision, with significant applications in industries such as medical imaging, satellite analysis, and security surveillance. Recent developments have led to the use of deep convolutional networks and generative adversarial models, such as ESRGAN, which applies residual-dense connections to reconstruct high-resolution (HR) images from low-resolution (LR) inputs. Nevertheless, these architectures often fail to capture long-range dependencies and the most delicate textures that are essential for photo-realistic restoration. In the present work, we propose a modified ESRGAN model by integrating a Convolutional Block Attention Module (CBAM) into the Residual-in-Residual Dense Block (RRDB) structure and replacing the final dense layer with a more advanced feature recalibration module. 
This modification introduces a slight computational overhead but substantially enhances attention-driven texture refinement. Experiments conducted on the Div2K, BSD100, and Set14 datasets demonstrate that the CBAM-ESRGAN model outperforms existing state-of-the-art techniques, achieving superior PSNR, SSIM, LPIPS, and Perceptual Index scores, while also improving visual quality and reducing both inference time and model complexity. Additional experiments and their corresponding analysis further clarify the optimal placement of the CBAM module, considering the trade-off between performance and computational efficiency. The proposed model is intended for implementation as a practical alternative to existing high-quality super-resolution methods in both real-time and resource-constrained environments.<\/jats:p>","DOI":"10.1007\/s10586-026-05970-9","type":"journal-article","created":{"date-parts":[[2026,2,16]],"date-time":"2026-02-16T13:24:00Z","timestamp":1771248240000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Adaptive feature refinement for texture-preserving single image super-resolution"],"prefix":"10.1007","volume":"29","author":[{"given":"Mukhiddin","family":"Toshpulatov","sequence":"first","affiliation":[]},{"given":"Furkat","family":"Safarov","sequence":"additional","affiliation":[]},{"given":"Ugiloy","family":"Khojamuratova","sequence":"additional","affiliation":[]},{"given":"Komoliddin","family":"Misirov","sequence":"additional","affiliation":[]},{"given":"Zafar","family":"Ganiyev","sequence":"additional","affiliation":[]},{"given":"Geehyuk","family":"Lee","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2026,2,16]]},"reference":[{"key":"5970_CR1","doi-asserted-by":"crossref","unstructured":"Conde, M., Timofte, R., Lu, Z., Kong, X., Xing, X., Wang, F., Han, S., Park, M., Hao, T., He, Y., Li, R.: NTIRE 2025 challenge on raw image 
restoration and super-resolution. In Proceedings of the Computer Vision and Pattern Recognition Conference (pp. 1148\u20131171). (2025)","DOI":"10.1109\/CVPRW67362.2025.00110"},{"key":"5970_CR2","doi-asserted-by":"publisher","first-page":"125428","DOI":"10.1016\/j.eswa.2024.125428","volume":"261","author":"H Lu","year":"2025","unstructured":"Lu, H., Mei, J., Qiu, Y., Li, Y., Hao, F., Xu, J., Tang, L.: Information sparsity guided transformer for multi-modal medical image super-resolution. Expert Syst. Appl. 261, 125428 (2025)","journal-title":"Expert Syst. Appl."},{"key":"5970_CR3","doi-asserted-by":"publisher","first-page":"105277","DOI":"10.1016\/j.chemolab.2024.105277","volume":"256","author":"A Sharma","year":"2025","unstructured":"Sharma, A., Shrivastava, B.P., Tyagi, P.K., Siddiqui, E.A., Prasad, R., Gautam, S., Pranjal, P.: Enhanced satellite image resolution with a residual network and correlation filter. Chemometr. Intell. Lab. Syst. 256, 105277 (2025)","journal-title":"Chemometr. Intell. Lab. Syst."},{"key":"5970_CR4","doi-asserted-by":"publisher","first-page":"121609","DOI":"10.1016\/j.ins.2024.121609","volume":"691","author":"P Dong","year":"2025","unstructured":"Dong, P., Li, S., Gong, X., Zhang, L.: HVASR: Enhancing 360-degree video delivery with viewport-aware super resolution. Inf. Sci. 691, 121609 (2025)","journal-title":"Inf. Sci."},{"key":"5970_CR5","doi-asserted-by":"publisher","first-page":"112778","DOI":"10.1016\/j.knosys.2024.112778","volume":"309","author":"Y Yang","year":"2025","unstructured":"Yang, Y., Ren, X., Ke, L.: FedSR: Federated learning for image Super-Resolution via detail-assisted contrastive learning. Knowl. Based Syst. 309, 112778 (2025)","journal-title":"Knowl. 
Based Syst."},{"key":"5970_CR6","doi-asserted-by":"publisher","first-page":"100935","DOI":"10.1016\/j.mser.2025.100935","volume":"163","author":"J Sun","year":"2025","unstructured":"Sun, J., Chronopoulos, D.: Super-resolution imaging with elastic waves: A review of superlenses, hyperlenses, and Metalenses. Mater. Sci. Engineering: R: Rep. 163, 100935 (2025)","journal-title":"Mater. Sci. Engineering: R: Rep."},{"key":"5970_CR7","unstructured":"Ai, H., Cao, Z., Wang, L.: A survey of representation learning, optimization strategies, and applications for omnidirectional vision. Int. J. Comput. Vision, pp.1\u201340. (2025)"},{"key":"5970_CR8","doi-asserted-by":"publisher","first-page":"101812","DOI":"10.1016\/j.inffus.2023.101812","volume":"97","author":"J He","year":"2023","unstructured":"He, J., Yuan, Q., Li, J., Xiao, Y., Liu, D., Shen, H., Zhang, L.: Spectral super-resolution Meets deep learning: Achievements and challenges. Inform. Fusion. 97, 101812 (2023)","journal-title":"Inform. Fusion"},{"key":"5970_CR9","doi-asserted-by":"crossref","unstructured":"Lim, B., Son, S., Kim, H., Nah, S., Mu Lee, K.: Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops (pp. 136\u2013144). (2017)","DOI":"10.1109\/CVPRW.2017.151"},{"key":"5970_CR10","doi-asserted-by":"crossref","unstructured":"Ledig, C., Theis, L., Husz\u00e1r, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., Wang, Z., Shi, W.: Photo-realistic single image super-resolution using a generative adversarial network. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 4681\u20134690). (2017)","DOI":"10.1109\/CVPR.2017.19"},{"key":"5970_CR11","doi-asserted-by":"crossref","unstructured":"Ma, C.: March. Uncertainty-aware GAN for single image super resolution. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 5, pp. 
4071\u20134079). (2024)","DOI":"10.1609\/aaai.v38i5.28201"},{"key":"5970_CR12","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Tian, Y., Kong, Y., Zhong, B., Fu, Y.: Residual dense network for image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2472\u20132481). (2018)","DOI":"10.1109\/CVPR.2018.00262"},{"key":"5970_CR13","doi-asserted-by":"crossref","unstructured":"Altinkaya, E., Barakli, B.: Enhancing face image quality: Strategic patch selection with deep reinforcement learning and Super-Resolution boost via RRDB. IEEE Access. (2024)","DOI":"10.1109\/ACCESS.2024.3450571"},{"key":"5970_CR14","doi-asserted-by":"crossref","unstructured":"Huang, S., Deng, W., Li, G., Yang, Y., Wang, J.: RTEN-SR: A reference-based texture enhancement network for single image super-resolution. Displays, 83, p.102684. (2024)","DOI":"10.1016\/j.displa.2024.102684"},{"key":"5970_CR15","doi-asserted-by":"crossref","unstructured":"Woo, S., Park, J., Lee, J.Y., Kweon, I.S.: Cbam: Convolutional block attention module. In Proceedings of the European conference on computer vision (ECCV) (pp. 3\u201319). (2018)","DOI":"10.1007\/978-3-030-01234-2_1"},{"issue":"10","key":"5970_CR16","doi-asserted-by":"publisher","first-page":"29741","DOI":"10.1007\/s11042-023-16786-9","volume":"83","author":"M Dixit","year":"2024","unstructured":"Dixit, M., Yadav, R.N.: A review of single image super resolution techniques using convolutional neural networks. Multimedia Tools Appl. 83(10), 29741\u201329775 (2024)","journal-title":"Multimedia Tools Appl."},{"issue":"2","key":"5970_CR17","doi-asserted-by":"publisher","first-page":"295","DOI":"10.1109\/TPAMI.2015.2439281","volume":"38","author":"C Dong","year":"2015","unstructured":"Dong, C., Loy, C.C., He, K., Tang, X.: Image super-resolution using deep convolutional networks. IEEE Trans. Pattern Anal. Mach. Intell. 38(2), 295\u2013307 (2015)","journal-title":"IEEE Trans. Pattern Anal. Mach. 
Intell."},{"key":"5970_CR18","doi-asserted-by":"publisher","first-page":"164","DOI":"10.1016\/j.procs.2022.12.412","volume":"218","author":"A Lembhe","year":"2023","unstructured":"Lembhe, A., Motarwar, P., Patil, R., Elias, S.: Enhancement in skin cancer detection using image super resolution and convolutional neural network. Procedia Comput. Sci. 218, 164\u2013173 (2023)","journal-title":"Procedia Comput. Sci."},{"key":"5970_CR19","doi-asserted-by":"crossref","unstructured":"Kim, J., Lee, J.K., Lee, K.M.: Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1646\u20131654). (2016)","DOI":"10.1109\/CVPR.2016.182"},{"key":"5970_CR20","doi-asserted-by":"crossref","unstructured":"Kim, J., Lee, J.K., Lee, K.M.: Deeply-recursive convolutional network for image super-resolution. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1637\u20131645). (2016)","DOI":"10.1109\/CVPR.2016.181"},{"key":"5970_CR21","doi-asserted-by":"publisher","first-page":"105519","DOI":"10.1016\/j.imavis.2025.105519","volume":"158","author":"M Gao","year":"2025","unstructured":"Gao, M., Sun, J., Li, Q., Khan, M.A., Shang, J., Zhu, X., Jeon, G.: Towards trustworthy image super-resolution via symmetrical and recursive artificial neural network. Image Vis. Comput. 158, 105519 (2025)","journal-title":"Image Vis. Comput."},{"key":"5970_CR22","doi-asserted-by":"publisher","first-page":"p113241","DOI":"10.1016\/j.knosys.2025.113241","volume":"315","author":"Z Chen","year":"2025","unstructured":"Chen, Z., Zhang, L., Zhang, X.: Edge fusion diffusion for single image Super-Resolution. Knowl. Based Syst. 315, p113241 (2025)","journal-title":"Knowl. Based Syst."},{"key":"5970_CR23","doi-asserted-by":"crossref","unstructured":"Tong, T., Li, G., Liu, X., Gao, Q.: Image super-resolution using dense skip connections. 
In Proceedings of the IEEE international conference on computer vision (pp. 4799\u20134807). (2017)","DOI":"10.1109\/ICCV.2017.514"},{"key":"5970_CR24","doi-asserted-by":"crossref","unstructured":"Wang, X., Yu, K., Wu, S., Gu, J., Liu, Y., Dong, C., Qiao, Y., Loy, C.C.: Esrgan: Enhanced super-resolution generative adversarial networks. In Proceedings of the European conference on computer vision (ECCV) workshops (pp. 0\u20130). (2018)","DOI":"10.1007\/978-3-030-11021-5_5"},{"key":"5970_CR25","doi-asserted-by":"crossref","unstructured":"Chen, H., Li, H., Yao, C., Liu, G., Wang, Z.: Image super-resolution based on improved ESRGAN and its application in camera calibration. Measurement, 242, p.115899. (2025)","DOI":"10.1016\/j.measurement.2024.115899"},{"key":"5970_CR26","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Li, K., Li, K., Wang, L., Zhong, B., Fu, Y.: Image super-resolution using very deep residual channel attention networks. In Proceedings of the European conference on computer vision (ECCV) (pp. 286\u2013301). (2018)","DOI":"10.1007\/978-3-030-01234-2_18"},{"key":"5970_CR27","doi-asserted-by":"crossref","unstructured":"Dai, T., Cai, J., Zhang, Y., Xia, S.T., Zhang, L.: Second-order attention network for single image super-resolution. In Proceedings of the IEEE\/CVF conference on computer vision and pattern recognition (pp. 11065\u201311074). (2019)","DOI":"10.1109\/CVPR.2019.01132"},{"issue":"2","key":"5970_CR28","doi-asserted-by":"publisher","first-page":"162","DOI":"10.1007\/s40747-024-01760-1","volume":"11","author":"J Talreja","year":"2025","unstructured":"Talreja, J., Aramvith, S., Onoye, T.: XTNSR: Xception-based transformer network for single image super resolution. Complex. Intell. Syst. 11(2), 162 (2025)","journal-title":"Complex. Intell. 
Syst."},{"key":"5970_CR29","doi-asserted-by":"publisher","first-page":"112569","DOI":"10.1016\/j.asoc.2024.112569","volume":"169","author":"X Song","year":"2025","unstructured":"Song, X., Pang, X., Zhang, L., Lu, X., Hei, X.: Single image super-resolution with lightweight multi-scale dilated attention network. Appl. Soft Comput. 169, 112569 (2025)","journal-title":"Appl. Soft Comput."},{"key":"5970_CR30","doi-asserted-by":"crossref","unstructured":"Liang, J., Cao, J., Sun, G., Zhang, K., Van Gool, L., Timofte, R.: Swinir: Image restoration using swin transformer. In Proceedings of the IEEE\/CVF international conference on computer vision (pp. 1833\u20131844). (2021)","DOI":"10.1109\/ICCVW54120.2021.00210"},{"key":"5970_CR31","doi-asserted-by":"crossref","unstructured":"Zhang, J., Tu, Y.: SwinFR: Combining SwinIR and Fast Fourier for Super-Resolution Reconstruction of Remote Sensing Images. Digital Signal Processing, p.105026. (2025)","DOI":"10.1016\/j.dsp.2025.105026"},{"key":"5970_CR32","doi-asserted-by":"crossref","unstructured":"Zhang, X., Zhang, Y., Yu, F.: September. HiT-SR: Hierarchical transformer for efficient image super-resolution. In European Conference on Computer Vision (pp. 483\u2013500). Cham: Springer Nature Switzerland. (2024)","DOI":"10.1007\/978-3-031-73661-2_27"},{"key":"5970_CR33","doi-asserted-by":"crossref","unstructured":"Wang, J., Fan, Q., Chen, J., Gu, H., Huang, F., Ren, W.: April. RAP-SR: RestorAtion Prior Enhancement in Diffusion Models for Realistic Image Super-Resolution. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 39, No. 7, pp. 7727\u20137735). (2025)","DOI":"10.1609\/aaai.v39i7.32832"},{"key":"5970_CR34","doi-asserted-by":"crossref","unstructured":"Jeevan, P., Srinidhi, A., Prathiba, P., Sethi, A.: Wavemixsr: Resource-efficient neural network for image super-resolution. In Proceedings of the IEEE\/CVF winter conference on applications of computer vision (pp. 5884\u20135892). 
(2024)","DOI":"10.1109\/WACV57701.2024.00578"},{"key":"5970_CR35","doi-asserted-by":"crossref","unstructured":"Luo, X., Ai, Z., Liang, Q., Xie, Y., Shi, Z., Fan, J., Qu, Y.: EdgeFormer: Edge-aware Efficient Transformer for Image Super-resolution. IEEE Transactions on Instrumentation and Measurement. (2024)","DOI":"10.1109\/TIM.2024.3436070"},{"key":"5970_CR36","first-page":"13294","volume":"36","author":"Z Yue","year":"2023","unstructured":"Yue, Z., Wang, J., Loy, C.C.: Resshift: Efficient diffusion model for image super-resolution by residual shifting. Adv. Neural. Inf. Process. Syst. 36, 13294\u201313307 (2023)","journal-title":"Adv. Neural. Inf. Process. Syst."},{"key":"5970_CR37","unstructured":"Safarov, F., Mukhiddin, T., Komoliddin, M., Abdusalomov, A., Lee, W.: Hyperspectral Anomaly Detection with Enhanced Spectral Graph Transformer Network. IEEE Access (2025)"},{"key":"5970_CR38","doi-asserted-by":"publisher","unstructured":"Safarov, F., Muksimova, Sh., Misirov, K., Cho, YI.: Fire and Smoke Detection in Complex Environments. 
Fire (2024), 7(11), 389; https:\/\/doi.org\/10.3390\/fire7110389","DOI":"10.3390\/fire7110389"},{"key":"5970_CR39","doi-asserted-by":"publisher","unstructured":"Safarov, F., Khojamuratova, U., Misirov, K., Xusinov, II., Cho, YI.: A Multimodal Deep Learning Framework for Accurate Biomass and Carbon Sequestration Estimation from UAV Imagery. Drones (2025), 9(7), 496; https:\/\/doi.org\/10.3390\/drones9070496","DOI":"10.3390\/drones9070496"}],"container-title":["Cluster Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10586-026-05970-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s10586-026-05970-9","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s10586-026-05970-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2026,2,16]],"date-time":"2026-02-16T13:24:07Z","timestamp":1771248247000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s10586-026-05970-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2026,2,16]]},"references-count":39,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2026,6]]}},"alternative-id":["5970"],"URL":"https:\/\/doi.org\/10.1007\/s10586-026-05970-9","relation":{},"ISSN":["1386-7857","1573-7543"],"issn-type":[{"value":"1386-7857","type":"print"},{"value":"1573-7543","type":"electronic"}],"subject":[],"published":{"date-parts":[[2026,2,16]]},"assertion":[{"value":"24 June 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"10 October 2025","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 
January 2026","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"16 February 2026","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}},{"value":"Not applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics"}}],"article-number":"152"}}