{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,5]],"date-time":"2026-03-05T01:32:01Z","timestamp":1772674321919,"version":"3.50.1"},"reference-count":51,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2023,5,24]],"date-time":"2023-05-24T00:00:00Z","timestamp":1684886400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61866009"],"award-info":[{"award-number":["61866009"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["62172120"],"award-info":[{"award-number":["62172120"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["82272075"],"award-info":[{"award-number":["82272075"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["2019GXNSFFA245014"],"award-info":[{"award-number":["2019GXNSFFA245014"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["AB21220037"],"award-info":[{"award-number":["AB21220037"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of 
China","doi-asserted-by":"publisher","award":["YCBZ2022112"],"award-info":[{"award-number":["YCBZ2022112"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Guangxi Science Fund for Distinguished Young Scholars","award":["61866009"],"award-info":[{"award-number":["61866009"]}]},{"name":"Guangxi Science Fund for Distinguished Young Scholars","award":["62172120"],"award-info":[{"award-number":["62172120"]}]},{"name":"Guangxi Science Fund for Distinguished Young Scholars","award":["82272075"],"award-info":[{"award-number":["82272075"]}]},{"name":"Guangxi Science Fund for Distinguished Young Scholars","award":["2019GXNSFFA245014"],"award-info":[{"award-number":["2019GXNSFFA245014"]}]},{"name":"Guangxi Science Fund for Distinguished Young Scholars","award":["AB21220037"],"award-info":[{"award-number":["AB21220037"]}]},{"name":"Guangxi Science Fund for Distinguished Young Scholars","award":["YCBZ2022112"],"award-info":[{"award-number":["YCBZ2022112"]}]},{"DOI":"10.13039\/501100017691","name":"Guangxi Key Research and Development Program","doi-asserted-by":"publisher","award":["61866009"],"award-info":[{"award-number":["61866009"]}],"id":[{"id":"10.13039\/501100017691","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100017691","name":"Guangxi Key Research and Development Program","doi-asserted-by":"publisher","award":["62172120"],"award-info":[{"award-number":["62172120"]}],"id":[{"id":"10.13039\/501100017691","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100017691","name":"Guangxi Key Research and Development Program","doi-asserted-by":"publisher","award":["82272075"],"award-info":[{"award-number":["82272075"]}],"id":[{"id":"10.13039\/501100017691","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100017691","name":"Guangxi Key Research and Development 
Program","doi-asserted-by":"publisher","award":["2019GXNSFFA245014"],"award-info":[{"award-number":["2019GXNSFFA245014"]}],"id":[{"id":"10.13039\/501100017691","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100017691","name":"Guangxi Key Research and Development Program","doi-asserted-by":"publisher","award":["AB21220037"],"award-info":[{"award-number":["AB21220037"]}],"id":[{"id":"10.13039\/501100017691","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100017691","name":"Guangxi Key Research and Development Program","doi-asserted-by":"publisher","award":["YCBZ2022112"],"award-info":[{"award-number":["YCBZ2022112"]}],"id":[{"id":"10.13039\/501100017691","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Innovation Project of Guangxi Graduate Education","award":["61866009"],"award-info":[{"award-number":["61866009"]}]},{"name":"Innovation Project of Guangxi Graduate Education","award":["62172120"],"award-info":[{"award-number":["62172120"]}]},{"name":"Innovation Project of Guangxi Graduate Education","award":["82272075"],"award-info":[{"award-number":["82272075"]}]},{"name":"Innovation Project of Guangxi Graduate Education","award":["2019GXNSFFA245014"],"award-info":[{"award-number":["2019GXNSFFA245014"]}]},{"name":"Innovation Project of Guangxi Graduate Education","award":["AB21220037"],"award-info":[{"award-number":["AB21220037"]}]},{"name":"Innovation Project of Guangxi Graduate Education","award":["YCBZ2022112"],"award-info":[{"award-number":["YCBZ2022112"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>The quality of videos varies due to the different capabilities of sensors. Video super-resolution (VSR) is a technology that improves the quality of captured video. However, the development of a VSR model is very costly. In this paper, we present a novel approach for adapting single-image super-resolution (SISR) models to the VSR task. 
To achieve this, we first summarize a common architecture of SISR models and perform a formal analysis of adaptation. Then, we propose an adaptation method that incorporates a plug-and-play temporal feature extraction module into existing SISR models. The proposed temporal feature extraction module consists of three submodules: offset estimation, spatial aggregation, and temporal aggregation. In the spatial aggregation submodule, the features obtained from the SISR model are aligned to the center frame based on the offset estimation results. The aligned features are fused in the temporal aggregation submodule. Finally, the fused temporal feature is fed to the SISR model for reconstruction. To evaluate the effectiveness of our method, we adapt five representative SISR models and evaluate them on two popular benchmarks. The experimental results show that the proposed method is effective on different SISR models. In particular, on the Vid4 benchmark, the VSR-adapted models achieve improvements of at least 1.26 dB and 0.067 over the original SISR models in terms of the PSNR and SSIM metrics, respectively. 
Additionally, these VSR-adapted models achieve better performance than the state-of-the-art VSR models.<\/jats:p>","DOI":"10.3390\/s23115030","type":"journal-article","created":{"date-parts":[[2023,5,25]],"date-time":"2023-05-25T02:30:06Z","timestamp":1684981806000},"page":"5030","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":3,"title":["Adapting Single-Image Super-Resolution Models to Video Super-Resolution: A Plug-and-Play Approach"],"prefix":"10.3390","volume":"23","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5779-9808","authenticated-orcid":false,"given":"Wenhao","family":"Wang","sequence":"first","affiliation":[{"name":"School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China"}]},{"given":"Zhenbing","family":"Liu","sequence":"additional","affiliation":[{"name":"School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China"}]},{"given":"Haoxiang","family":"Lu","sequence":"additional","affiliation":[{"name":"School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China"}]},{"given":"Rushi","family":"Lan","sequence":"additional","affiliation":[{"name":"School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China"}]},{"given":"Yingxin","family":"Huang","sequence":"additional","affiliation":[{"name":"School of Computer Science and Information Security, Guilin University of Electronic Technology, Guilin 541004, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,5,24]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Yang, C., Huang, Z., and Wang, N. (2022, January 18\u201324). QueryDet: Cascaded Sparse Query for Accelerating High-Resolution Small Object Detection. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA.","DOI":"10.1109\/CVPR52688.2022.01330"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Shermeyer, J., and Etten, A.V. (2019, January 16\u201320). The Effects of Super-Resolution on Object Detection Performance in Satellite Imagery. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2019, Computer Vision Foundation\/IEEE, Long Beach, CA, USA.","DOI":"10.1109\/CVPRW.2019.00184"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Dong, H., Xie, K., Xie, A., Wen, C., He, J., Zhang, W., Yi, D., and Yang, S. (2023). Detection of Occluded Small Commodities Based on Feature Enhancement under Super-Resolution. Sensors, 23.","DOI":"10.3390\/s23052439"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Yuan, X., Fu, D., and Han, S. (2023). LRF-SRNet: Large-Scale Super-Resolution Network for Estimating Aircraft Pose on the Airport Surface. Sensors, 23.","DOI":"10.3390\/s23031248"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"583","DOI":"10.1038\/s41586-021-03819-2","article-title":"Highly accurate protein structure prediction with AlphaFold","volume":"596","author":"Jumper","year":"2021","journal-title":"Nature"},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"640","DOI":"10.1007\/978-3-031-19815-1_37","article-title":"XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model","volume":"Volume 13688","author":"Avidan","year":"2022","journal-title":"Proceedings of the Computer Vision-ECCV 2022\u201417th European Conference"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"103776","DOI":"10.1016\/j.jvcir.2023.103776","article-title":"FFTI: Image inpainting algorithm via features fusion and two-steps inpainting","volume":"91","author":"Chen","year":"2023","journal-title":"J. Vis. Commun. 
Image Represent."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"101772","DOI":"10.1016\/j.surfin.2022.101772","article-title":"Molecular beam epitaxy growth of high mobility InN film for high-performance broadband heterointerface photodetectors","volume":"29","author":"Imran","year":"2022","journal-title":"Surf. Interfaces"},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"5981","DOI":"10.1007\/s10462-022-10147-y","article-title":"Video super-resolution based on deep learning: A comprehensive survey","volume":"55","author":"Liu","year":"2022","journal-title":"Artif. Intell. Rev."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Haris, M., Shakhnarovich, G., and Ukita, N. (2019, January 16\u201320). Recurrent Back-Projection Network for Video Super-Resolution. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2019, Computer Vision Foundation\/IEEE, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00402"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Haris, M., Shakhnarovich, G., and Ukita, N. (2018, January 18\u201322). Deep Back-Projection Networks for Super-Resolution. Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Computer Vision Foundation\/IEEE Computer Society, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00179"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Tian, Y., Zhang, Y., Fu, Y., and Xu, C. (2020, January 13\u201319). TDAN: Temporally-Deformable Alignment Network for Video Super-Resolution. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Computer Vision Foundation\/IEEE, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00342"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Lim, B., Son, S., Kim, H., Nah, S., and Lee, K.M. (2017, January 21\u201326). Enhanced Deep Residual Networks for Single Image Super-Resolution. 
Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2017, IEEE Computer Society, Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.151"},{"key":"ref_14","first-page":"378","article-title":"Recurrent Video Restoration Transformer with Guided Deformable Attention","volume":"35","author":"Liang","year":"2022","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Liang, J., Cao, J., Sun, G., Zhang, K., Gool, L.V., and Timofte, R. (2021, January 11\u201317). SwinIR: Image Restoration Using Swin Transformer. Proceedings of the IEEE\/CVF International Conference on Computer Vision Workshops, ICCVW 2021, Montreal, BC, Canada.","DOI":"10.1109\/ICCVW54120.2021.00210"},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"1106","DOI":"10.1007\/s11263-018-01144-2","article-title":"Video Enhancement with Task-Oriented Flow","volume":"127","author":"Xue","year":"2019","journal-title":"Int. J. Comput. Vis."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"4323","DOI":"10.1109\/TIP.2020.2967596","article-title":"Deep Video Super-Resolution Using HR Optical Flow Estimation","volume":"29","author":"Wang","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_18","first-page":"591","article-title":"Sliding Window Recurrent Network for Efficient Video Super-Resolution","volume":"Volume 13802","author":"Karlinsky","year":"2022","journal-title":"Proceedings of the Computer Vision-ECCV 2022 Workshops"},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Ledig, C., Theis, L., Huszar, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A.P., Tejani, A., Totz, J., and Wang, Z. (2017, January 21\u201326). Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network. 
Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, IEEE Computer Society, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.19"},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"294","DOI":"10.1007\/978-3-030-01234-2_18","article-title":"Image Super-Resolution Using Very Deep Residual Channel Attention Networks","volume":"Volume 11211","author":"Ferrari","year":"2018","journal-title":"Proceedings of the Computer Vision-ECCV 2018\u201415th European Conference"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Tian, Y., Kong, Y., Zhong, B., and Fu, Y. (2018, January 18\u201322). Residual Dense Network for Image Super-Resolution. Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Computer Vision Foundation\/IEEE Computer Society, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00262"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Hor\u00e9, A., and Ziou, D. (2010, January 23\u201326). Image Quality Metrics: PSNR vs. SSIM. Proceedings of the 2010 20th International Conference on Pattern Recognition, Istanbul, Turkey.","DOI":"10.1109\/ICPR.2010.579"},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Liu, Y., Chu, Z., and Li, B. (2022). A Local and Non-Local Features Based Feedback Network on Super-Resolution. Sensors, 22.","DOI":"10.3390\/s22249604"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Chen, Y., Xia, R., Yang, K., and Zou, K. (2023). MFFN: Image super-resolution via multi-level features fusion network. Vis. 
Comput., 1\u201316.","DOI":"10.1007\/s00371-023-02795-0"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Shi, W., Caballero, J., Huszar, F., Totz, J., Aitken, A.P., Bishop, R., Rueckert, D., and Wang, Z. (2016, January 27\u201330). Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, IEEE Computer Society, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.207"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"1443","DOI":"10.1109\/TCYB.2020.2970104","article-title":"MADNet: A Fast and Lightweight Network for Single-Image Super Resolution","volume":"51","author":"Lan","year":"2021","journal-title":"IEEE Trans. Cybern."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"115","DOI":"10.1109\/TCYB.2019.2952710","article-title":"Cascading and Enhanced Residual Networks for Accurate Single-Image Super-Resolution","volume":"51","author":"Lan","year":"2021","journal-title":"IEEE Trans. Cybern."},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"1271","DOI":"10.1109\/JAS.2021.1004009","article-title":"Lightweight Image Super-Resolution via Weighted Multi-Scale Residual Network","volume":"8","author":"Sun","year":"2021","journal-title":"IEEE\/CAA J. Autom. Sin."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"295","DOI":"10.1109\/TPAMI.2015.2439281","article-title":"Image Super-Resolution Using Deep Convolutional Networks","volume":"38","author":"Dong","year":"2016","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep Residual Learning for Image Recognition. 
Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, IEEE Computer Society, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_32","first-page":"5998","article-title":"Attention is All you Need","volume":"30","author":"Vaswani","year":"2017","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_33","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021, January 10\u201317). Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. Proceedings of the 2021 IEEE\/CVF International Conference on Computer Vision, ICCV 2021, IEEE, Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Sajjadi, M.S.M., Vemulapalli, R., and Brown, M. (2018, January 18\u201322). Frame-Recurrent Video Super-Resolution. Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Computer Vision Foundation\/IEEE Computer Society, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00693"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Sajjadi, M.S.M., Sch\u00f6lkopf, B., and Hirsch, M. (2017, January 22\u201329). EnhanceNet: Single Image Super-Resolution Through Automated Texture Synthesis. Proceedings of the IEEE International Conference on Computer Vision, ICCV 2017, IEEE Computer Society, Venice, Italy.","DOI":"10.1109\/ICCV.2017.481"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Wang, X., Chan, K.C.K., Yu, K., Dong, C., and Loy, C.C. (2019, January 16\u201320). EDVR: Video Restoration With Enhanced Deformable Convolutional Networks. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, CVPR Workshops 2019, Computer Vision Foundation\/IEEE, Long Beach, CA, USA.","DOI":"10.1109\/CVPRW.2019.00247"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Choi, Y.J., Lee, Y., and Kim, B. (2021, January 10\u201315). Wavelet Attention Embedding Networks for Video Super-Resolution. Proceedings of the 25th International Conference on Pattern Recognition, ICPR 2020, Milan, Italy.","DOI":"10.1109\/ICPR48806.2021.9412623"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"2803","DOI":"10.1007\/s11063-021-10593-9","article-title":"Video Super-Resolution with Frame-Wise Dynamic Fusion and Self-Calibrated Deformable Alignment","volume":"54","author":"Xu","year":"2022","journal-title":"Neural Process. Lett."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Cao, Y., Wang, C., Song, C., Tang, Y., and Li, H. (2021, January 7\u20139). Real-Time Super-Resolution System of 4K-Video Based on Deep Learning. Proceedings of the 32nd IEEE International Conference on Application-specific Systems, Architectures and Processors, ASAP 2021, Virtual.","DOI":"10.1109\/ASAP52443.2021.00019"},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Jo, Y., Oh, S.W., Kang, J., and Kim, S.J. (2018, January 18\u201322). Deep Video Super-Resolution Network Using Dynamic Upsampling Filters Without Explicit Motion Compensation. Proceedings of the 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Computer Vision Foundation\/IEEE Computer Society, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00340"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Kim, S.Y., Lim, J., Na, T., and Kim, M. (2019, January 22\u201325). Video Super-Resolution Based on 3D-CNNS with Consideration of Scene Change. 
Proceedings of the 2019 IEEE International Conference on Image Processing, ICIP 2019, Taipei, Taiwan.","DOI":"10.1109\/ICIP.2019.8803297"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"Isobe, T., Li, S., Jia, X., Yuan, S., Slabaugh, G.G., Xu, C., Li, Y., Wang, S., and Tian, Q. (2020, January 13\u201319). Video Super-Resolution With Temporal Group Attention. Proceedings of the 2020 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Computer Vision Foundation\/IEEE, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00803"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Chan, K.C.K., Wang, X., Yu, K., Dong, C., and Loy, C.C. (2021, January 19\u201325). BasicVSR: The Search for Essential Components in Video Super-Resolution and Beyond. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, Computer Vision Foundation\/IEEE, Virtual.","DOI":"10.1109\/CVPR46437.2021.00491"},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"106049","DOI":"10.1109\/ACCESS.2021.3098326","article-title":"Efficient Video Super-Resolution via Hierarchical Temporal Residual Networks","volume":"9","author":"Liu","year":"2021","journal-title":"IEEE Access"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Lee, Y., Cho, S., and Jun, D. (2022). Video Super-Resolution Method Using Deformable Convolution-Based Alignment Network. Sensors, 22.","DOI":"10.3390\/s22218476"},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"60:1","DOI":"10.1145\/3390462","article-title":"A Deep Journey into Super-resolution: A Survey","volume":"53","author":"Anwar","year":"2021","journal-title":"ACM Comput. Surv."},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"1500","DOI":"10.1109\/LSP.2020.3013518","article-title":"Deformable 3D Convolution for Video Super-Resolution","volume":"27","author":"Ying","year":"2020","journal-title":"IEEE Signal Process. 
Lett."},{"key":"ref_48","doi-asserted-by":"crossref","first-page":"346","DOI":"10.1109\/TPAMI.2013.127","article-title":"On Bayesian Adaptive Video Super Resolution","volume":"36","author":"Liu","year":"2014","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_49","unstructured":"Bengio, Y., and LeCun, Y. (2015, January 7\u20139). Adam: A Method for Stochastic Optimization. Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA. Conference Track Proceedings."},{"key":"ref_50","doi-asserted-by":"crossref","first-page":"1761","DOI":"10.1109\/TIP.2022.3146625","article-title":"Video Super-Resolution via a Spatio-Temporal Alignment Network","volume":"31","author":"Wen","year":"2022","journal-title":"IEEE Trans. Image Process."},{"key":"ref_51","doi-asserted-by":"crossref","first-page":"107619","DOI":"10.1016\/j.patcog.2020.107619","article-title":"Video super-resolution based on a spatio-temporal matching network","volume":"110","author":"Zhu","year":"2021","journal-title":"Pattern Recognit."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/11\/5030\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T19:41:03Z","timestamp":1760125263000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/23\/11\/5030"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,5,24]]},"references-count":51,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2023,6]]}},"alternative-id":["s23115030"],"URL":"https:\/\/doi.org\/10.3390\/s23115030","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,5,24]]}}}