{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,26]],"date-time":"2026-02-26T15:10:50Z","timestamp":1772118650788,"version":"3.50.1"},"reference-count":14,"publisher":"Wiley","issue":"5","license":[{"start":{"date-parts":[[2025,7,26]],"date-time":"2025-07-26T00:00:00Z","timestamp":1753488000000},"content-version":"vor","delay-in-days":0,"URL":"http:\/\/onlinelibrary.wiley.com\/termsAndConditions#vor"}],"content-domain":{"domain":["onlinelibrary.wiley.com"],"crossmark-restriction":true},"short-container-title":["Internet Technology Letters"],"published-print":{"date-parts":[[2025,9]]},"abstract":"<jats:title>ABSTRACT<\/jats:title>\n                  <jats:p>In the modern era, video stabilization is one of the essential advancement features of digital video processing equipped with 5G technology. Also, this technology leverages the intelligent software innovations to deliver high quality and smooth video recording experiences. Despite advancement in machine learning (ML) algorithms for video stabilization, there are numerous challenges, especially when applying 5G technologies like stable and unstable videos for training performance. Consequently, video stabilization includes complex analyses such as frame interpolation and motion assessment. Moreover, the advanced stabilization modes are developed to analyze the motion data. Nevertheless, they decrease or fail to calculate the features and provide poor results. To overcome these issues, an adaptive video stabilization methodology is proposed. In the proposed method, a novel Convolution Neural with StabNet based Hawks Optimization (CNSbHO) algorithm is introduced. In this research, hand\u2010held video clips generally suffer from unwanted video jitters due to unbalanced camera motion. Therefore, 5G ultra\u2010low latency with respect to drone footage video feeds is taken as the stabilization process. 
First, a Gaussian pre\u2010processing filter is applied to enhance frame consistency and quality. A Convolutional Neural Network (CNN) is then used to extract features, and motion estimation is performed in the same stage via CNN feature\u2010point tracking. An end\u2010to\u2010end stabilization strategy based on the StabNet model produces the stabilized frame outputs, and the Harris Hawks Optimization (HHO) algorithm is applied to enhance overall accuracy. The developed CNSbHO strategy was implemented in Python and validated on 5G traffic datasets. To validate its effectiveness, we compared it against traditional algorithms in terms of learned perceptual image patch similarity (LPIPS), structural similarity index (SSIM), accuracy, and peak signal\u2010to\u2010noise ratio (PSNR). The comparative assessment confirms that the proposed method outperforms conventional stabilization techniques, making it a reliable solution for real\u2010time video processing tasks.<\/jats:p>","DOI":"10.1002\/itl2.70089","type":"journal-article","created":{"date-parts":[[2025,7,26]],"date-time":"2025-07-26T12:39:45Z","timestamp":1753533585000},"update-policy":"https:\/\/doi.org\/10.1002\/crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["An Intelligent Hybrid Machine Learning With Meta\u2010Heuristic Optimization Algorithms for Enhancing Real\u2010Time Video Stabilization"],"prefix":"10.1002","volume":"8","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-1749-556X","authenticated-orcid":false,"given":"S.","family":"Afsal","sequence":"first","affiliation":[{"name":"Noorul Islam Center for Higher Education  Kumaracoil Tamilnadu India"}]},{"given":"J. 
Arul","family":"Linsely","sequence":"additional","affiliation":[{"name":"Noorul Islam Center for Higher Education  Kumaracoil Tamilnadu India"}]}],"member":"311","published-online":{"date-parts":[[2025,7,26]]},"reference":[{"key":"e_1_2_9_2_1","doi-asserted-by":"publisher","DOI":"10.3390\/electronics14030497"},{"key":"e_1_2_9_3_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cag.2024.104154"},{"key":"e_1_2_9_4_1","unstructured":"J.Bai M.Xia X.Fu et al. \u201cReCamMaster: Camera\u2010Controlled Generative Rendering From a Single Video \u201d preprint arXiv:2503.11647 2025."},{"key":"e_1_2_9_5_1","unstructured":"Y.He S.Li J.Wang et al. \u201cEnhancing Low\u2010Cost Video Editing With Lightweight Adaptors and Temporal\u2010Aware Inversion \u201d preprint arXiv:2501.04606 2025."},{"key":"e_1_2_9_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2025.3531763"},{"key":"e_1_2_9_7_1","unstructured":"Y.Zhou J.Bu P.Ling et al. \u201cLight\u2010A\u2010Video: Training\u2010Free Video Relighting via Progressive Light Fusion \u201d preprint arXiv:2502.08590 2025."},{"key":"e_1_2_9_8_1","unstructured":"X.Li Y.Liu S.Cao et al. 
\u201cDiffVSR: Enhancing Real\u2010World Video Super\u2010Resolution With Diffusion Models for Advanced Visual Quality and Temporal Consistency \u201d preprint arXiv:2501.10110 2025."},{"key":"e_1_2_9_9_1","doi-asserted-by":"publisher","DOI":"10.1002\/itl2.556"},{"key":"e_1_2_9_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2024.3493175"},{"key":"e_1_2_9_11_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11263\u2010024\u201002264\u20108"},{"key":"e_1_2_9_12_1","doi-asserted-by":"publisher","DOI":"10.1007\/s11042\u2010023\u201016607\u2010z"},{"key":"e_1_2_9_13_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00530\u2010025\u201001740\u20106"},{"key":"e_1_2_9_14_1","unstructured":"https:\/\/www.kaggle.com\/datasets\/kimdaegyeom\/5g\u2010traffic\u2010datasets."},{"key":"e_1_2_9_15_1","volume-title":"Conference on Graphics, Patterns and Images (SIBGRAPI)","author":"Roberto M.","year":"2024"}],"container-title":["Internet Technology Letters"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/pdf\/10.1002\/itl2.70089","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,21]],"date-time":"2025-10-21T18:26:05Z","timestamp":1761071165000},"score":1,"resource":{"primary":{"URL":"https:\/\/onlinelibrary.wiley.com\/doi\/10.1002\/itl2.70089"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,7,26]]},"references-count":14,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2025,9]]}},"alternative-id":["10.1002\/itl2.70089"],"URL":"https:\/\/doi.org\/10.1002\/itl2.70089","archive":["Portico"],"relation":{"has-review":[{"id-type":"doi","id":"10.1002\/ITL2.70089\/v2\/response1","asserted-by":"object"},{"id-type":"doi","id":"10.1002\/ITL2.70089\/v2\/decision1","asserted-by":"object"},{"id-type":"doi","id":"10.1002\/ITL2.70089\/v1\/review1","asserted-by":"object"},{"id-type":"doi","id":"10.1002\/ITL2.70089\/v1\/review2","asse
rted-by":"object"},{"id-type":"doi","id":"10.1002\/ITL2.70089\/v1\/decision1","asserted-by":"object"},{"id-type":"doi","id":"10.1002\/ITL2.70089\/v2\/review1","asserted-by":"object"},{"id-type":"doi","id":"10.1002\/ITL2.70089\/v2\/review2","asserted-by":"object"}]},"ISSN":["2476-1508","2476-1508"],"issn-type":[{"value":"2476-1508","type":"print"},{"value":"2476-1508","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,7,26]]},"assertion":[{"value":"2025-04-18","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-07-02","order":2,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2025-07-26","order":3,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}],"article-number":"e70089"}}