{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,7]],"date-time":"2026-04-07T16:20:46Z","timestamp":1775578846936,"version":"3.50.1"},"reference-count":41,"publisher":"MDPI AG","issue":"19","license":[{"start":{"date-parts":[[2021,9,26]],"date-time":"2021-09-26T00:00:00Z","timestamp":1632614400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Video coding technology makes the required storage and transmission bandwidth of video services decrease by reducing the bitrate of the video stream. However, the compressed video signals may involve perceivable information loss, especially when the video is overcompressed. In such cases, the viewers can observe visually annoying artifacts, namely, Perceivable Encoding Artifacts (PEAs), which degrade their perceived video quality. To monitor and measure these PEAs (including blurring, blocking, ringing and color bleeding), we propose an objective video quality metric named Saliency-Aware Artifact Measurement (SAAM) without any reference information. The SAAM metric first introduces video saliency detection to extract interested regions and further splits these regions into a finite number of image patches. For each image patch, the data-driven model is utilized to evaluate intensities of PEAs. Finally, these intensities are fused into an overall metric using Support Vector Regression (SVR). In experiment section, we compared the SAAM metric with other popular video quality metrics on four publicly available databases: LIVE, CSIQ, IVP and FERIT-RTRK. The results reveal the promising quality prediction performance of the SAAM metric, which is superior to most of the popular compressed video quality evaluation models.<\/jats:p>","DOI":"10.3390\/s21196429","type":"journal-article","created":{"date-parts":[[2021,9,27]],"date-time":"2021-09-27T22:16:38Z","timestamp":1632780998000},"page":"6429","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Compressed Video Quality Index Based on Saliency-Aware Artifact Detection"],"prefix":"10.3390","volume":"21","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5900-4175","authenticated-orcid":false,"given":"Liqun","family":"Lin","sequence":"first","affiliation":[{"name":"Fujian Key Lab for Intelligent Processing and Wireless Transmission of Media Information, College of Physics and Information Engineering, Fuzhou University, Fuzhou 350002, China"}]},{"given":"Jing","family":"Yang","sequence":"additional","affiliation":[{"name":"Fujian Key Lab for Intelligent Processing and Wireless Transmission of Media Information, College of Physics and Information Engineering, Fuzhou University, Fuzhou 350002, China"}]},{"given":"Zheng","family":"Wang","sequence":"additional","affiliation":[{"name":"Fujian Key Lab for Intelligent Processing and Wireless Transmission of Media Information, College of Physics and Information Engineering, Fuzhou University, Fuzhou 350002, China"}]},{"given":"Liping","family":"Zhou","sequence":"additional","affiliation":[{"name":"Fujian Key Lab for Intelligent Processing and Wireless Transmission of Media Information, College of Physics and Information Engineering, Fuzhou University, Fuzhou 350002, China"}]},{"given":"Weiling","family":"Chen","sequence":"additional","affiliation":[{"name":"Fujian Key 
Lab for Intelligent Processing and Wireless Transmission of Media Information, College of Physics and Information Engineering, Fuzhou University, Fuzhou 350002, China"}]},{"given":"Yiwen","family":"Xu","sequence":"additional","affiliation":[{"name":"Fujian Key Lab for Intelligent Processing and Wireless Transmission of Media Information, College of Physics and Information Engineering, Fuzhou University, Fuzhou 350002, China"}]}],"member":"1968","published-online":{"date-parts":[[2021,9,26]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"3898","DOI":"10.1109\/TCSVT.2020.2980571","article-title":"PEA265: Perceptual assessment of video compression artifacts","volume":"28","author":"Lin","year":"2020","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_2","unstructured":"International Telecommunication Union (2012). Methodology for the Subjective Assessment of the Quality of Television Pictures, International Telecommunication Union. Recommendation ITU-R BT.500-13."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"684","DOI":"10.1109\/TCSVT.2012.2214933","article-title":"Video quality assessment by reduced reference spatiotemporal entropic differencing","volume":"23","author":"Soundararajan","year":"2013","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1333","DOI":"10.1109\/LSP.2017.2726542","article-title":"SpEED-QA: Spatial efficient entropic differencing for image and video quality","volume":"24","author":"Bampis","year":"2017","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"1352","DOI":"10.1109\/TIP.2014.2299154","article-title":"Blind prediction of natural video quality","volume":"23","author":"Saad","year":"2014","journal-title":"IEEE Trans. Image Process."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"1044","DOI":"10.1109\/TCSVT.2015.2430711","article-title":"No-reference video quality assessment with 3D shearlet transform and convolutional neural networks","volume":"26","author":"Li","year":"2016","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"532","DOI":"10.1109\/JETCAS.2016.2598756","article-title":"Subjective and Objective Quality Assessment of Compressed Screen Content Images","volume":"6","author":"Wang","year":"2016","journal-title":"IEEE J. Emerg. Sel. Top. Circuits Syst."},{"key":"ref_9","unstructured":"Ye, P., Kumar, J., Kang, L., and Doermann, D. (2012, January 16\u201321). Unsupervised feature learning framework for no-reference image quality assessment. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI, USA."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Xu, J., Ye, P., Liu, Y., and Doermann, D. (2014, January 27\u201330). No-reference video quality assessment via feature learning. 
Proceedings of the IEEE Conference on Image Processing (ICIP), Paris, France.","DOI":"10.1109\/ICIP.2014.7025098"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"289","DOI":"10.1109\/TIP.2015.2502725","article-title":"A completely blind video integrity oracle","volume":"25","author":"Mittal","year":"2016","journal-title":"IEEE Trans. Image Process."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Zhu, Y., Wang, Y., and Shuai, Y. (2017, January 17\u201320). Blind video quality assessment based on spatio-temporal internal generative mechanism. Proceedings of the IEEE Conference on Image Processing (ICIP), Beijing, China.","DOI":"10.1109\/ICIP.2017.8296292"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"5612","DOI":"10.1109\/TIP.2020.2984879","article-title":"No-reference video quality assessment using natural spatiotemporal scene statistics","volume":"29","author":"Reddy","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"453","DOI":"10.1007\/s11760-013-0448-z","article-title":"Detection and measurement of the blocking artifact in decoded video frames","volume":"7","author":"Abate","year":"2013","journal-title":"Signal Image Video Process."},{"key":"ref_15","first-page":"408","article-title":"A no reference quality metric to measure the blocking artefacts for video sequences","volume":"64","author":"Amor","year":"2016","journal-title":"J. Photogr. Sci."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"134","DOI":"10.1109\/TMM.2014.2368272","article-title":"A novel no-reference video quality metric for evaluating temporal jerkiness due to frame freezing","volume":"17","author":"Xue","year":"2014","journal-title":"IEEE Trans. Multimedia"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"533","DOI":"10.1109\/TCSVT.2014.2363737","article-title":"No-reference video quality assessment based on artifact measurement and statistical analysis","volume":"25","author":"Zhu","year":"2015","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_18","unstructured":"Men, H., Lin, H., and Saupe, D. (June, January 31). Empirical evaluation of no-reference VQA methods on a natural video quality database. Proceedings of the International Conference on Quality of Multimedia Experience (QoMEX), Erfurt, Germany."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"345","DOI":"10.1016\/j.image.2019.07.015","article-title":"No-reference artifacts measurements based video quality metric","volume":"78","author":"Vranje","year":"2019","journal-title":"Signal Process Image Commun."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"205","DOI":"10.1007\/s11760-019-01543-z","article-title":"An improved model for no-reference image quality assessment and a no-reference video quality assessment model based on frame analysis","volume":"14","author":"Rohil","year":"2020","journal-title":"Signal Image Video Process."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"220","DOI":"10.1109\/TPAMI.2019.2924417","article-title":"Revisiting video saliency prediction in the deep learning era","volume":"43","author":"Wang","year":"2019","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_22","unstructured":"Kai, Z., Zhao, T., Rehman, A., and Zhou, W. (2014, January 3\u20136). Characterizing perceptual artifacts in compressed video streams. 
Proceedings of the Human Vision and Electronic Imaging XIX, San Francisco, CA, USA."},{"key":"ref_23","unstructured":"(2021, September 20). LIVE Video Quality Database. Available online: http:\/\/live.ece.utexas.edu\/research\/quality\/live_video.html."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"1427","DOI":"10.1109\/TIP.2010.2042111","article-title":"Study of subjective and objective quality assessment of video","volume":"19","author":"Seshadrinathan","year":"2010","journal-title":"IEEE Trans. Image Process."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Seshadrinathan, K., Soundararajan, R., Bovik, A.C., and Cormack, L. (2010, January 18\u201321). A Subjective Study to Evaluate Video Quality Assessment Algorithms. Proceedings of the Human Vision and Electronic Imaging XV, San Jose, CA, USA.","DOI":"10.1117\/12.845382"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"918","DOI":"10.1109\/LSP.2014.2320743","article-title":"Learning structural regularity for evaluating blocking artifacts in JPEG images","volume":"21","author":"Li","year":"2014","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_27","unstructured":"(2021, September 20). Ultra Video Group. Available online: http:\/\/ultravideo.cs.tut.fi\/#main."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"2738","DOI":"10.1109\/TMM.2019.2908377","article-title":"Quality Assessment for Video With Degradation Along Salient Trajectories","volume":"21","author":"Wu","year":"2019","journal-title":"IEEE Trans. Multimedia"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"154","DOI":"10.1109\/JSTSP.2016.2608329","article-title":"A Quality-of-Experience Index for Streaming Video","volume":"11","author":"Duanmu","year":"2017","journal-title":"IEEE J. Sel. Top. Signal Process."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"1844","DOI":"10.1109\/TCSVT.2016.2556499","article-title":"Objective Video Quality Assessment Based on Perceptually Weighted Mean Squared Error","volume":"27","author":"Hu","year":"2017","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"341","DOI":"10.1109\/TBC.2018.2789583","article-title":"No reference quality assessment of stereo video based on saliency and sparsity","volume":"64","author":"Yang","year":"2018","journal-title":"IEEE Trans. Broadcast."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Feng, W., Li, X., Gao, G., Chen, X., and Liu, Q. (2020). Multi-Scale Global Contrast CNN for Salient Object Detection. Sensors, 20.","DOI":"10.3390\/s20092656"},{"key":"ref_33","unstructured":"Shi, X., Chen, Z., Wang, H., Yeung, D., Wong, W., and Woo, W. (2015, January 7\u201312). Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Proceedings of the 28th International Conference on Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_34","unstructured":"Xu, K., Ba, J., Kiros, R., Cho, K., Courville, A., Salakhutdinov, R., Zemel, R., and Bengio, Y. (2015, January 7\u20139). Show, attend and tell: Neural image caption generation with visual attention. Proceedings of the International Conference on Machine Learning, Lille, France."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1117\/1.JEI.23.1.013016","article-title":"ViS3: An algorithm for video quality assessment via analysis of spatial and spatiotemporal slices","volume":"23","author":"Vu","year":"2014","journal-title":"J. Electron. Imaging"},{"key":"ref_37","unstructured":"Zhang, F., Li, S., Ma, L., Wong, Y., and Ngan, K. (2021, September 20). IVP Subjective Quality Video Database. Available online: http:\/\/ivp.ee.cuhk.edu.hk\/research\/database\/subjective."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Baj\u010dinovci, V., Vranje\u0161, M., Babi\u0107, D., and Kova\u010devi\u0107, B. (2017, January 18\u201320). Subjective and objective quality assessment of MPEG-2, H.264 and H.265 videos. Proceedings of the International Symposium ELMAR, Zadar, Croatia.","DOI":"10.23919\/ELMAR.2017.8124438"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"4232","DOI":"10.1109\/TIP.2018.2837341","article-title":"SPSIM: A Superpixel-Based Similarity Index for Full-Reference Image Quality Assessment","volume":"27","author":"Sun","year":"2018","journal-title":"IEEE Trans. Image Process."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"4695","DOI":"10.1109\/TIP.2012.2214050","article-title":"No-reference image quality assessment in the spatial domain","volume":"21","author":"Mittal","year":"2012","journal-title":"IEEE Trans. Image Process."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"209","DOI":"10.1109\/LSP.2012.2227726","article-title":"Making a completely blind image quality analyzer","volume":"20","author":"Mittal","year":"2013","journal-title":"IEEE Signal Process. Lett."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/19\/6429\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T07:05:28Z","timestamp":1760166328000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/19\/6429"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,9,26]]},"references-count":41,"journal-issue":{"issue":"19","published-online":{"date-parts":[[2021,10]]}},"alternative-id":["s21196429"],"URL":"https:\/\/doi.org\/10.3390\/s21196429","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,9,26]]}}}
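
Note: the record above can be re-fetched live from the public Crossref REST API (endpoint https://api.crossref.org/works/{DOI}), which returns the same {"status": ..., "message": {...}} envelope. The following is a minimal sketch, not part of the deposited record; it assumes only Python with the requests package, and the field names it reads (message, title, DOI, publisher, reference) are exactly those present above.

import requests

DOI = "10.3390/s21196429"  # DOI of the work described in this record

# Fetch the Crossref metadata; the JSON body uses the same
# {"status": ..., "message-type": "work", "message": {...}} envelope as above.
resp = requests.get(f"https://api.crossref.org/works/{DOI}", timeout=30)
resp.raise_for_status()
work = resp.json()["message"]

print(work["title"][0])                # "Compressed Video Quality Index Based on ..."
print(work["DOI"], work["publisher"])  # 10.3390/s21196429 MDPI AG
print(len(work.get("reference", [])))  # 41 cited references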