{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,2,21]],"date-time":"2025-02-21T17:19:04Z","timestamp":1740158344576,"version":"3.37.3"},"reference-count":28,"publisher":"Wiley","license":[{"start":{"date-parts":[[2022,9,14]],"date-time":"2022-09-14T00:00:00Z","timestamp":1663113600000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61973283"],"award-info":[{"award-number":["61973283"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Journal of Electrical and Computer Engineering"],"published-print":{"date-parts":[[2022,9,14]]},"abstract":"<jats:p>Saliency detection is a technique for automatically extracting regions of interest from the background and has been widely used in the computer vision field. This study proposes a simple and effective saliency detection method combining color contrast and hash fingerprint. In our solution, the input image is segmented into nonoverlapping superpixels, so as to perform the saliency detection at the region level to reduce computational complexity. A background optimization selection is used to construct an accurate background template. Based on this, a saliency map that highlights the whole salient region is obtained by estimating color contrast. Besides, another saliency map that enhances the salient region while restraining the background is also generated through hash fingerprint matching. Ultimately, the final saliency map can be obtained by fusing the two saliency maps. Comparing the performance with other methods, the proposed algorithm works better even in the presence of complex background or very large salient regions.<\/jats:p>","DOI":"10.1155\/2022\/9476111","type":"journal-article","created":{"date-parts":[[2022,9,15]],"date-time":"2022-09-15T01:20:49Z","timestamp":1663204849000},"page":"1-10","source":"Crossref","is-referenced-by-count":1,"title":["Saliency Detection via Fusing Color Contrast and Hash Fingerprint"],"prefix":"10.1155","volume":"2022","author":[{"given":"Yin","family":"Lv","sequence":"first","affiliation":[{"name":"School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan 430074, China"}]},{"given":"Xuanrui","family":"Zhang","sequence":"additional","affiliation":[{"name":"School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan 430074, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0954-2856","authenticated-orcid":true,"given":"Yong","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Mechanical Engineering and Electronic Information, China University of Geosciences, Wuhan 430074, China"}]}],"member":"311","reference":[{"key":"1","doi-asserted-by":"publisher","DOI":"10.1109\/tcsvt.2013.2280096"},{"article-title":"Is bottom-up attention useful for object recognition?","author":"U. Rutishauser","key":"2","doi-asserted-by":"crossref","DOI":"10.1109\/CVPR.2004.1315142"},{"key":"3","doi-asserted-by":"publisher","DOI":"10.1109\/tip.2009.2030969"},{"key":"4","doi-asserted-by":"publisher","DOI":"10.1109\/mmul.2013.15"},{"first-page":"2341","article-title":"Robust salient object detection via fusing foreground and background priors","author":"K. Huang","key":"5"},{"first-page":"965","article-title":"A two-stage approach to saliency detection in images","author":"Z. S. Wang","key":"6"},{"first-page":"1","article-title":"Learning to detect a salient object","author":"T. Liu","key":"7"},{"first-page":"105","article-title":"Automatic salient object extraction with contextual cue","author":"L. Wang","key":"8"},{"first-page":"2790","article-title":"Learning optimal seeds for diffusion-based salient object detection","author":"S. Lu","key":"9"},{"first-page":"379","article-title":"Object detection via region-based fully convolutional networks","author":"J. F. Dai","key":"10"},{"first-page":"2083","article-title":"Salient object detection: a discriminative regional feature integration approach","author":"H. Jiang","key":"11"},{"key":"12","doi-asserted-by":"publisher","DOI":"10.1109\/34.730558"},{"first-page":"1597","article-title":"Frequency-tuned salient region detection","author":"R. Achanta","key":"13"},{"first-page":"815","article-title":"Visual attention detection in video sequences using spatiotemporal cues","author":"Y. Zhai","key":"14"},{"first-page":"2653","article-title":"Saliency detection using maximum symmetric surround","author":"R. Achanta","key":"15"},{"first-page":"29","article-title":"Geodesic saliency using background priors","author":"Y. C. Wei","key":"16"},{"first-page":"3166","article-title":"Saliency detection via graph-based manifold ranking","author":"C. Yang","key":"17"},{"first-page":"2814","article-title":"Saliency optimization from robust background detection","author":"W. J. Zhu","key":"18"},{"key":"19","doi-asserted-by":"publisher","DOI":"10.1007\/s11042-020-09073-4"},{"key":"20","doi-asserted-by":"publisher","DOI":"10.1109\/tpami.2012.120"},{"key":"21","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2014.10.056"},{"key":"22","doi-asserted-by":"publisher","DOI":"10.1109\/tpami.2011.272"},{"first-page":"733","article-title":"Saliency filters: contrast based filtering for salient region detection","author":"F. Perazzi","key":"23"},{"first-page":"3796","article-title":"Learning to detect salient objects with image-level supervision","author":"L. J. Wang","key":"24"},{"first-page":"1155","article-title":"Hierarchical saliency detection","author":"Q. Yan","key":"25"},{"key":"26","doi-asserted-by":"publisher","DOI":"10.1109\/tpami.2014.2345401"},{"first-page":"49","article-title":"Design and perceptual validation of performance measures for salient object segmentation","author":"V. Movahedi","key":"27"},{"first-page":"5455","article-title":"Visual saliency based on multiscale deep features","author":"G. B. Li","key":"28"}],"container-title":["Journal of Electrical and Computer Engineering"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/downloads.hindawi.com\/journals\/jece\/2022\/9476111.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/jece\/2022\/9476111.xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/downloads.hindawi.com\/journals\/jece\/2022\/9476111.pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,9,15]],"date-time":"2022-09-15T01:20:51Z","timestamp":1663204851000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.hindawi.com\/journals\/jece\/2022\/9476111\/"}},"subtitle":[],"editor":[{"given":"B. Rajanarayan","family":"Prusty","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[2022,9,14]]},"references-count":28,"alternative-id":["9476111","9476111"],"URL":"https:\/\/doi.org\/10.1155\/2022\/9476111","relation":{},"ISSN":["2090-0155","2090-0147"],"issn-type":[{"type":"electronic","value":"2090-0155"},{"type":"print","value":"2090-0147"}],"subject":[],"published":{"date-parts":[[2022,9,14]]}}}