{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2024,4,2]],"date-time":"2024-04-02T09:21:18Z","timestamp":1712049678427},"reference-count":42,"publisher":"National Library of Serbia","issue":"3","license":[{"start":{"date-parts":[[2022,1,1]],"date-time":"2022-01-01T00:00:00Z","timestamp":1640995200000},"content-version":"unspecified","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["ComSIS","COMPUT SCI INF SYST","COMPUT SCI INFORM SY","COMPUTER SCI INFORM","COMSIS J"],"published-print":{"date-parts":[[2022]]},"abstract":"<jats:p>Image saliency detection is an important research topic in the field of computer vision. With the traditional saliency detection models, the texture details are not obvious and the edge contour is not complete. The accuracy and recall rate of object detection are low, which are mostly based on the manual features and prior information. With the rise of deep convolutional neural networks, saliency detection has been rapidly developed. However, the existing saliency methods still have some common shortcomings, and it is difficult to uniformly highlight the clear boundary and internal region of the whole object in complex images, mainly because of the lack of sufficient and rich features. In this paper, a new frog leaping algorithm-oriented fully convolutional neural network is proposed for dance motion object saliency detection. The VGG (Visual Geometry Group) model is improved. The final full connection layer is removed, and the jump connection layer is used for the saliency prediction, which can effectively combine the multi-scale information from different convolution layers in the convolutional neural network. Meanwhile, an improved frog leaping algorithm is used to optimize the selection of initial weights during network initialization. 
During network iteration, the forward-propagation loss of the convolutional neural network is calculated, and anomalous weights are corrected using the improved frog leaping algorithm. When the network satisfies the termination conditions, the final weights are refined by one more frog leap to optimize them further. In addition, the new network can combine high-level semantic information and low-level detail information in a data-driven framework. To effectively preserve the unity of the object boundary and inner region, a fully connected conditional random field (CRF) model is used to adjust the obtained saliency feature map. In this paper, the precision-recall (PR) curve, F-measure, maximum F-measure, weighted F-measure and mean absolute error (MAE) are evaluated on six widely used public data sets. Compared with the most advanced and representative methods, the results show that the proposed method achieves better performance. 
The presented method demonstrates strong robustness for image saliency detection across various scenes, and can make the boundary and inner region of the salient object more uniform and the detection results more accurate.<\/jats:p>","DOI":"10.2298\/csis220320035l","type":"journal-article","created":{"date-parts":[[2022,9,19]],"date-time":"2022-09-19T14:37:16Z","timestamp":1663598236000},"page":"1349-1370","source":"Crossref","is-referenced-by-count":1,"title":["A new frog leaping algorithm-oriented fully convolutional neural network for dance motion object saliency detection"],"prefix":"10.2298","volume":"19","author":[{"given":"Yin","family":"Lyu","sequence":"first","affiliation":[{"name":"Music College, Huaiyin Normal University Huaian City, China"}]},{"given":"Chen","family":"Zhang","sequence":"additional","affiliation":[{"name":"College of Sports Art, Harbin Sport University Harbin City, China"}]}],"member":"1078","reference":[{"key":"ref1","doi-asserted-by":"crossref","unstructured":"Song H, Deng B, Pound M, et al. \u201dA fusion spatial attention approach for few-shot learning,\u201d Information Fusion, vol. 81, pp. 187-202, 2022.","DOI":"10.1016\/j.inffus.2021.11.019"},{"key":"ref2","doi-asserted-by":"crossref","unstructured":"C. Guo and L. Zhang. \u201dA Novel Multiresolution Spatiotemporal Saliency Detection Model and Its Applications in Image and Video Compression,\u201d IEEE Transactions on Image Processing, vol. 19, no. 1, pp. 185-198, Jan. 2010, doi: 10.1109\/TIP.2009.2030969.","DOI":"10.1109\/TIP.2009.2030969"},{"key":"ref3","doi-asserted-by":"crossref","unstructured":"Radoji\u010di\u0107, D., Radoji\u010di\u0107, N., Kredatus, S. \u201dA multicriteria optimization approach for the stock market feature selection,\u201d Computer Science and Information Systems, Vol. 18, No. 3, pp. 749-769, 2021. 
https:\/\/doi.org\/10.2298\/CSIS200326044R","DOI":"10.2298\/CSIS200326044R"},{"key":"ref4","doi-asserted-by":"crossref","unstructured":"Chen Y, Yang X, Zhong B, et al. \u201dCNNTracker: Online discriminative object tracking via deep convolutional neural network,\u201d Applied Soft Computing, vol. 38, pp. 1088-1098, 2016.","DOI":"10.1016\/j.asoc.2015.06.048"},{"key":"ref5","doi-asserted-by":"crossref","unstructured":"Guo, Z., Han, D., Li, K. \u201dDouble-Layer Affective Visual Question Answering Network,\u201d Computer Science and Information Systems, Vol. 18, No. 1, pp. 155-168, 2021. https:\/\/doi.org\/10.2298\/CSIS200515038G","DOI":"10.2298\/CSIS200515038G"},{"key":"ref6","doi-asserted-by":"crossref","unstructured":"Li, H., Han, D. \u201dMultimodal Encoders and Decoders with Gate Attention for Visual Question Answering,\u201d Computer Science and Information Systems, Vol. 18, No. 3, pp. 1023-1040, 2021. https:\/\/doi.org\/10.2298\/CSIS201120032L","DOI":"10.2298\/CSIS201120032L"},{"key":"ref7","doi-asserted-by":"crossref","unstructured":"N. Tong, H. Lu, L. Zhang and X. Ruan. \u201dSaliency Detection with Multi-Scale Superpixels,\u201d IEEE Signal Processing Letters, vol. 21, no. 9, pp. 1035-1039, 2014. doi: 10.1109\/LSP.2014.2323407.","DOI":"10.1109\/LSP.2014.2323407"},{"key":"ref8","unstructured":"Gao S. \u201dA Two-channel Attention Mechanism-based MobileNetV2 And Bidirectional Long Short Memory Network For Multi-modal Dimension Dance Emotion Recognition,\u201d Journal of Applied Science and Engineering, 26(4): 455-464, 2022."},{"key":"ref9","doi-asserted-by":"crossref","unstructured":"Lamsiyah S, Mahdaouy A E, Ouatik S, et al. \u201dUnsupervised extractive multi-document summarization method based on transfer learning from BERT multi-task fine-tuning,\u201d Journal of Information Science, 2021:016555152199061.","DOI":"10.1177\/0165551521990616"},{"key":"ref10","doi-asserted-by":"crossref","unstructured":"L. Jing, Y. Chen and Y. 
Tian, \u201dCoarse-to-Fine Semantic Segmentation From Image-Level Labels,\u201d IEEE Transactions on Image Processing, vol. 29, pp. 225-236, 2020. doi: 10.1109\/TIP.2019.2926748.","DOI":"10.1109\/TIP.2019.2926748"},{"key":"ref11","doi-asserted-by":"crossref","unstructured":"Wang G, Wang Z, Jiang K, et al. \u201dSilicone Mask Face Anti-spoofing Detection based on Visual Saliency and Facial Motion,\u201d Neurocomputing, vol. 458, pp. 416-427, 2021.","DOI":"10.1016\/j.neucom.2021.06.033"},{"key":"ref12","doi-asserted-by":"crossref","unstructured":"Zheng X, Chen W. \u201dAn Attention-based Bi-LSTM Method for Visual Object Classification via EEG,\u201d Biomedical Signal Processing and Control, vol. 63:102174, 2021.","DOI":"10.1016\/j.bspc.2020.102174"},{"key":"ref13","doi-asserted-by":"crossref","unstructured":"X. Shen and Y. Wu. \u201dA unified approach to salient object detection via low rank matrix recovery,\u201d 2012 IEEE Conference on Computer Vision and Pattern Recognition, pp. 853-860, 2012. doi: 10.1109\/CVPR.2012.6247758.","DOI":"10.1109\/CVPR.2012.6247758"},{"key":"ref14","doi-asserted-by":"crossref","unstructured":"C. Yang, L. Zhang and H. Lu, \u201dGraph-Regularized Saliency Detection With Convex-Hull-Based Center Prior,\u201d IEEE Signal Processing Letters, vol. 20, no. 7, pp. 637-640, July 2013. doi: 10.1109\/LSP.2013.2260737.","DOI":"10.1109\/LSP.2013.2260737"},{"key":"ref15","doi-asserted-by":"crossref","unstructured":"Y. Xie, H. Lu and M. Yang, \u201dBayesian Saliency via Low and Mid Level Cues,\u201d IEEE Transactions on Image Processing, vol. 22, no. 5, pp. 1689-1698, May 2013. doi: 10.1109\/TIP.2012.2216276.","DOI":"10.1109\/TIP.2012.2216276"},{"key":"ref16","doi-asserted-by":"crossref","unstructured":"L. Zhang, Z. Gu and H. Li, \u201dSDSP: A novel saliency detection method by combining simple priors,\u201d 2013 IEEE International Conference on Image Processing, pp. 171-175, 2013. 
doi: 10.1109\/ICIP.2013.6738036.","DOI":"10.1109\/ICIP.2013.6738036"},{"key":"ref17","doi-asserted-by":"crossref","unstructured":"Li L, Zhou F, Zheng Y, et al. \u201dSaliency detection based on foreground appearance and background-prior,\u201d Neurocomputing, vol. 301(AUG.2), pp. 46-61, 2018.","DOI":"10.1016\/j.neucom.2018.03.049"},{"key":"ref18","doi-asserted-by":"crossref","unstructured":"Y. Piao, X. Li, M. Zhang, J. Yu and H. Lu, \u201dSaliency Detection via Depth-Induced Cellular Automata on Light Field,\u201d IEEE Transactions on Image Processing, vol. 29, pp. 1879-1889, 2020, doi: 10.1109\/TIP.2019.2942434.","DOI":"10.1109\/TIP.2019.2942434"},{"key":"ref19","doi-asserted-by":"crossref","unstructured":"L. Zhou, Z. Yang, Q. Yuan, Z. Zhou and D. Hu, \u201dSalient Region Detection via Integrating Diffusion-Based Compactness and Local Contrast,\u201d IEEE Transactions on Image Processing, vol. 24, no. 11, pp. 3308-3320, Nov. 2015, doi: 10.1109\/TIP.2015.2438546.","DOI":"10.1109\/TIP.2015.2438546"},{"key":"ref20","doi-asserted-by":"crossref","unstructured":"C. Tang, P. Wang, C. Zhang and W. Li, \u201dSalient Object Detection via Weighted Low Rank Matrix Recovery,\u201d IEEE Signal Processing Letters, vol. 24, no. 4, pp. 490-494, April 2017, doi: 10.1109\/LSP.2016.2620162.","DOI":"10.1109\/LSP.2016.2620162"},{"key":"ref21","doi-asserted-by":"crossref","unstructured":"L. Wang, H. Lu, X. Ruan and M. Yang, \u201dDeep networks for saliency detection via local estimation and global search,\u201d 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 3183-3192, doi: 10.1109\/CVPR.2015.7298938.","DOI":"10.1109\/CVPR.2015.7298938"},{"key":"ref22","doi-asserted-by":"crossref","unstructured":"Guanbin Li and Y. Yu, \u201dVisual saliency based on multiscale deep features,\u201d 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015, pp. 
5455-5463, doi: 10.1109\/CVPR.2015.7299184.","DOI":"10.1109\/CVPR.2015.7299184"},{"key":"ref23","doi-asserted-by":"crossref","unstructured":"Chen H., Li Y., Su D. \u201dRGB-D Saliency Detection by Multi-stream Late Fusion Network,\u201d ICVS 2017. Lecture Notes in Computer Science, vol. 10528, 2017. Springer, Cham.","DOI":"10.1007\/978-3-319-68345-4_41"},{"key":"ref24","doi-asserted-by":"crossref","unstructured":"S. Wang, R. Clark, H. Wen and N. Trigoni, \u201dDeepVO: Towards end-to-end visual odometry with deep Recurrent Convolutional Neural Networks,\u201d 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 2043-2050, doi: 10.1109\/ICRA.2017.7989236.","DOI":"10.1109\/ICRA.2017.7989236"},{"key":"ref25","doi-asserted-by":"crossref","unstructured":"N. Liu and J. Han, \u201dDHSNet: Deep Hierarchical Saliency Network for Salient Object Detection,\u201d 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 678-686, doi: 10.1109\/CVPR.2016.80.","DOI":"10.1109\/CVPR.2016.80"},{"key":"ref26","doi-asserted-by":"crossref","unstructured":"X. Li et al., \u201dDeepSaliency: Multi-Task Deep Neural Network Model for Salient Object Detection,\u201d IEEE Transactions on Image Processing, vol. 25, no. 8, pp. 3919-3930, Aug. 2016, doi: 10.1109\/TIP.2016.2579306.","DOI":"10.1109\/TIP.2016.2579306"},{"key":"ref27","unstructured":"Zou L. \u201dAn Intelligent Improvement Method Of Classroom Cognitive Efficiency Based On Multidimensional Interactive Devices,\u201d Journal of Applied Science and Engineering, 2022, 26(3): 445-454."},{"key":"ref28","doi-asserted-by":"crossref","unstructured":"I. Batatia, \u201dA Deep Learning Method with CRF for Instance Segmentation of Metal-Organic Frameworks in Scanning Electron Microscopy Images,\u201d 2020 28th European Signal Processing Conference (EUSIPCO), 2021, pp. 
625-629, doi: 10.23919\/Eusipco47968.2020.9287366.","DOI":"10.23919\/Eusipco47968.2020.9287366"},{"key":"ref29","unstructured":"Zhang Q, Zuo B C, Shi Y J and Dai M. \u201dA multi-scale convolutional neural network for salient object detection,\u201d Journal of Image and Graphics, vol. 25, no. 06, pp. 116-129, 2020. doi: 10.11834\/jig.190395."},{"key":"ref30","doi-asserted-by":"crossref","unstructured":"S. Yin and H. Li. \u201dHot Region Selection Based on Selective Search and Modified Fuzzy C-Means in Remote Sensing Images,\u201d IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 13, pp. 5862-5871, 2020, doi: 10.1109\/JSTARS.2020.3025582.","DOI":"10.1109\/JSTARS.2020.3025582"},{"key":"ref31","doi-asserted-by":"crossref","unstructured":"L. Wang, L. Wang, H. Lu, P. Zhang and X. Ruan, \u201dSalient Object Detection with Recurrent Fully Convolutional Networks,\u201d IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 41, no. 7, pp. 1734-1746, 1 July 2019, doi: 10.1109\/TPAMI.2018.2846598.","DOI":"10.1109\/TPAMI.2018.2846598"},{"key":"ref32","doi-asserted-by":"crossref","unstructured":"X. Zhang, T. Wang, J. Qi, H. Lu and G. Wang, \u201dProgressive Attention Guided Recurrent Network for Salient Object Detection,\u201d 2018 IEEE\/CVF Conference on Computer Vision and Pattern Recognition, 2018, pp. 714-722, doi: 10.1109\/CVPR.2018.00081.","DOI":"10.1109\/CVPR.2018.00081"},{"key":"ref33","doi-asserted-by":"crossref","unstructured":"P. Zhang, D. Wang, H. Lu, H. Wang and B. Yin, \u201dLearning Uncertain Convolutional Features for Accurate Saliency Detection,\u201d 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 212-221, doi: 10.1109\/ICCV.2017.32.","DOI":"10.1109\/ICCV.2017.32"},{"key":"ref34","doi-asserted-by":"crossref","unstructured":"D. Zhang, J. Han and Y. 
Zhang, \u201dSupervision by Fusion: Towards Unsupervised Learning of Deep Salient Object Detector,\u201d 2017 IEEE International Conference on Computer Vision (ICCV), 2017, pp. 4068-4076, doi: 10.1109\/ICCV.2017.436.","DOI":"10.1109\/ICCV.2017.436"},{"key":"ref35","doi-asserted-by":"crossref","unstructured":"G. Li and Y. Yu, \u201dDeep Contrast Learning for Salient Object Detection,\u201d 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 478-487, doi: 10.1109\/CVPR.2016.58.","DOI":"10.1109\/CVPR.2016.58"},{"key":"ref36","doi-asserted-by":"crossref","unstructured":"L. Huang, K. Song, J. Wang, M. Niu and Y. Yan, \u201dMulti-graph Fusion and Learning for RGBT Image Saliency Detection,\u201d IEEE Transactions on Circuits and Systems for Video Technology, doi: 10.1109\/TCSVT.2021.3069812.","DOI":"10.1109\/TCSVT.2021.3069812"},{"key":"ref37","doi-asserted-by":"crossref","unstructured":"G. Lee, Y. Tai and J. Kim, \u201dDeep Saliency with Encoded Low Level Distance Map and High Level Features,\u201d 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 660-668, doi: 10.1109\/CVPR.2016.78.","DOI":"10.1109\/CVPR.2016.78"},{"key":"ref38","doi-asserted-by":"crossref","unstructured":"Su J, Yi H, Ling L, et al. \u201dA surface roughness grade recognition model for milled workpieces based on deep transfer learning,\u201d Measurement Science and Technology, vol. 33, no. 4, 045014, 2022 (11pp).","DOI":"10.1088\/1361-6501\/ac3f86"},{"key":"ref39","doi-asserted-by":"crossref","unstructured":"L. Zhang, J. Sun, T. Wang, Y. Min and H. Lu, \u201dVisual Saliency Detection via Kernelized Subspace Ranking With Active Learning,\u201d IEEE Transactions on Image Processing, vol. 29, pp. 2258-2270, 2020, doi: 10.1109\/TIP.2019.2945679.","DOI":"10.1109\/TIP.2019.2945679"},{"key":"ref40","doi-asserted-by":"crossref","unstructured":"Wang J, Jiang H, Yuan Z, et al. 
\u201dSalient Object Detection: A Discriminative Regional Feature Integration Approach,\u201d International Journal of Computer Vision, vol. 123, pp. 251-268, 2017. https:\/\/doi.org\/10.1007\/s11263-016-0977-3","DOI":"10.1007\/s11263-016-0977-3"},{"key":"ref41","doi-asserted-by":"crossref","unstructured":"J. Li, Z. Wang and Z. Pan, \u201dDouble Structured Nuclear Norm-Based Matrix Decomposition for Saliency Detection,\u201d IEEE Access, vol. 8, pp. 159816-159827, 2020. doi: 10.1109\/ACCESS.2020.3020966.","DOI":"10.1109\/ACCESS.2020.3020966"},{"key":"ref42","doi-asserted-by":"crossref","unstructured":"Y. Yuan, C. Li, J. Kim, W. Cai and D. D. Feng, \u201dReversion Correction and Regularized Random Walk Ranking for Saliency Detection,\u201d IEEE Transactions on Image Processing, vol. 27, no. 3, pp. 1311-1322, March 2018, doi: 10.1109\/TIP.2017.2762422.","DOI":"10.1109\/TIP.2017.2762422"}],"container-title":["Computer Science and Information Systems"],"original-title":[],"language":"en","deposited":{"date-parts":[[2023,8,11]],"date-time":"2023-08-11T08:22:48Z","timestamp":1691742168000},"score":1,"resource":{"primary":{"URL":"https:\/\/doiserbia.nb.rs\/Article.aspx?ID=1820-02142200035L"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022]]},"references-count":42,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2022]]}},"URL":"https:\/\/doi.org\/10.2298\/csis220320035l","relation":{},"ISSN":["1820-0214","2406-1018"],"issn-type":[{"value":"1820-0214","type":"print"},{"value":"2406-1018","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022]]}}}