{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,8]],"date-time":"2026-04-08T16:27:11Z","timestamp":1775665631673,"version":"3.50.1"},"reference-count":38,"publisher":"Walter de Gruyter GmbH","issue":"1","license":[{"start":{"date-parts":[[2024,1,1]],"date-time":"2024-01-01T00:00:00Z","timestamp":1704067200000},"content-version":"unspecified","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2024,7,9]]},"abstract":"<jats:title>Abstract<\/jats:title>\n               <jats:p>Foreground segmentation (FS) plays a fundamental role in computer vision, but it remains a challenging task in dynamic backgrounds. Supervised methods have achieved good results, but their generalization ability still needs improvement. To address this challenge and improve the performance of FS in dynamic scenarios, a simple yet effective method named SPF-CNN is proposed that leverages superpixel features and a one-dimensional convolutional neural network (1D-CNN). SPF-CNN involves several steps. First, the Iterated Robust CUR (IRCUR) algorithm is utilized to obtain candidate foregrounds for an image sequence. Simultaneously, the image sequence is segmented using simple linear iterative clustering. Next, the proposed feature extraction approach is applied to the candidate matrix region corresponding to each superpixel block. Finally, the 1D-CNN is trained on the obtained superpixel features. Experimental results demonstrate the effectiveness of SPF-CNN, which also exhibits strong generalization capabilities. 
The average <jats:italic>F<\/jats:italic>1-score reaches 0.83.<\/jats:p>","DOI":"10.1515\/comp-2024-0009","type":"journal-article","created":{"date-parts":[[2024,7,9]],"date-time":"2024-07-09T13:51:02Z","timestamp":1720533062000},"source":"Crossref","is-referenced-by-count":2,"title":["Moving object detection via feature extraction and classification"],"prefix":"10.1515","volume":"14","author":[{"given":"Yang","family":"Li","sequence":"first","affiliation":[{"name":"School of Computer Science and Engineering, Tianjin University of Technology, Tianjin, China"},{"name":"School of IoT Engineering, Jiangsu Vocational College of Information Technology, Jiangsu, China"}]}],"member":"374","published-online":{"date-parts":[[2024,7,9]]},"reference":[{"key":"2024070913505291476_j_comp-2024-0009_ref_001","doi-asserted-by":"crossref","unstructured":"Y.-F. Li, L. Liu, J.-X. Song, Z. Zhang, and X. Chen, \u201cCombination of local binary pattern operator with sample consensus model for moving objects detection,\u201d Infrared Phys. Technol., vol. 92, pp. 44\u201352, 2018.","DOI":"10.1016\/j.infrared.2018.05.009"},{"key":"2024070913505291476_j_comp-2024-0009_ref_002","doi-asserted-by":"crossref","unstructured":"O. Tezcan, P. Ishwar, and J. Konrad, \u201cBSUV-Net: a fully-convolutional neural network for background subtraction of unseen videos,\u201d in: The IEEE Winter Conference on Applications of Computer Vision, 2020, pp. 2774\u20132783.","DOI":"10.1109\/WACV45572.2020.9093464"},{"key":"2024070913505291476_j_comp-2024-0009_ref_003","doi-asserted-by":"crossref","unstructured":"D. Sakkos, H. Liu, J. Han, and L. Shao, \u201cEnd-to-end video background subtraction with 3D convolutional neural networks,\u201d Multimedia Tools Appl., vol. 77, no. 17, pp. 23023\u201323041, 2018.","DOI":"10.1007\/s11042-017-5460-9"},{"key":"2024070913505291476_j_comp-2024-0009_ref_004","doi-asserted-by":"crossref","unstructured":"J. Liao, G. Guo, Y. Yan, and H. 
Wang, \u201cMultiscale cascaded scene-specific convolutional neural networks for background subtraction,\u201d in: Pacific Rim Conference on Multimedia, Springer, 2018, pp. 524\u2013533.","DOI":"10.1007\/978-3-030-00776-8_48"},{"key":"2024070913505291476_j_comp-2024-0009_ref_005","doi-asserted-by":"crossref","unstructured":"L. A. Lim and H. Y. Keles, \u201cLearning multi-scale features for foreground segmentation,\u201d Pattern Anal. Appl., vol. 23, no. 3, pp. 1369\u20131380, 2020.","DOI":"10.1007\/s10044-019-00845-9"},{"key":"2024070913505291476_j_comp-2024-0009_ref_006","doi-asserted-by":"crossref","unstructured":"D. Liang, Z. Wei, H. Sun, and H. Zhou, \u201cRobust cross-scene foreground segmentation in surveillance video,\u201d in: 2021 IEEE International Conference on Multimedia and Expo (ICME), IEEE, 2021, pp. 1\u20136.","DOI":"10.1109\/ICME51207.2021.9428086"},{"key":"2024070913505291476_j_comp-2024-0009_ref_007","doi-asserted-by":"crossref","unstructured":"O. Barnich and M. Van Droogenbroeck, \u201cViBe: A universal background subtraction algorithm for video sequences,\u201d IEEE Trans. Image Process., vol. 20, no. 6, pp. 1709\u20131724, 2010.","DOI":"10.1109\/TIP.2010.2101613"},{"key":"2024070913505291476_j_comp-2024-0009_ref_008","unstructured":"C. Stauffer and W. E. L. Grimson, \u201cAdaptive background mixture models for real-time tracking,\u201d in: Proceedings. 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149), vol. 2, IEEE, 1999, pp. 246\u2013252."},{"key":"2024070913505291476_j_comp-2024-0009_ref_009","doi-asserted-by":"crossref","unstructured":"M. Hofmann, P. Tiefenbacher, and G. Rigoll, \u201cBackground segmentation with feedback: The pixel-based adaptive segmenter,\u201d in: 2012 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, IEEE, 2012, pp. 
38\u201343.","DOI":"10.1109\/CVPRW.2012.6238925"},{"key":"2024070913505291476_j_comp-2024-0009_ref_010","doi-asserted-by":"crossref","unstructured":"E. J. Cand\u00e8s, X. Li, Y. Ma, and J. Wright, \u201cRobust principal component analysis?,\u201d J. ACM, vol. 58, no. 3, pp. 1\u201337, 2011.","DOI":"10.1145\/1970392.1970395"},{"key":"2024070913505291476_j_comp-2024-0009_ref_011","doi-asserted-by":"crossref","unstructured":"S. E. Ebadi and E. Izquierdo, \u201cForeground segmentation with tree-structured sparse RPCA,\u201d IEEE Trans. Pattern Anal. Machine Intelligence, vol. 40, no. 9, pp. 2273\u20132280, 2018.","DOI":"10.1109\/TPAMI.2017.2745573"},{"key":"2024070913505291476_j_comp-2024-0009_ref_012","doi-asserted-by":"crossref","unstructured":"J. Wang, G. Xu, C. Li, Z. Wang, and F. Yan, \u201cSurface defects detection using non-convex total variation regularized RPCA with kernelization,\u201d IEEE Trans. Instrument. Measurement, vol. 70, pp. 1\u201313, 2021.","DOI":"10.1109\/TIM.2021.3056738"},{"key":"2024070913505291476_j_comp-2024-0009_ref_013","doi-asserted-by":"crossref","unstructured":"Y. Guo, G. Liao, J. Li, and X. Chen, \u201cA novel moving target detection method based on RPCA for SAR systems,\u201d IEEE Trans. Geosci. Remote Sens., vol. 58, no. 9, pp. 6677\u20136690, 2020.","DOI":"10.1109\/TGRS.2020.2978496"},{"key":"2024070913505291476_j_comp-2024-0009_ref_014","doi-asserted-by":"crossref","unstructured":"D. Giveki, \u201cRobust moving object detection based on fusing Atanassov\u2019s intuitionistic 3D fuzzy histon roughness index and texture features,\u201d Int. J. Approximate Reasoning, vol. 135, pp. 1\u201320, 2021.","DOI":"10.1016\/j.ijar.2021.04.007"},{"key":"2024070913505291476_j_comp-2024-0009_ref_015","doi-asserted-by":"crossref","unstructured":"Y. Wang, Z. Luo, and P.-M. Jodoin, \u201cInteractive deep learning method for segmenting moving objects,\u201d Pattern Recognit. Lett., vol. 96, pp. 
66\u201375, 2017.","DOI":"10.1016\/j.patrec.2016.09.014"},{"key":"2024070913505291476_j_comp-2024-0009_ref_016","doi-asserted-by":"crossref","unstructured":"M. Sultana, A. Mahmood, S. Javed, and S. K. Jung, \u201cUnsupervised deep context prediction for background estimation and foreground segmentation,\u201d Machine Vision Appl., vol. 30, no. 3, pp. 375\u2013395, 2019.","DOI":"10.1007\/s00138-018-0993-0"},{"key":"2024070913505291476_j_comp-2024-0009_ref_017","doi-asserted-by":"crossref","unstructured":"M. Braham and M. Van Droogenbroeck, \u201cDeep background subtraction with scene-specific convolutional neural networks,\u201d in: 2016 International Conference on Systems, Signals and Image Processing (IWSSIP), IEEE, 2016, pp. 1\u20134.","DOI":"10.1109\/IWSSIP.2016.7502717"},{"key":"2024070913505291476_j_comp-2024-0009_ref_018","doi-asserted-by":"crossref","unstructured":"L. A. Lim and H. Y. Keles, \u201cForeground segmentation using convolutional neural networks for multiscale feature encoding,\u201d Pattern Recognit. Lett., vol. 112, pp. 256\u2013262, 2018.","DOI":"10.1016\/j.patrec.2018.08.002"},{"key":"2024070913505291476_j_comp-2024-0009_ref_019","doi-asserted-by":"crossref","unstructured":"G. Rahmon, F. Bunyak, G. Seetharaman, and K. Palaniappan, \u201cMotion U-Net: Multi-cue encoder-decoder network for motion segmentation,\u201d in: 2020 25th International Conference on Pattern Recognition (ICPR), IEEE, 2021, pp. 8125\u20138132.","DOI":"10.1109\/ICPR48806.2021.9413211"},{"key":"2024070913505291476_j_comp-2024-0009_ref_020","doi-asserted-by":"crossref","unstructured":"T. Liu, \u201cMoving object detection in dynamic environment via weighted low-rank structured sparse RPCA and Kalman filtering,\u201d Math. Problems Eng., vol. 2022, pp. 1\u201311, 2022.","DOI":"10.1155\/2022\/7087130"},{"key":"2024070913505291476_j_comp-2024-0009_ref_021","doi-asserted-by":"crossref","unstructured":"\u015e. I\u015f\u0131k, K. \u00d6zkan, and \u00d6. N. 
Gerek, \u201cCVABS: moving object segmentation with common vector approach for videos,\u201d IET Comput. Vision, vol. 13, no. 8, pp. 719\u2013729, 2019.","DOI":"10.1049\/iet-cvi.2018.5642"},{"key":"2024070913505291476_j_comp-2024-0009_ref_022","doi-asserted-by":"crossref","unstructured":"O. Oreifej, X. Li, and M. Shah, \u201cSimultaneous video stabilization and moving object detection in turbulence,\u201d IEEE Trans. Pattern Anal. Machine Intelligence, vol. 35, no. 2, pp. 450\u2013462, 2012.","DOI":"10.1109\/TPAMI.2012.97"},{"key":"2024070913505291476_j_comp-2024-0009_ref_023","doi-asserted-by":"crossref","unstructured":"Y. Li, G. Liu, Q. Liu, Y. Sun, and S. Chen, \u201cMoving object detection via segmentation and saliency constrained RPCA,\u201d Neurocomputing, vol. 323, pp. 352\u2013362, 2019.","DOI":"10.1016\/j.neucom.2018.10.012"},{"key":"2024070913505291476_j_comp-2024-0009_ref_024","doi-asserted-by":"crossref","unstructured":"S. Javed, A. Mahmood, S. Al-Maadeed, T. Bouwmans, and S. K. Jung, \u201cMoving object detection in complex scene using spatiotemporal structured-sparse RPCA,\u201d IEEE Trans. Image Processing, vol. 28, no. 2, pp. 1007\u20131022, 2019.","DOI":"10.1109\/TIP.2018.2874289"},{"key":"2024070913505291476_j_comp-2024-0009_ref_025","doi-asserted-by":"crossref","unstructured":"H. Cai, K. Hamm, L. Huang, J. Li, and T. Wang, \u201cRapid robust principal component analysis: CUR accelerated inexact low rank estimation,\u201d IEEE Signal Processing Letters, vol. 28, pp. 116\u2013120, 2020.","DOI":"10.1109\/LSP.2020.3044130"},{"key":"2024070913505291476_j_comp-2024-0009_ref_026","doi-asserted-by":"crossref","unstructured":"J. Zhao, R. Bo, Q. Hou, M.-M. Cheng, and P. Rosin, \u201cFLIC: Fast linear iterative clustering with active search,\u201d Comput. Visual Media, vol. 4, no. 4, pp. 333\u2013348, 2018.","DOI":"10.1007\/s41095-018-0123-y"},{"key":"2024070913505291476_j_comp-2024-0009_ref_027","doi-asserted-by":"crossref","unstructured":"P. 
Sulewski, \u201cEqual-bin-width histogram versus equal-bin-count histogram,\u201d J. Appl. Stat., vol. 48, no. 12, pp. 2092\u20132111, 2021.","DOI":"10.1080\/02664763.2020.1784853"},{"key":"2024070913505291476_j_comp-2024-0009_ref_028","doi-asserted-by":"crossref","unstructured":"Y. Wang, P.-M. Jodoin, F. Porikli, J. Konrad, Y. Benezeth, and P. Ishwar, \u201cCDnet 2014: An expanded change detection benchmark dataset,\u201d in: 2014 IEEE Conference on Computer Vision and Pattern Recognition Workshops, pp. 393\u2013400, 2014.","DOI":"10.1109\/CVPRW.2014.126"},{"key":"2024070913505291476_j_comp-2024-0009_ref_029","doi-asserted-by":"crossref","unstructured":"Y. Li, \u201cMoving object detection for unseen videos via truncated weighted robust principal component analysis and salience convolution neural network,\u201d Multimedia Tools Appl., vol. 81, pp. 1\u201312, 2022.","DOI":"10.1007\/s11042-022-12832-0"},{"key":"2024070913505291476_j_comp-2024-0009_ref_030","doi-asserted-by":"crossref","unstructured":"S. Isik, K. \u00d6zkan, S. G\u00fcnal, and \u00d6. N. Gerek, \u201cSWCD: a sliding window and self-regulated learning-based background updating method for change detection in videos,\u201d J. Electronic Imaging, vol. 27, no. 2, p. 023002, 2018.","DOI":"10.1117\/1.JEI.27.2.023002"},{"key":"2024070913505291476_j_comp-2024-0009_ref_031","doi-asserted-by":"crossref","unstructured":"A. Elgammal, D. Harwood, and L. Davis, \u201cNon-parametric model for background subtraction,\u201d in: European Conference on Computer Vision, Springer, Berlin, Heidelberg, 2000, pp. 751\u2013767.","DOI":"10.1007\/3-540-45053-X_48"},{"key":"2024070913505291476_j_comp-2024-0009_ref_032","doi-asserted-by":"crossref","unstructured":"D. Liang, B. Kang, X. Liu, P. Gao, X. Tan, and S. Kaneko, \u201cCross-scene foreground segmentation with supervised and unsupervised model communication,\u201d Pattern Recognition, vol. 117, p. 
107995, 2021.","DOI":"10.1016\/j.patcog.2021.107995"},{"key":"2024070913505291476_j_comp-2024-0009_ref_033","doi-asserted-by":"crossref","unstructured":"W. Zhou, S. Kaneko, M. Hashimoto, Y. Satoh, and D. Liang, \u201cForeground detection based on co-occurrence background model with hypothesis on degradation modification in dynamic scenes,\u201d Signal Processing, vol. 160, pp. 66\u201379, 2019.","DOI":"10.1016\/j.sigpro.2019.02.021"},{"key":"2024070913505291476_j_comp-2024-0009_ref_034","doi-asserted-by":"crossref","unstructured":"S. M. Roy and A. Ghosh, \u201cForeground segmentation using adaptive 3 phase background model,\u201d IEEE Trans. Intelligent Transport. Syst., vol. 21, no. 6, pp. 2287\u20132296, 2019.","DOI":"10.1109\/TITS.2019.2915568"},{"key":"2024070913505291476_j_comp-2024-0009_ref_035","doi-asserted-by":"crossref","unstructured":"M. Sultana, A. Mahmood, S. Javed, and S. K. Jung, \u201cUnsupervised moving object detection in complex scenes using adversarial regularizations,\u201d IEEE Trans. Multimedia, vol. 23, pp. 2005\u20132018, 2020.","DOI":"10.1109\/TMM.2020.3006419"},{"key":"2024070913505291476_j_comp-2024-0009_ref_036","doi-asserted-by":"crossref","unstructured":"Q. Qi, X. Yu, P. Lei, W. He, G. Zhang, J. Wu, and B. Tu, \u201cBackground subtraction via regional multi-feature-frequency model in complex scenes,\u201d Soft Computing, vol. 27, no. 20, pp. 15305\u201315318, 2023.","DOI":"10.1007\/s00500-023-07955-x"},{"key":"2024070913505291476_j_comp-2024-0009_ref_037","doi-asserted-by":"crossref","unstructured":"Y. Yang, Z. Yang, and J. Li, \u201cNovel RPCA with nonconvex logarithm and truncated fraction norms for moving object detection,\u201d Digit. Signal Process., vol. 133, p. 103892, 2023.","DOI":"10.1016\/j.dsp.2022.103892"},{"key":"2024070913505291476_j_comp-2024-0009_ref_038","unstructured":"C. Stauffer and W. E. L. Grimson, \u201cAdaptive background mixture models for real-time tracking,\u201d in: Proceedings. 
1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No PR00149), vol. 2, IEEE, 1999, pp. 246\u2013252."}],"container-title":["Open Computer Science"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.degruyter.com\/document\/doi\/10.1515\/comp-2024-0009\/xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.degruyter.com\/document\/doi\/10.1515\/comp-2024-0009\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,7,9]],"date-time":"2024-07-09T13:51:32Z","timestamp":1720533092000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.degruyter.com\/document\/doi\/10.1515\/comp-2024-0009\/html"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,1,1]]},"references-count":38,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2024,7,9]]},"published-print":{"date-parts":[[2024,7,9]]}},"alternative-id":["10.1515\/comp-2024-0009"],"URL":"https:\/\/doi.org\/10.1515\/comp-2024-0009","relation":{},"ISSN":["2299-1093"],"issn-type":[{"value":"2299-1093","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,1,1]]},"article-number":"20240009"}}