{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T04:57:43Z","timestamp":1760245063803,"version":"3.37.3"},"reference-count":72,"publisher":"Springer Science and Business Media LLC","issue":"5","license":[{"start":{"date-parts":[[2022,3,25]],"date-time":"2022-03-25T00:00:00Z","timestamp":1648166400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,3,25]],"date-time":"2022-03-25T00:00:00Z","timestamp":1648166400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Complex Intell. Syst."],"published-print":{"date-parts":[[2022,10]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Video foreground detection (VFD), as one of the basic pre-processing tasks, is very essential for subsequent target tracking and recognition. However, due to the interference of shadow, dynamic background, and camera jitter, constructing a suitable detection network is still challenging. Recently, convolution neural networks have proved its reliability in many fields with their powerful feature extraction ability. Therefore, an interactive spatio-temporal feature learning network (ISFLN) for VFD is proposed in this paper. First, we obtain the deep and shallow spatio-temporal information of two paths with multi-level and multi-scale. The deep feature is conducive to enhancing feature identification capabilities, while the shallow feature is dedicated to fine boundary segmentation. Specifically, an interactive multi-scale feature extraction module (IMFEM) is designed to facilitate the information transmission between different types of features. 
Then, a multi-level feature enhancement module (MFEM), which provides precise object knowledge for the decoder, is proposed to guide the encoded information of each layer using the fused spatio-temporal difference features. Experimental results on the LASIESTA, CDnet2014, INO, and AICD datasets demonstrate that the proposed ISFLN is more effective than existing advanced methods.<\/jats:p>","DOI":"10.1007\/s40747-022-00712-x","type":"journal-article","created":{"date-parts":[[2022,3,25]],"date-time":"2022-03-25T04:39:15Z","timestamp":1648183155000},"page":"4251-4263","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["Interactive spatio-temporal feature learning network for video foreground detection"],"prefix":"10.1007","volume":"8","author":[{"given":"Hongrui","family":"Zhang","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-2336-4803","authenticated-orcid":false,"given":"Huan","family":"Li","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,3,25]]},"reference":[{"key":"712_CR1","doi-asserted-by":"crossref","first-page":"326","DOI":"10.1016\/j.ins.2021.02.004","volume":"565","author":"X Tang","year":"2021","unstructured":"Tang X, Tu W, Li K, Cheng J (2021) DFFNet: an IoT-perceptive dual feature fusion network for general real-time semantic segmentation. Inf Sci 565:326\u2013343","journal-title":"Inf Sci"},{"issue":"3","key":"712_CR2","doi-asserted-by":"crossref","first-page":"431","DOI":"10.1109\/LGRS.2020.2975541","volume":"18","author":"G Cheng","year":"2021","unstructured":"Cheng G, Si Y, Hong H, Yao X, Guo L (2021) Cross-scale feature fusion for object detection in optical remote sensing images. 
IEEE Geosci Remote Sens 18(3):431\u2013435","journal-title":"IEEE Geosci Remote Sens"},{"key":"712_CR3","doi-asserted-by":"crossref","first-page":"137","DOI":"10.1016\/j.sigpro.2017.12.008","volume":"145","author":"M Zhang","year":"2018","unstructured":"Zhang M, Yang Y, Ji Y, Xie N, Shen F (2018) Recurrent attention network using spatial-temporal relations for action recognition. Signal Process 145:137\u2013145","journal-title":"Signal Process"},{"issue":"14","key":"712_CR4","doi-asserted-by":"crossref","first-page":"16183","DOI":"10.1109\/JSEN.2021.3075722","volume":"21","author":"F Li","year":"2021","unstructured":"Li F, Zhu A, Liu Z, Huo Y, Xu Y, Hua G (2021) Pyramidal graph convolutional network for skeleton-based human action recognition. IEEE Sens J 21(14):16183\u201316191","journal-title":"IEEE Sens J"},{"issue":"3","key":"712_CR5","doi-asserted-by":"crossref","first-page":"469","DOI":"10.1007\/s40747-020-00140-9","volume":"6","author":"S Hua","year":"2020","unstructured":"Hua S, Wang C, Xie Z, Wu X (2020) A force levels and gestures integrated multi-task strategy for neural decoding. Complex Intell Syst 6(3):469\u2013478","journal-title":"Complex Intell Syst"},{"key":"712_CR6","doi-asserted-by":"crossref","first-page":"63971","DOI":"10.1109\/ACCESS.2020.2984680","volume":"8","author":"H Zhang","year":"2020","unstructured":"Zhang H, Qu S, Li H, Luo J, Xu W (2020) A moving shadow elimination method based on fusion of multi-feature. IEEE Access 8:63971\u201363982","journal-title":"IEEE Access"},{"key":"712_CR7","doi-asserted-by":"publisher","DOI":"10.1007\/s00521-021-05870-6","author":"Z Wang","year":"2021","unstructured":"Wang Z, Ma Y (2021) Detection and recognition of stationary vehicles and seat belts in intelligent Internet of Things traffic management system. Neural Comput Appl. 
https:\/\/doi.org\/10.1007\/s00521-021-05870-6","journal-title":"Neural Comput Appl"},{"issue":"4","key":"712_CR8","doi-asserted-by":"crossref","first-page":"518","DOI":"10.1109\/TCSVT.2009.2035843","volume":"20","author":"C Chiu","year":"2010","unstructured":"Chiu C, Ku M, Liang L (2010) A robust object segmentation system using a probability-based background extraction algorithm. IEEE Trans Circ Syst Video Technol 20(4):518\u2013528","journal-title":"IEEE Trans Circ Syst Video Technol"},{"issue":"7","key":"712_CR9","doi-asserted-by":"crossref","first-page":"1933","DOI":"10.1109\/TCSVT.2018.2854273","volume":"29","author":"C Zhao","year":"2019","unstructured":"Zhao C, Sain A, Qu Y, Ge Y, Hu H (2019) Background subtraction based on integration of alternative cues in freely moving camera. IEEE Trans Circ Syst Video Technol 29(7):1933\u20131945","journal-title":"IEEE Trans Circ Syst Video Technol"},{"key":"712_CR10","volume":"207","author":"Y Xu","year":"2020","unstructured":"Xu Y, Ji H, Zhang W (2020) Coarse-to-fine sample-based background subtraction for moving object detection. Optik 207:164195","journal-title":"Optik"},{"key":"712_CR11","doi-asserted-by":"crossref","first-page":"1747","DOI":"10.1007\/s11760-021-01920-7","volume":"15","author":"S Qu","year":"2021","unstructured":"Qu S, Zhang H, Wu W, Xu W, Li Y (2021) Symmetric pyramid attention convolutional neural network for moving object detection. Signal Image Video Process 15:1747\u20131755","journal-title":"Signal Image Video Process"},{"key":"712_CR12","doi-asserted-by":"crossref","unstructured":"Stauffer C, Grimson WEL (1999) Adaptive background mixture models for real-time tracking. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 246\u2013252","DOI":"10.1109\/CVPR.1999.784637"},{"key":"712_CR13","doi-asserted-by":"crossref","unstructured":"Elgammal A, Harwood D (2000) Non-parametric Model for Background Subtraction. 
In: Proceedings of the European conference on computer vision, pp 751\u2013767","DOI":"10.1007\/3-540-45053-X_48"},{"issue":"11","key":"712_CR14","doi-asserted-by":"crossref","first-page":"3488","DOI":"10.1016\/j.patcog.2015.04.016","volume":"48","author":"S Varadarajan","year":"2015","unstructured":"Varadarajan S, Miller P, Zhou H (2015) Region-based mixture of Gaussians modelling for foreground detection in dynamic scenes. Pattern Recogn 48(11):3488\u20133503","journal-title":"Pattern Recogn"},{"issue":"7","key":"712_CR15","doi-asserted-by":"crossref","first-page":"3192","DOI":"10.1109\/TIP.2019.2894966","volume":"28","author":"S Minaee","year":"2019","unstructured":"Minaee S, Wang Y (2019) An ADMM approach to masked signal decomposition using subspace representation. IEEE Trans Image Process 28(7):3192\u20133204","journal-title":"IEEE Trans Image Process"},{"key":"712_CR16","doi-asserted-by":"crossref","first-page":"256","DOI":"10.1016\/j.patrec.2018.08.002","volume":"112","author":"LA Lim","year":"2018","unstructured":"Lim LA, Keles HY (2018) Foreground segmentation using convolutional neural networks for multiscale feature encoding. Pattern Recogn Lett 112:256\u2013262","journal-title":"Pattern Recogn Lett"},{"issue":"4","key":"712_CR17","doi-asserted-by":"crossref","first-page":"617","DOI":"10.1109\/LGRS.2018.2797538","volume":"15","author":"D Zeng","year":"2018","unstructured":"Zeng D, Zhu M (2018) Multiscale fully convolutional network for foreground object detection in infrared videos. IEEE Geosci Remote S 15(4):617\u2013621","journal-title":"IEEE Geosci Remote S"},{"key":"712_CR18","doi-asserted-by":"crossref","first-page":"66","DOI":"10.1016\/j.patrec.2016.09.014","volume":"96","author":"Y Wang","year":"2017","unstructured":"Wang Y, Luo Z, Jodoin P (2017) Interactive deep learning method for segmenting moving objects. 
Pattern Recogn Lett 96:66\u201375","journal-title":"Pattern Recogn Lett"},{"issue":"9","key":"712_CR19","doi-asserted-by":"crossref","first-page":"2567","DOI":"10.1109\/TCSVT.2017.2770319","volume":"29","author":"Y Chen","year":"2017","unstructured":"Chen Y, Wang J, Zhu B, Tang M, Lu H (2017) Pixelwise deep sequence learning for moving object detection. IEEE Trans Circ Syst Video Technol 29(9):2567\u20132579","journal-title":"IEEE Trans Circ Syst Video Technol"},{"key":"712_CR20","doi-asserted-by":"crossref","unstructured":"Tezcan MO, Ishwar P, Konrad J (2020) BSUV-net: a fully-convolutional neural network for background subtraction of unseen videos. In: Proceedings of the IEEE winter conference on applications of computer vision (WACV), pp 2774\u20132783","DOI":"10.1109\/WACV45572.2020.9093464"},{"issue":"10","key":"712_CR21","doi-asserted-by":"crossref","first-page":"4435","DOI":"10.1109\/TITS.2019.2940547","volume":"21","author":"T Akilan","year":"2020","unstructured":"Akilan T, Wu QMJ (2020) sEnDec: an improved image to image CNN for foreground localization. IEEE Trans Intell Transp 21(10):4435\u20134443","journal-title":"IEEE Trans Intell Transp"},{"issue":"6","key":"712_CR22","doi-asserted-by":"crossref","first-page":"1709","DOI":"10.1109\/TIP.2010.2101613","volume":"20","author":"O Barnich","year":"2011","unstructured":"Barnich O, Droogenbroeck M (2011) ViBe: a universal background subtraction algorithm for video sequences. IEEE Trans Image Process 20(6):1709\u20131724","journal-title":"IEEE Trans Image Process"},{"key":"712_CR23","doi-asserted-by":"crossref","first-page":"205","DOI":"10.1016\/j.infrared.2018.08.003","volume":"93","author":"Y Zhao","year":"2018","unstructured":"Zhao Y (2018) ALI-TM: a moving objects detection algorithm for infrared images with dynamic background. 
Infrared Phys Techn 93:205\u2013212","journal-title":"Infrared Phys Techn"},{"issue":"7","key":"712_CR24","doi-asserted-by":"crossref","first-page":"3249","DOI":"10.1109\/TIP.2017.2695882","volume":"26","author":"H Sajid","year":"2017","unstructured":"Sajid H, Cheung S (2017) Universal multimode background subtraction. IEEE Trans Image Process 26(7):3249\u20133260","journal-title":"IEEE Trans Image Process"},{"issue":"6","key":"712_CR25","doi-asserted-by":"crossref","first-page":"2287","DOI":"10.1109\/TITS.2019.2915568","volume":"21","author":"SM Roy","year":"2020","unstructured":"Roy SM, Ghosh A (2020) Foreground segmentation using adaptive 3 phase background model. IEEE Trans Intell Transp 21(6):2287\u20132296","journal-title":"IEEE Trans Intell Transp"},{"issue":"2","key":"712_CR26","doi-asserted-by":"crossref","first-page":"1004","DOI":"10.1109\/TCYB.2019.2921827","volume":"51","author":"AJ Tom","year":"2021","unstructured":"Tom AJ, George SN (2021) A three-way optimization technique for noise robust moving object detection using tensor low-rank approximation, l1\/2, and TTV regularizations. IEEE Trans Cybern 51(2):1004\u20131014","journal-title":"IEEE Trans Cybern"},{"key":"712_CR27","doi-asserted-by":"crossref","first-page":"8326","DOI":"10.1109\/TIP.2020.3013162","volume":"29","author":"T Zhou","year":"2020","unstructured":"Zhou T, Li J, Wang S, Tao R, Shen J (2020) MATNet: motion-attentive transition network for zero-shot video object segmentation. IEEE Trans Image Process 29:8326\u20138338","journal-title":"IEEE Trans Image Process"},{"key":"712_CR28","doi-asserted-by":"crossref","first-page":"9017","DOI":"10.1109\/TIP.2020.3023591","volume":"29","author":"B Wang","year":"2020","unstructured":"Wang B, Liu W, Han G, He S (2020) Learning long-term structural dependencies for video salient object detection. 
IEEE Trans Image Process 29:9017\u20139031","journal-title":"IEEE Trans Image Process"},{"key":"712_CR29","doi-asserted-by":"crossref","first-page":"3530","DOI":"10.1109\/TMM.2020.3026913","volume":"23","author":"K Xu","year":"2020","unstructured":"Xu K, Wen L, Li G, Huang Q (2020) Self-supervised deep TripleNet for video object segmentation. IEEE Trans Multimed 23:3530\u20133539","journal-title":"IEEE Trans Multimed"},{"key":"712_CR30","doi-asserted-by":"crossref","unstructured":"Lu X, Wang W, Shen J, Tai Y, Crandall D, Hoi S (2020) Learning video object segmentation from unlabeled videos. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 8957\u20138967","DOI":"10.1109\/CVPR42600.2020.00898"},{"issue":"12","key":"712_CR31","doi-asserted-by":"crossref","first-page":"14550","DOI":"10.1109\/TVT.2020.3043575","volume":"69","author":"PW Patil","year":"2020","unstructured":"Patil PW, Dudhane A, Murala S (2020) End-to-end recurrent generative adversarial network for traffic and surveillance applications. IEEE Trans Veh Technol 69(12):14550\u201314562","journal-title":"IEEE Trans Veh Technol"},{"key":"712_CR32","doi-asserted-by":"crossref","unstructured":"Akula A, Singh A, Ghosh R, Kumar S, Sardana HK (2016) Target recognition in infrared imagery using convolutional neural network. In: Proceedings of the international conference on computer vision and image processing, pp 25\u201334","DOI":"10.1007\/978-981-10-2107-7_3"},{"key":"712_CR33","doi-asserted-by":"crossref","unstructured":"Patil PW, Murala S, Dhall A, Chaudhary S (2018) MsEDNet: multi-scale deep saliency learning for moving object detection. 
In: Proceedings of the IEEE international conference on systems, man, and cybernetics, pp 1670\u20131675","DOI":"10.1109\/SMC.2018.00289"},{"issue":"1","key":"712_CR34","doi-asserted-by":"crossref","first-page":"254","DOI":"10.1109\/TITS.2017.2754099","volume":"19","author":"L Yang","year":"2018","unstructured":"Yang L, Li J, Luo Y, Zhao Y, Cheng H, Li J (2018) Deep background modeling using fully convolutional network. IEEE Trans Intell Transp 19(1):254\u2013262","journal-title":"IEEE Trans Intell Transp"},{"key":"712_CR35","unstructured":"Guerra VM, Rouco J, Novo J (2019) An end-to-end deep learning approach for simultaneous background modeling and subtraction. In: Proceedings of the British machine vision conference, pp 1\u201312"},{"key":"712_CR36","doi-asserted-by":"crossref","DOI":"10.1016\/j.eswa.2020.114450","volume":"169","author":"Z Huang","year":"2021","unstructured":"Huang Z, Li W, Li J, Zhou D (2021) Dual-path attention network for single image super-resolution. Expert Syst Appl 169:114450","journal-title":"Expert Syst Appl"},{"key":"712_CR37","doi-asserted-by":"crossref","unstructured":"Fu J, Liu J, Tian H (2019) Dual attention network for scene segmentation. In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 3141\u20133149","DOI":"10.1109\/CVPR.2019.00326"},{"key":"712_CR38","doi-asserted-by":"crossref","first-page":"163","DOI":"10.1109\/TIP.2020.3033158","volume":"30","author":"B Xiao","year":"2021","unstructured":"Xiao B, Xu B, Bi X, Li W (2021) Global-feature encoding U-Net (GEU-Net) for multi-focus image fusion. IEEE Trans Image Process 30:163\u2013175","journal-title":"IEEE Trans Image Process"},{"key":"712_CR39","doi-asserted-by":"crossref","unstructured":"Minematsu T, Shimada A, Taniguchi R (2019) Simple background subtraction constraint for weakly supervised background subtraction network. 
In: Proceedings of the ieee international conference on advanced video and signal based surveillance (AVSS), pp 1\u20138","DOI":"10.1109\/AVSS.2019.8909896"},{"issue":"17","key":"712_CR40","doi-asserted-by":"crossref","first-page":"23023","DOI":"10.1007\/s11042-017-5460-9","volume":"77","author":"D Sakkos","year":"2018","unstructured":"Sakkos D, Liu H, Han J, Shao L (2018) End-to-end video background subtraction with 3d convolutional neural networks. Multimed Tools Appl 77(17):23023\u201323041","journal-title":"Multimed Tools Appl"},{"issue":"3","key":"712_CR41","doi-asserted-by":"crossref","first-page":"959","DOI":"10.1109\/TITS.2019.2900426","volume":"21","author":"T Akilan","year":"2020","unstructured":"Akilan T, Wu QJ, Safaei A, Huo J, Yang Y (2020) A 3D CNN-LSTM-based image-to-image foreground segmentation. IEEE Trans Intell Transp 21(3):959\u2013971","journal-title":"IEEE Trans Intell Transp"},{"issue":"11","key":"712_CR42","doi-asserted-by":"crossref","first-page":"4192","DOI":"10.1109\/TCSVT.2019.2951778","volume":"30","author":"C Zhao","year":"2020","unstructured":"Zhao C, Basu A (2020) Dynamic deep pixel distribution learning for background subtraction. IEEE Trans Circ Syst Vid 30(11):4192\u20134206","journal-title":"IEEE Trans Circ Syst Vid"},{"key":"712_CR43","doi-asserted-by":"crossref","unstructured":"Bakkay MC, Rashwan HA, Salmane H, Khoudour L, Puig D, Ruichek Y (2018), BScGAN: deep background subtraction with conditional generative adversarial networks. In: Proceedings of the IEEE international conference on image processing (ICIP), pp 4018\u20134022","DOI":"10.1109\/ICIP.2018.8451603"},{"key":"712_CR44","doi-asserted-by":"crossref","unstructured":"Patil PW, Biradar K, Dudhane A, Murala S (2020) An end-to-end edge aggregation network for moving object segmentation. 
In: Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR), pp 8146\u20138155","DOI":"10.1109\/CVPR42600.2020.00817"},{"key":"712_CR45","volume":"94","author":"S Li","year":"2020","unstructured":"Li S (2020) Change detection in images using shape-aware siamese convolutional network. Eng Appl Artif Intel 94:103819","journal-title":"Eng Appl Artif Intel"},{"key":"712_CR46","doi-asserted-by":"crossref","unstructured":"Dosovitskiy A, Brox T (2016) Inverting visual representations with convolutional networks. In: Proceedings of the IEEE computer society conference on computer vision and pattern recognition, pp 4829\u20134837","DOI":"10.1109\/CVPR.2016.522"},{"issue":"34","key":"712_CR47","doi-asserted-by":"crossref","first-page":"4020004","DOI":"10.1061\/(ASCE)CP.1943-5487.0000883","volume":"3","author":"K Zhang","year":"2020","unstructured":"Zhang K, Zhang Y, Cheng H (2020) Self-supervised structure learning for crack detection based on cycle-consistent generative adversarial networks. J Comput Civil Eng 3(34):4020004","journal-title":"J Comput Civil Eng"},{"key":"712_CR48","doi-asserted-by":"crossref","first-page":"198","DOI":"10.1016\/j.future.2020.02.055","volume":"108","author":"T Han","year":"2020","unstructured":"Han T, Ivo RF, Rodrigues D, Peixoto SA, Albuquerque V, Filho P (2020) Cascaded volumetric fully convolutional networks for whole-heart and great vessel 3D segmentation. Future Gener Comput Syst 108:198\u2013209","journal-title":"Future Gener Comput Syst"},{"key":"712_CR49","volume":"67","author":"K Gao","year":"2021","unstructured":"Gao K (2021) Dual-branch combination network (DCN): towards accurate diagnosis and lesion segmentation of COVID-19 using CT images. 
Med Image Anal 67:101836","journal-title":"Med Image Anal"},{"key":"712_CR50","doi-asserted-by":"crossref","first-page":"103","DOI":"10.1016\/j.cviu.2016.08.005","volume":"152","author":"C Carlos","year":"2016","unstructured":"Carlos C, Eva M, Narciso G (2016) Labeled dataset for integral evaluation of moving object detection algorithms: LASIESTA. Comput Vis Image Underst 152:103\u2013117","journal-title":"Comput Vis Image Underst"},{"key":"712_CR51","doi-asserted-by":"crossref","unstructured":"Wang Y, Jodoin P, Porikli F, Konrad J, Benezeth Y, Ishwar P (2014) CDnet 2014: an expanded change detection benchmark dataset. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops, pp 393\u2013400","DOI":"10.1109\/CVPRW.2014.126"},{"key":"712_CR52","unstructured":"https:\/\/www.ino.ca\/en\/technologies\/video-analytics-dataset\/"},{"key":"712_CR53","doi-asserted-by":"crossref","unstructured":"Bourdis N, Marraud D, Sahbi H (2011) Constrained optical flow for aerial image change detection. In: Proceedings of the IEEE international geoscience and remote sensing symposium, pp 4176\u20134179","DOI":"10.1109\/IGARSS.2011.6050150"},{"key":"712_CR54","doi-asserted-by":"crossref","first-page":"285","DOI":"10.1016\/j.infrared.2019.03.022","volume":"98","author":"S Qiu","year":"2019","unstructured":"Qiu S, Luo J, Yang S, Zhang M, Zhang W (2019) A moving target extraction algorithm based on the fusion of infrared and visible images. Infrared Phys Technol 98:285\u2013291","journal-title":"Infrared Phys Technol"},{"issue":"7","key":"712_CR55","doi-asserted-by":"crossref","first-page":"1168","DOI":"10.1109\/TIP.2008.924285","volume":"17","author":"L Maddalena","year":"2008","unstructured":"Maddalena L, Petrosino A (2008) A self-organizing approach to background subtraction for visual surveillance applications. 
IEEE Trans Image Process 17(7):1168\u20131177","journal-title":"IEEE Trans Image Process"},{"key":"712_CR56","doi-asserted-by":"crossref","unstructured":"Maddalena L, Petrosino A (2012) The SOBS algorithm: what are the limits? In: Proceedings of the IEEE computer society conference on computer vision and pattern recognition workshops, pp 21\u201326","DOI":"10.1109\/CVPRW.2012.6238922"},{"issue":"4","key":"712_CR57","doi-asserted-by":"crossref","first-page":"670","DOI":"10.1109\/TPAMI.2013.239","volume":"36","author":"TSF Haines","year":"2014","unstructured":"Haines TSF, Xiang T (2014) Background subtraction with Dirichlet process mixture models. IEEE Trans Pattern Anal 36(4):670\u2013683","journal-title":"IEEE Trans Pattern Anal"},{"key":"712_CR58","doi-asserted-by":"crossref","first-page":"156","DOI":"10.1016\/j.patcog.2017.09.009","volume":"74","author":"D Berj\u00f3n","year":"2018","unstructured":"Berj\u00f3n D, Cuevas C, Mor\u00e1n F, Garc\u00eda N (2018) Real-time nonparametric background subtraction with tracking-based foreground update. Pattern Recogn 74:156\u2013170","journal-title":"Pattern Recogn"},{"issue":"3","key":"712_CR59","doi-asserted-by":"crossref","first-page":"1369","DOI":"10.1007\/s10044-019-00845-9","volume":"23","author":"LA Lim","year":"2020","unstructured":"Lim LA, Keles HY (2020) Learning multi-scale features for foreground segmentation. Pattern anal appl 23(3):1369\u20131380","journal-title":"Pattern anal appl"},{"key":"712_CR60","doi-asserted-by":"crossref","first-page":"546","DOI":"10.1109\/TIP.2020.3037472","volume":"30","author":"M Mandal","year":"2021","unstructured":"Mandal M, Dhar V, Mishra A, Vipparthi SK, Mottaleb M (2021) 3DCD: scene independent end-to-end spatiotemporal feature learning framework for change detection in unseen videos. 
IEEE Trans Image Process 30:546\u2013558","journal-title":"IEEE Trans Image Process"},{"key":"712_CR61","doi-asserted-by":"crossref","first-page":"53849","DOI":"10.1109\/ACCESS.2021.3071163","volume":"9","author":"MO Tezcan","year":"2021","unstructured":"Tezcan MO, Ishwar P, Konrad J, Janusz K (2021) BSUV-Net 2.0: spatio-temporal data augmentations for video-agnostic supervised background subtraction. IEEE Access 9:53849\u201353860","journal-title":"IEEE Access"},{"key":"712_CR62","doi-asserted-by":"crossref","unstructured":"Zivkovic Z (2004) Improved adaptive Gaussian mixture model for background subtraction. In: Proceedings of the IEEE conference pattern recognit (ICPR), pp 28\u201331","DOI":"10.1109\/ICPR.2004.1333992"},{"key":"712_CR63","doi-asserted-by":"crossref","unstructured":"Charles P, Bilodeau G, Bergevin R (2015) A self-adjusting approach to change detection based on background word consensus. In: Proceedings of the IEEE Winter conference on applications of computer vision, pp 990\u2013997","DOI":"10.1109\/WACV.2015.137"},{"key":"712_CR64","doi-asserted-by":"crossref","first-page":"635","DOI":"10.1016\/j.patcog.2017.09.040","volume":"76","author":"M Babaee","year":"2018","unstructured":"Babaee M, Dinh DT, Rigoll G (2018) A deep convolutional neural network for video sequence background subtraction. Pattern Recogn 76:635\u2013649","journal-title":"Pattern Recogn"},{"key":"712_CR65","doi-asserted-by":"crossref","unstructured":"Cioppa A, Droogenbroeck M, Braham M (2020) Real-time semantic background subtraction. http:\/\/arxiv.org\/abs\/2002.04993v3","DOI":"10.1109\/ICIP40778.2020.9190838"},{"key":"712_CR66","doi-asserted-by":"crossref","first-page":"44","DOI":"10.1016\/j.infrared.2015.01.008","volume":"69","author":"Z Li","year":"2015","unstructured":"Li Z (2015) Infrared small moving target detection algorithm based on joint spatio-temporal sparse recovery. 
Infrared Phys Technol 69:44\u201352","journal-title":"Infrared Phys Technol"},{"key":"712_CR67","doi-asserted-by":"crossref","unstructured":"Bhattacharjee SD, Talukder A, Alam MS (2017) Graph clustering for weapon discharge event detection and tracking in infrared imagery using deep features. In: Proceedings of the conference on pattern recognition and tracking XXVII, SPIE, pp 102030O","DOI":"10.1117\/12.2277737"},{"issue":"57","key":"712_CR68","first-page":"13106","volume":"1","author":"B Sun","year":"2018","unstructured":"Sun B, Li Y, Guo G (2018) Moving target segmentation using Markov random field-based evaluation metric in infrared videos. Opt Eng 1(57):13106","journal-title":"Opt Eng"},{"key":"712_CR69","doi-asserted-by":"crossref","unstructured":"Sakurada K, Okatani T (2015) Change Detection from a street image pair using CNN features and superpixel segmentation. In: Proceedings of the British machine vision conference, pp 1\u201312","DOI":"10.5244\/C.29.61"},{"key":"712_CR70","doi-asserted-by":"crossref","unstructured":"Khan S, He X, Porikli F, Bennamoun M, Sohel F, Togneri R (2017) Learning deep structured network for weakly supervised change detection. In: Proceedings of the international joint conference on artificial intelligence, pp 2008\u20132015","DOI":"10.24963\/ijcai.2017\/279"},{"issue":"7","key":"712_CR71","doi-asserted-by":"crossref","first-page":"1301","DOI":"10.1007\/s10514-018-9734-5","volume":"42","author":"P Alcantarilla","year":"2018","unstructured":"Alcantarilla P (2018) Street-view change detection with deconvolutional networks. Auton Robot 42(7):1301\u20131322","journal-title":"Auton Robot"},{"key":"712_CR72","doi-asserted-by":"crossref","first-page":"166","DOI":"10.1016\/j.neucom.2019.10.022","volume":"378","author":"S Bu","year":"2020","unstructured":"Bu S, Li Q, Han P, Leng P, Li K (2020) Mask-CDNet: a mask based pixel change detection network. 
Neurocomputing 378:166\u2013178","journal-title":"Neurocomputing"}],"container-title":["Complex &amp; Intelligent Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00712-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s40747-022-00712-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s40747-022-00712-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,9,27]],"date-time":"2022-09-27T13:59:21Z","timestamp":1664287161000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s40747-022-00712-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,3,25]]},"references-count":72,"journal-issue":{"issue":"5","published-print":{"date-parts":[[2022,10]]}},"alternative-id":["712"],"URL":"https:\/\/doi.org\/10.1007\/s40747-022-00712-x","relation":{},"ISSN":["2199-4536","2198-6053"],"issn-type":[{"type":"print","value":"2199-4536"},{"type":"electronic","value":"2198-6053"}],"subject":[],"published":{"date-parts":[[2022,3,25]]},"assertion":[{"value":"24 August 2021","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"5 March 2022","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"25 March 2022","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"There is no conflict of interest from the 
authors.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}