{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,12]],"date-time":"2025-10-12T01:07:29Z","timestamp":1760231249976,"version":"build-2065373602"},"reference-count":39,"publisher":"MDPI AG","issue":"18","license":[{"start":{"date-parts":[[2022,9,6]],"date-time":"2022-09-06T00:00:00Z","timestamp":1662422400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["U2003110","20JS110"],"award-info":[{"award-number":["U2003110","20JS110"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Key Laboratory Project of Shaanxi Provincial Department of Education","award":["U2003110","20JS110"],"award-info":[{"award-number":["U2003110","20JS110"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Motion segmentation is one of the fundamental steps for detection, tracking, and recognition, and it can separate moving objects from the background. In this paper, we propose a spatial-motion-segmentation algorithm by fusing the events-dimensionality-preprocessing algorithm (EDPA) and the volume of warped events (VWE). The EDPA consists of depth estimation, linear interpolation, and coordinate normalization to obtain an extra dimension (Z) of events. The VWE is conducted by accumulating the warped events (i.e., motion compensation), and the iterative-clustering algorithm is introduced to maximize the contrast (i.e., variance) in the VWE. We established our datasets by utilizing the event-camera simulator (ESIM), which can simulate high-frame-rate videos that are decomposed into frames to generate a large amount of reliable events data. Exterior and interior scenes were segmented in the first part of the experiments. 
We present the sparrow search algorithm-based gradient ascent (SSA-Gradient Ascent). The SSA-Gradient Ascent, gradient ascent, and particle swarm optimization (PSO) were evaluated in the second part. In Motion Flow 1, the SSA-Gradient Ascent was 0.402% higher than the basic variance value, and 52.941% faster than the basic convergence rate. In Motion Flow 2, the SSA-Gradient Ascent still performed better than the others. The experimental results validate the feasibility of the proposed algorithm.<\/jats:p>","DOI":"10.3390\/s22186732","type":"journal-article","created":{"date-parts":[[2022,9,8]],"date-time":"2022-09-08T04:18:32Z","timestamp":1662610712000},"page":"6732","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":2,"title":["A Spatial-Motion-Segmentation Algorithm by Fusing EDPA and Motion Compensation"],"prefix":"10.3390","volume":"22","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-5665-3535","authenticated-orcid":false,"given":"Xinghua","family":"Liu","sequence":"first","affiliation":[{"name":"School of Electrical Engineering, Xi\u2019an University of Technology, Xi\u2019an 710048, China"}]},{"given":"Yunan","family":"Zhao","sequence":"additional","affiliation":[{"name":"School of Electrical Engineering, Xi\u2019an University of Technology, Xi\u2019an 710048, China"}]},{"given":"Lei","family":"Yang","sequence":"additional","affiliation":[{"name":"School of Electrical Engineering, Xi\u2019an University of Technology, Xi\u2019an 710048, China"}]},{"given":"Shuzhi Sam","family":"Ge","sequence":"additional","affiliation":[{"name":"Department of Electrical and Computer Engineering, National University of Singapore, Singapore 119077, Singapore"}]}],"member":"1968","published-online":{"date-parts":[[2022,9,6]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"566","DOI":"10.1109\/JSSC.2007.914337","article-title":"A 128\u00d7128 120 dB 15 \u00b5s Latency Asynchronous Temporal 
Contrast Vision Sensor","volume":"43","author":"Lichtsteiner","year":"2008","journal-title":"IEEE J. Solid-State Circuits"},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"2333","DOI":"10.1109\/JSSC.2014.2342715","article-title":"A 240\u00d7180 130 dB 3 \u00b5s Latency Global Shutter Spatiotemporal Vision Sensor","volume":"49","author":"Brandli","year":"2014","journal-title":"IEEE J. Solid-State Circuits"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"154","DOI":"10.1109\/TPAMI.2020.3008413","article-title":"Event-Based Vision: A Survey","volume":"44","author":"Gallego","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1964","DOI":"10.1109\/TPAMI.2019.2963386","article-title":"High Speed and High Dynamic Range Video with an Event Camera","volume":"43","author":"Rebecq","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Duo, J., and Zhao, L. (2021). An Asynchronous Real-Time Corner Extraction and Tracking Algorithm for Event Camera. Sensors, 21.","DOI":"10.3390\/s21041475"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Iaboni, C., Lobo, D., Choi, J.-W., and Abichandani, P. (2022). Event-Based Motion Capture System for Online Multi-Quadrotor Localization and Tracking. Sensors, 22.","DOI":"10.3390\/s22093240"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Mohamed, E., Ewaisha, M., Siam, M., Rashed, H., Yogamani, S., Hamdy, W., El-Dakdouky, M., and El-Sallab, A. (2021, January 11\u201317). Monocular Instance Motion Segmentation for Autonomous Driving: KITTI InstanceMotSeg Dataset and Multi-Task Baseline. 
Proceedings of the IEEE Intelligent Vehicles Symposium, Nagoya, Japan.","DOI":"10.1109\/IV48863.2021.9575445"},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"174","DOI":"10.1007\/s001380100064","article-title":"Motion segmentation and pose recognition with motion history gradients","volume":"13","author":"Bradski","year":"2002","journal-title":"Mach. Vis. Appl."},{"key":"ref_9","first-page":"398","article-title":"Motion segmentation: A review","volume":"184","author":"Zappella","year":"2008","journal-title":"Artif. Intell. Res. Dev."},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Mattheus, J., Grobler, H., and Abu-Mahfouz, A.M. (2020, January 25\u201327). A Review of Motion Segmentation: Approaches and Major Challenges. Proceedings of the International Multidisciplinary Information Technology and Engineering Conference (IMITEC), Kimberley, South Africa.","DOI":"10.1109\/IMITEC50163.2020.9334076"},{"key":"ref_11","unstructured":"Stoffregen, T., Gallego, G., Drummond, T., Kleeman, L., and Scaramuzza, D. (November, January 27). Event-Based Motion Segmentation by Motion Compensation. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Seoul, Korea."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TPAMI.2019.2929146","article-title":"3D Rigid Motion Segmentation with Mixed and Unknown Number of Models","volume":"43","author":"Xu","year":"2021","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_13","unstructured":"Rebecq, H., Gehrig, D., and Scaramuzza, D. (2018, January 29\u201331). ESIM: An Open Event Camera Simulator. Proceedings of the Conference on Robot Learning (CoRL), Zurich, Switzerland."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"177","DOI":"10.1109\/TRO.2013.2279412","article-title":"3-D Mapping With an RGB-D Camera","volume":"30","author":"Endres","year":"2014","journal-title":"IEEE Trans. 
Robot."},{"key":"ref_15","unstructured":"Lipton, A.J., Fujiyoshi, H., and Patil, R.S. (1998, January 19\u201321). Moving target classification and tracking from real-time video. Proceedings of the IEEE Workshop on Applications of Computer Vision (WACV), Princeton, NJ, USA."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"1316","DOI":"10.1109\/TCSVT.2011.2148490","article-title":"A Joint Approach to Global Motion Estimation and Motion Segmentation from A Coarsely Sampled Motion Vector Field","volume":"21","author":"Chen","year":"2011","journal-title":"IEEE Trans. Circuits Syst. Video Technol."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"4497","DOI":"10.1109\/TIP.2013.2274731","article-title":"Motion-Compensated Frame Interpolation Based on Multihypothesis Motion Estimation and Texture Optimization","volume":"22","author":"Jeong","year":"2013","journal-title":"IEEE Trans. Image Process."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"22","DOI":"10.1080\/21642583.2019.1708830","article-title":"A Novel Swarm Intelligence Optimization Approach: Sparrow Search Algorithm","volume":"8","author":"Xue","year":"2020","journal-title":"Syst. Sci. Control Eng."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"86","DOI":"10.1007\/s41315-016-0001-7","article-title":"High-accuracy, high-speed 3D structured light imaging techniques and potential applications to intelligent robotics","volume":"1","author":"Li","year":"2017","journal-title":"Int. J. Intell. Robot. Appl."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"820","DOI":"10.1016\/j.future.2021.06.045","article-title":"Human action recognition using attention-based LSTM network with dilated CNN features","volume":"125","author":"Muhammad","year":"2021","journal-title":"Future Gener. Comput. 
Syst."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"364","DOI":"10.1109\/TNSE.2021.3117565","article-title":"RDRL: A Recurrent Deep Reinforcement Learning Scheme for Dynamic Spectrum Access in Reconfigurable Wireless Networks","volume":"9","author":"Chen","year":"2022","journal-title":"IEEE Trans. Netw. Sci. Eng."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"107660","DOI":"10.1016\/j.knosys.2021.107660","article-title":"A game-based deep reinforcement learning approach for energy-efficient computation in MEC systems","volume":"235","author":"Chen","year":"2022","journal-title":"Knowl.-Based Syst."},{"key":"ref_23","first-page":"2366","article-title":"Depth map prediction from a single image using a multi-scale deep network","volume":"27","author":"Eigen","year":"2014","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_24","unstructured":"Ibraheem, A., and Wonka, P. (2018). High Quality Monocular Depth Estimation via Transfer Learning. arXiv."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Bi, X., Yang, S., and Tong, P. (2022). Moving Object Detection Based on Fusion of Depth Information and RGB Features. Sensors, 22.","DOI":"10.3390\/s22134702"},{"key":"ref_26","doi-asserted-by":"crossref","first-page":"1394","DOI":"10.1007\/s11263-017-1050-6","article-title":"EMVS: Event-Based Multi-View Stereo-3D Reconstruction with an Event Camera in Real-Time","volume":"126","author":"Rebecq","year":"2018","journal-title":"Int. J. Comput. Vis."},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Kim, H., Leutenegger, S., and Davison, A.J. (2016, January 8\u201316). Real-Time 3D Reconstruction and 6-DoF Tracking with an Event Camera. Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46466-4_21"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Gallego, G., Rebecq, H., and Scaramuzza, D. 
(2018, January 18\u201322). A Unifying Contrast Maximization Framework for Event Cameras, with Applications to Motion, Depth, and Optical Flow Estimation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00407"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Wang, L., Chae, Y., Yoon, S.H., Kim, T.K., and Yoon, K.J. (2021, January 19\u201325). Evdistill: Asynchronous events to end-task learning via bidirectional reconstruction-guided cross-modal knowledge distillation. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00067"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"9281","DOI":"10.1109\/ACCESS.2017.2787675","article-title":"Sensor Network Oriented Human Motion Segmentation with Motion Change Measurement","volume":"6","author":"Liu","year":"2018","journal-title":"IEEE Access"},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"1884","DOI":"10.1109\/TBME.2018.2880733","article-title":"Discontinuity Preserving Liver MR Registration with Three-Dimensional Active Contour Motion Segmentation","volume":"66","author":"Li","year":"2019","journal-title":"IEEE Trans. Biomed. Eng."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Mitrokhin, A., Ye, C., Ferm\u00fcller, C., Aloimonos, Y., and Delbruck, T. (2019, January 4\u20138). EV-IMO: Motion Segmentation Dataset and Learning Pipeline for Event Cameras. Proceedings of the IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Macau, China.","DOI":"10.1109\/IROS40897.2019.8968520"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"83","DOI":"10.3389\/fnins.2017.00083","article-title":"A Saccade Based Framework for Real-Time Motion Segmentation Using Event Based Vision Sensors","volume":"11","author":"Mishra","year":"2017","journal-title":"Front. 
Neurosci."},{"key":"ref_34","unstructured":"Zhou, Y., Gallego, G., Lu, X., Liu, S., and Shen, S. (2021). Event-based Motion Segmentation with Spatio-Temporal Graph Cuts. IEEE Trans. Neural Netw. Learn. Syst., 1\u201313."},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"596","DOI":"10.3389\/fnins.2016.00596","article-title":"Event-based 3D Motion Flow Estimation using 4D Spatio Temporal Subspaces Properties","volume":"10","author":"Ieng","year":"2017","journal-title":"Front. Neurosci."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Shiba, S., Aoki, Y., and Gallego, G. (2022). Event Collapse in Contrast Maximization Frameworks. Sensors, 22.","DOI":"10.3390\/s22145190"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"1346","DOI":"10.1109\/TPAMI.2016.2574707","article-title":"HOTS: A hierarchy of event-based time-surfaces for pattern recognition","volume":"39","author":"Lagorce","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_38","unstructured":"Godard, C., Aodha, O.M., Firman, M., and Brostow, G. (November, January 27). Digging into Self-Supervised Monocular Depth Estimation. Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Seoul, Korea."},{"key":"ref_39","doi-asserted-by":"crossref","unstructured":"Laina, I., Rupprecht, C., Belagiannis, V., Tombari, F., and Navab, N. (2016, January 25\u201328). Deeper Depth Prediction with Fully Convolutional Residual Networks. 
Proceedings of the International Conference on 3D Vision (3DV), Stanford, CA, USA.","DOI":"10.1109\/3DV.2016.32"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/18\/6732\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:24:32Z","timestamp":1760142272000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/22\/18\/6732"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,9,6]]},"references-count":39,"journal-issue":{"issue":"18","published-online":{"date-parts":[[2022,9]]}},"alternative-id":["s22186732"],"URL":"https:\/\/doi.org\/10.3390\/s22186732","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2022,9,6]]}}}