{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T02:33:52Z","timestamp":1760150032002,"version":"build-2065373602"},"reference-count":43,"publisher":"MDPI AG","issue":"19","license":[{"start":{"date-parts":[[2023,9,30]],"date-time":"2023-09-30T00:00:00Z","timestamp":1696032000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>The recently proposed spacecraft three-dimensional (3D) structure recovery method based on optical images and LIDAR has enhanced the working distance of a spacecraft\u2019s 3D perception system. However, the existing methods ignore the richness of temporal features and fail to capture the temporal coherence of consecutive frames. This paper proposes a sequential spacecraft depth completion network (S2DCNet) for generating accurate and temporally consistent depth prediction results, and it can fully exploit temporal\u2013spatial coherence in sequential frames. Specifically, two parallel convolution neural network (CNN) branches were first adopted to extract the features latent in different inputs. The gray image features and the depth features were hierarchically encapsulated into unified feature representations through fusion modules. In the decoding stage, the convolutional long short-term memory (ConvLSTM) networks were embedded with the multi-scale scheme to capture the feature spatial\u2013temporal distribution variation, which could reflect the past state and generate more accurate and temporally consistent depth maps. 
In addition, a large-scale dataset was constructed, and the experiments revealed the outstanding performance of the proposed S2DCNet, achieving a mean absolute error of 0.192 m within the region of interest.<\/jats:p>","DOI":"10.3390\/rs15194786","type":"journal-article","created":{"date-parts":[[2023,10,2]],"date-time":"2023-10-02T04:28:08Z","timestamp":1696220888000},"page":"4786","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Exploiting Temporal\u2013Spatial Feature Correlations for Sequential Spacecraft Depth Completion"],"prefix":"10.3390","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8877-3056","authenticated-orcid":false,"given":"Xiang","family":"Liu","sequence":"first","affiliation":[{"name":"School of Astronautics, Harbin Institute of Technology, Harbin 150001, China"}]},{"given":"Hongyuan","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Astronautics, Harbin Institute of Technology, Harbin 150001, China"}]},{"given":"Xinlong","family":"Chen","sequence":"additional","affiliation":[{"name":"Qian Xuesen Laboratory of Space Technology, China Academy of Space Technology, Beijing 100080, China"}]},{"given":"Weichun","family":"Chen","sequence":"additional","affiliation":[{"name":"Qian Xuesen Laboratory of Space Technology, China Academy of Space Technology, Beijing 100080, China"}]},{"given":"Zhengyou","family":"Xie","sequence":"additional","affiliation":[{"name":"Qian Xuesen Laboratory of Space Technology, China Academy of Space Technology, Beijing 100080, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,9,30]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"41","DOI":"10.1016\/j.actaastro.2021.10.031","article-title":"A machine learning strategy for optimal path planning of space robotic manipulator in on-orbit servicing","volume":"191","author":"Santos","year":"2022","journal-title":"Acta 
Astronaut."},{"key":"ref_2","unstructured":"Henshaw, C. (2014, January 17\u201319). The darpa phoenix spacecraft servicing program: Overview and plans for risk reduction. Proceedings of the International Symposium on Artificial Intelligence, Robotics and Automation in Space (I-SAIRAS), Montreal, QC, Canada."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1494","DOI":"10.1016\/j.cja.2019.08.024","article-title":"Three-line structured light vision system for non-cooperative satellites in proximity operations","volume":"33","author":"Liu","year":"2020","journal-title":"Chin. J. Aeronaut."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"105619","DOI":"10.1016\/j.ast.2019.105619","article-title":"Real-time measurement and estimation of the 3D geometry and motion parameters for spatially unknown moving targets","volume":"97","author":"Guo","year":"2020","journal-title":"Aerosp. Sci. Technol."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"507","DOI":"10.1109\/TAES.2022.3182307","article-title":"Position Awareness Network for Noncooperative Spacecraft Pose Estimation Based on Point Cloud","volume":"59","author":"Liu","year":"2022","journal-title":"IEEE Trans. Aerosp. Electron. Syst."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Wei, Q., Jiang, Z., and Zhang, H. (2018). Robust spacecraft component detection in point clouds. Sensors, 18.","DOI":"10.3390\/s18040933"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"164","DOI":"10.1016\/j.actaastro.2019.12.006","article-title":"Experiment for pose estimation of uncooperative space debris using stereo vision","volume":"168","author":"De","year":"2020","journal-title":"Acta Astronaut."},{"key":"ref_8","unstructured":"Jacopo, V., Andreas, F., and Ulrich, W. (2016, January 4\u20138). Pose tracking of a noncooperative spacecraft during docking maneuvers using a time-of-flight sensor. 
Proceedings of the AIAA Guidance, Navigation, and Control Conference (GNC), San Diego, CA, USA."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Liu, X., Wang, H., Yan, Z., Chen, Y., Chen, X., and Chen, W. Spacecraft depth completion based on the gray image and the sparse depth map. IEEE Trans. Aerosp. Electron. Syst., 2023, in press.","DOI":"10.1109\/TAES.2023.3286387"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Ma, F., and Karaman, S. (2018, January 21\u201325). Sparse-to-dense: Depth prediction from sparse depth samples and a single image. Proceedings of the International Conference on Robotics and Automation (ICRA), Brisbane, QLD, Australia.","DOI":"10.1109\/ICRA.2018.8460184"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Imran, S., Long, Y., Liu, X., and Morris, D. (2019, January 15\u201320). Depth coefficients for depth completion. Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.01273"},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"1055","DOI":"10.1109\/LRA.2020.2967296","article-title":"Aerial single-view depth completion with image-guided uncertainty estimation","volume":"5","author":"Teixeira","year":"2020","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_13","unstructured":"Luo, Z., Zhang, F., Fu, G., and Xu, J. (June, January 30). Self-Guided Instance-Aware Network for Depth Completion and Enhancement. Proceedings of the International Conference on Robotics and Automation (ICRA), Xi\u2019an, China."},{"key":"ref_14","unstructured":"Chen, Y., Yang, B., Liang, M., and Urtasun, R. (November, January 27). Learning joint 2d-3d representations for depth completion. 
Proceedings of the International Conference on Computer Vision (ICCV), Seoul, Korea."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"1116","DOI":"10.1109\/TIP.2020.3040528","article-title":"Learning guided convolutional network for depth completion","volume":"30","author":"Tang","year":"2020","journal-title":"IEEE Trans. Image Process."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Liu, L., Song, X., Lyu, X., Diao, J., Wang, M., Liu, Y., and Zhang, L. (2021, January 2\u20139). Fcfr-net: Feature fusion based coarse-to-fine residual learning for depth completion. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Vancouver, BC, Canada.","DOI":"10.1609\/aaai.v35i3.16311"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Yan, Z., Wang, K., Li, X., Zhang, Z., Xu, B., Li, J., and Yang, J. (2022, January 23\u201327). RigNet: Repetitive image guided network for depth completion. Proceedings of the European Conference on Computer Vision (ECCV), Tel Aviv, Israel.","DOI":"10.1007\/978-3-031-19812-0_13"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"327","DOI":"10.1109\/LRA.2020.3043172","article-title":"Sequential Depth Completion with Confidence Estimation for 3D Model Reconstruction","volume":"6","author":"Giang","year":"2020","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Nguyen, T., and Yoo, M. (2021, January 23\u201325). Dense-depth-net: A spatial-temporal approach on depth completion task. Proceedings of the Region 10 Symposium (TENSYMP), Jeju, Korea.","DOI":"10.1109\/TENSYMP52854.2021.9550990"},{"key":"ref_20","unstructured":"Chen, Y., Zhao, S., Ji, W., Gong, M., and Xie, L. (2022). MetaComp: Learning to Adapt for Online Depth Completion. arXiv."},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Yang, Q., Yang, R., Davis, J., and Nister, D. (2007, January 17\u201322). Spatial-depth super resolution for range images. 
Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR), Minneapolis, MN, USA.","DOI":"10.1109\/CVPR.2007.383211"},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"96","DOI":"10.1145\/1276377.1276497","article-title":"Joint bilateral upsampling","volume":"26","author":"Kopf","year":"2007","journal-title":"ACM Trans. Graph."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Ferstl, D., Reinbacher, C., Ranftl, R., Ruther, M., and Bischof, H. (2013, January 1\u20138). Image guided depth upsampling using anisotropic total generalized variation. Proceedings of the International Conference on Computer Vision (ICCV), Sydney, NSW, Australia.","DOI":"10.1109\/ICCV.2013.127"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Barron, J., and Poole, B. (2016, January 8\u201316). The fast bilateral solver. Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46487-9_38"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"1397","DOI":"10.1109\/TPAMI.2012.213","article-title":"Guided image filtering","volume":"35","author":"He","year":"2012","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Lee, H., Soohwan, S., and Sungho, J. (2016, January 1\u20134). 3D reconstruction using a sparse laser scanner and a single camera for outdoor autonomous vehicle. Proceedings of the International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil.","DOI":"10.1109\/ITSC.2016.7795619"},{"key":"ref_27","unstructured":"Liu, S., Mello, D., Gu, J., Zhong, G., Yang, M., and Kautz, J. (2017, January 4\u20139). Learning affinity via spatial propagation networks. 
Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_28","doi-asserted-by":"crossref","first-page":"2361","DOI":"10.1109\/TPAMI.2019.2947374","article-title":"Learning depth with convolutional spatial propagation network","volume":"42","author":"Cheng","year":"2019","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Cheng, X., Wang, P., Guan, C., and Yang, R. (2020, January 7\u201312). Cspn++: Learning context and resource aware convolutional spatial propagation networks for depth completion. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), New York, NY, USA.","DOI":"10.1609\/aaai.v34i07.6635"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Park, J., Joo, K., Hu, Z., Liu, C., and So, K. (2020, January 23\u201328). Non-local spatial propagation network for depth completion. Proceedings of the European Conference on Computer Vision (ECCV), Glasgow, Scotland, UK.","DOI":"10.1007\/978-3-030-58601-0_8"},{"key":"ref_31","unstructured":"Lin, Y., Cheng, T., Zhong, Q., Zhou, W., and Yang, H. (March, January 22). Dynamic spatial propagation network for depth completion. Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), Virtual."},{"key":"ref_32","unstructured":"Hu, M., Wang, S., Li, B., Ning, S., Fan, L., and Gong, X. (June, January 30). Penet: Towards precise and efficient image guided depth completion. Proceedings of the International Conference on Robotics and Automation (ICRA), Xi\u2019an, China."},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"7803","DOI":"10.1109\/TGRS.2020.3038425","article-title":"Automatic clustering-based two-branch CNN for hyperspectral image classification","volume":"59","author":"Li","year":"2020","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Yang, J., Zhao, Y., and Chan, J. (2018). 
Hyperspectral and multispectral image fusion via deep two-branches convolutional neural network. Remote Sens., 10.","DOI":"10.3390\/rs10050800"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Fu, Y., and Wu, X. (2021, January 10\u201315). A dual-branch network for infrared and visible image fusion. Proceedings of the International Conference on Pattern Recognition (ICPR), Milan, Italy.","DOI":"10.1109\/ICPR48806.2021.9412293"},{"key":"ref_36","first-page":"1","article-title":"Progressive Task-based Universal Network for Raw Infrared Remote Sensing Imagery Ship Detection","volume":"61","author":"Li","year":"2023","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Ku, J., Harakeh, A., and Waslander, S. (2018, January 8\u201310). In defense of classical image processing: Fast depth completion on the CPU. Proceedings of the Conference on Computer and Robot Vision (CRV), Toronto, ON, Canada.","DOI":"10.1109\/CRV.2018.00013"},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Uhrig, J., Schneider, N., Schneider, L., Franke, U., Brox, T., and Geiger, A. (2017, January 10\u201312). Sparsity invariant CNNs. Proceedings of the International Conference on 3D Vision (3DV), Qingdao, China.","DOI":"10.1109\/3DV.2017.00012"},{"key":"ref_39","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A., Kaiser, L., and Polosukhin, I. (2017, January 4\u20139). Attention is all you need. Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_40","unstructured":"Shi, X., Chen, Z., Wang, H., Yeung, D., Wong, W., and Woo, W. (2015, January 7\u201312). Convolutional LSTM network: A machine learning approach for precipitation nowcasting. 
Proceedings of the Advances in Neural Information Processing Systems, Montreal, QC, Canada."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"205","DOI":"10.1080\/09500340.2023.2219776","article-title":"Research on elaborate image simulation method for close-range space target","volume":"70","author":"Wang","year":"2023","journal-title":"J. Mod. Opt."},{"key":"ref_42","first-page":"105","article-title":"PaddlePaddle: An open-source deep learning platform from industrial practice","volume":"1","author":"Ma","year":"2019","journal-title":"Front. Data Comput."},{"key":"ref_43","unstructured":"Kingma, D., and Ba, J. (2015, January 7\u20139). Adam: A method for stochastic optimization. Proceedings of the International Conference on Learning Representations, San Diego, CA, USA."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/19\/4786\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T21:03:02Z","timestamp":1760130182000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/19\/4786"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,30]]},"references-count":43,"journal-issue":{"issue":"19","published-online":{"date-parts":[[2023,10]]}},"alternative-id":["rs15194786"],"URL":"https:\/\/doi.org\/10.3390\/rs15194786","relation":{},"ISSN":["2072-4292"],"issn-type":[{"type":"electronic","value":"2072-4292"}],"subject":[],"published":{"date-parts":[[2023,9,30]]}}}