{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,15]],"date-time":"2026-01-15T11:48:21Z","timestamp":1768477701008,"version":"3.49.0"},"reference-count":43,"publisher":"MDPI AG","issue":"17","license":[{"start":{"date-parts":[[2023,9,4]],"date-time":"2023-09-04T00:00:00Z","timestamp":1693785600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Fundamental Research Funds for the Central Universities, China","award":["2042022dx0001"],"award-info":[{"award-number":["2042022dx0001"]}]},{"name":"Fundamental Research Funds for the Central Universities, China","award":["2021-JCJQ-JJ-0251"],"award-info":[{"award-number":["2021-JCJQ-JJ-0251"]}]},{"name":"Foundation Strengthening Fund Project, China","award":["2042022dx0001"],"award-info":[{"award-number":["2042022dx0001"]}]},{"name":"Foundation Strengthening Fund Project, China","award":["2021-JCJQ-JJ-0251"],"award-info":[{"award-number":["2021-JCJQ-JJ-0251"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Scene matching plays a vital role in the visual positioning of aircraft. The position and orientation of aircraft can be determined by comparing acquired real-time imagery with reference imagery. To enhance precise scene matching during flight, it is imperative to conduct a comprehensive analysis of the reference imagery\u2019s matchability beforehand. Conventional approaches to image matchability analysis rely heavily on features that are manually designed. However, these features are inadequate in terms of comprehensiveness, efficiency, and taking into account the scene matching process, ultimately leading to unsatisfactory results. This paper innovatively proposes a core approach to quantifying matchability by utilizing scene information from imagery. 
First, a method for generating image matchability samples by simulating the matching process is proposed. On this basis, the RSPNet network architecture is designed to leverage regional scene perception to accurately predict the matchability of reference imagery. The network comprises two core modules: saliency analysis and uniqueness analysis. The attention mechanism employed by the saliency analysis module extracts features at different levels and scales, ensuring an accurate and fine-grained quantification of image saliency. The uniqueness analysis module quantifies image uniqueness by comparing neighborhood scene features. The proposed method is compared with both traditional and deep learning methods in experiments on simulated datasets. The results demonstrate that RSPNet offers significant advantages in accuracy and reliability.<\/jats:p>","DOI":"10.3390\/rs15174353","type":"journal-article","created":{"date-parts":[[2023,9,4]],"date-time":"2023-09-04T10:24:30Z","timestamp":1693823070000},"page":"4353","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":4,"title":["Analysis of the Matchability of Reference Imagery for Aircraft Based on Regional Scene Perception"],"prefix":"10.3390","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-2569-7380","authenticated-orcid":false,"given":"Xin","family":"Li","sequence":"first","affiliation":[{"name":"State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3987-5336","authenticated-orcid":false,"given":"Guo","family":"Zhang","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, 
China"}]},{"given":"Hao","family":"Cui","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China"}]},{"given":"Jinhao","family":"Ma","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China"}]},{"given":"Wei","family":"Wang","sequence":"additional","affiliation":[{"name":"State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan 430079, China"},{"name":"School of Surveying, Mapping and Geosciences, Liaoning Technical University, Fuxin 125105, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,9,4]]},"reference":[{"key":"ref_1","unstructured":"Wang, J.Z. (2015). Research on Key Technologies of Scene Matching Areas Selection of Cruise Missile. [Master\u2019s Thesis, National University of Defense Technology]."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"21","DOI":"10.1080\/10095020.2017.1420509","article-title":"A survey on vision-based UAV navigation","volume":"21","author":"Lu","year":"2018","journal-title":"Geo-Spat. Inf. Sci."},{"key":"ref_3","unstructured":"Leng, X.F. (2007). Research on the Key Technology for Scene Matching Aided Navigation System Based on Image Features. [Ph.D. Thesis, Nanjing University of Aeronautics and Astronautics]."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"84","DOI":"10.1145\/3065386","article-title":"ImageNet classification with deep convolutional neural networks","volume":"60","author":"Krizhevsky","year":"2017","journal-title":"Commun. 
ACM"},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1671","DOI":"10.3390\/rs4061671","article-title":"Unmanned Aircraft Systems in Remote Sensing and Scientific Research: Classification and Considerations of Use","volume":"4","author":"Watts","year":"2012","journal-title":"Remote Sens."},{"key":"ref_6","first-page":"553","article-title":"Research on Matching-Area Suitability for Scene Matching Aided Navigation","volume":"31","author":"Shen","year":"2010","journal-title":"Acta Aeronaut. Astronaut. Sin."},{"key":"ref_7","first-page":"677","article-title":"Image matching methods","volume":"24","author":"Jia","year":"2019","journal-title":"J. Image Graph."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"507","DOI":"10.1360\/N112018-00316","article-title":"Review of scene matching visual navigation for unmanned aerial vehicles","volume":"49","author":"Zhao","year":"2019","journal-title":"Sci. Sin. Inf."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Johnson, M. (1972, January 17\u201319). Analytical development and test results of acquisition probability for terrain correlation devices used in navigation systems. Proceedings of the 10th Aerospace Sciences Meeting, San Diego, CA, USA.","DOI":"10.2514\/6.1972-122"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Zhang, X., He, Z., Liang, Y., and Zeng, P. (2012, January 28\u201329). Selection method for scene matching area based on information entropy. Proceedings of the 2012 Fifth International Symposium on Computational Intelligence and Design (ISCID), Hangzhou, China.","DOI":"10.1109\/ISCID.2012.98"},{"key":"ref_11","first-page":"137","article-title":"Study on reference image selection roles for scene matching guidance","volume":"5","author":"Cao","year":"2005","journal-title":"Appl. Res. 
Comput."},{"key":"ref_12","first-page":"850","article-title":"Reference image preparation approach for scene matching simulation","volume":"22","author":"Yang","year":"2010","journal-title":"J. Syst. Simul."},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"355","DOI":"10.1016\/S0262-8856(03)00032-5","article-title":"Prediction of the suitability for image-matching based on self-similarity of vision contents","volume":"22","author":"Pang","year":"2004","journal-title":"Image Vis. Comput."},{"key":"ref_14","unstructured":"Wei, D. (2011). Research on SAR Image Matching. [Master\u2019s Thesis, Huazhong University of Science and Technology]."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"1071","DOI":"10.3788\/OPE.20142204.1071","article-title":"Matching probability metric for remote sensing image based on interest points","volume":"22","author":"Ju","year":"2014","journal-title":"Opt. Precis. Eng."},{"key":"ref_16","first-page":"690","article-title":"Support Vector Machine for Scene Matching Area Selection","volume":"37","author":"Yang","year":"2009","journal-title":"J. Tongji Univ. Nat. Sci."},{"key":"ref_17","first-page":"104","article-title":"SAR Scene Matching Area Selection Based on Multi-Attribute Comprehensive Analysis","volume":"36","author":"Zhang","year":"2016","journal-title":"J. Proj. Rocket. Missiles Guid."},{"key":"ref_18","first-page":"201","article-title":"Selection for matching area in terrain aided navigation based on entropy-weighted grey correlation decision-making","volume":"23","author":"Xu","year":"2015","journal-title":"J. Chin. Inert. Technol."},{"key":"ref_19","first-page":"93","article-title":"Selection criterion based on analytic hierarchy process for matching region in gravity aided INS","volume":"21","author":"Cai","year":"2013","journal-title":"J. Chin. Inert. 
Technol."},{"key":"ref_20","first-page":"2037","article-title":"Automatic suitable-matching area selection method based on multi-feature fusion","volume":"40","author":"Luo","year":"2011","journal-title":"Infrared Laser Eng."},{"key":"ref_21","unstructured":"Wang, J.Z. (2016). Research on Matching Area Selection of Remote Sensing Image Based on Convolutional Neural Networks. [Master\u2019s Thesis, Huazhong University of Science and Technology]."},{"key":"ref_22","unstructured":"Sun, K. (2019). Scene Navigability Analysis Based on Deep Learning Model. [Master\u2019s Thesis, National University of Defense Technology]."},{"key":"ref_23","unstructured":"Yang, J. (2019). Suitable Matching Area Selection Method Based on Deep Learning. [Master\u2019s Thesis, Huazhong University of Science and Technology]."},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Xie, S., Girshick, R., Doll\u00e1r, P., Tu, Z., and He, K. (2017, January 21\u201326). Aggregated residual transformations for deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.634"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"736","DOI":"10.1007\/s11263-020-01401-3","article-title":"Compositional convolutional neural networks: A robust and interpretable model for object recognition under occlusion","volume":"129","author":"Kortylewski","year":"2021","journal-title":"Int. J. Comput. Vis."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Cao, J., Leng, H., Lischinski, D., Cohen-Or, D., Tu, C., and Li, Y. (2021, January 10\u201317). ShapeConv: Shape-aware convolutional layer for indoor RGB-D semantic segmentation. 
Proceedings of the IEEE\/CVF International Conference on Computer Vision (ICCV), Montreal, QC, Canada.","DOI":"10.1109\/ICCV48922.2021.00700"},{"key":"ref_27","unstructured":"Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, \u0141., and Polosukhin, I. (2017, January 4\u20139). Attention is all you need. Proceedings of the Advances in Neural Information Processing Systems, Long Beach, CA, USA."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Zhao, H., Jia, J., and Koltun, V. (2020, January 13\u201319). Exploring self-attention for image recognition. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.01009"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TGRS.2023.3335418","article-title":"F3-Net: Multiview Scene Matching for Drone-Based Geo-Localization","volume":"61","author":"Sun","year":"2023","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_30","unstructured":"Arjovsky, M., Chintala, S., and Bottou, L. (2017, January 6\u201311). Wasserstein generative adversarial networks. Proceedings of the 34th International Conference on Machine Learning, Sydney, NSW, Australia."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Tareen, S.A.K., and Saleem, Z. (2018, January 3\u20134). A comparative analysis of SIFT, SURF, KAZE, AKAZE, ORB, and BRISK. Proceedings of the 2018 International Conference on Computing, Mathematics and Engineering Technologies, Sukkur, Pakistan.","DOI":"10.1109\/ICOMET.2018.8346440"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Rublee, E., Rabaud, V., Konolige, K., and Bradski, G. (2011, January 6\u201313). ORB: An efficient alternative to SIFT or SURF. 
Proceedings of the IEEE International Conference on Computer Vision (ICCV), Barcelona, Spain.","DOI":"10.1109\/ICCV.2011.6126544"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"62","DOI":"10.1016\/j.isprsjprs.2015.06.003","article-title":"Distinctive Order Based Self-Similarity Descriptor for Multi-Sensor Remote Sensing Image Matching","volume":"108","author":"Amin","year":"2015","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_34","first-page":"1360","article-title":"Integral Experiment and Simulation System for Image Matching","volume":"22","author":"Yang","year":"2010","journal-title":"J. Syst. Simul."},{"key":"ref_35","first-page":"1553","article-title":"Suitability analysis on scene matching aided navigation based on CR-DSmT","volume":"8","author":"Ling","year":"2015","journal-title":"Chin. Sci. Pap. Online"},{"key":"ref_36","first-page":"4","article-title":"A Rule of Selecting Scene Matching Area","volume":"35","author":"Jiang","year":"2007","journal-title":"J. Tongji Univ. Nat. Sci."},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"3239","DOI":"10.1109\/TPAMI.2021.3051099","article-title":"Salient Object Detection in the Deep Learning Era: An In-Depth Survey","volume":"44","author":"Wang","year":"2022","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"117","DOI":"10.1007\/s41095-019-0149-9","article-title":"Salient object detection: A survey","volume":"5","author":"Borji","year":"2019","journal-title":"Comput. Vis. 
Media"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"107404","DOI":"10.1016\/j.patcog.2020.107404","article-title":"U2-Net: Going deeper with nested U-structure for salient object detection","volume":"106","author":"Qin","year":"2020","journal-title":"Pattern Recognit."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"2011","DOI":"10.1109\/TPAMI.2019.2913372","article-title":"Squeeze-and-Excitation Networks","volume":"42","author":"Hu","year":"2019","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_41","first-page":"128","article-title":"Research on Selection for Scene Matching Area","volume":"23","author":"Liu","year":"2013","journal-title":"Comput. Technol. Dev."},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep Residual Learning for Image Recognition. Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Chen, L.C., Zhu, Y., Papandreou, G., Schroff, F., and Adam, H. (2018, January 8\u201314). Encoder-decoder with atrous separable convolution for semantic image segmentation. 
Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01234-2_49"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/17\/4353\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T20:46:13Z","timestamp":1760129173000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/17\/4353"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,9,4]]},"references-count":43,"journal-issue":{"issue":"17","published-online":{"date-parts":[[2023,9]]}},"alternative-id":["rs15174353"],"URL":"https:\/\/doi.org\/10.3390\/rs15174353","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,9,4]]}}}