{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,17]],"date-time":"2026-01-17T22:49:48Z","timestamp":1768690188888,"version":"3.49.0"},"reference-count":40,"publisher":"MDPI AG","issue":"24","license":[{"start":{"date-parts":[[2023,12,17]],"date-time":"2023-12-17T00:00:00Z","timestamp":1702771200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Natural Science Foundation of China","award":["62102268"],"award-info":[{"award-number":["62102268"]}]},{"name":"National Natural Science Foundation of China","award":["20220812102547001"],"award-info":[{"award-number":["20220812102547001"]}]},{"name":"National Natural Science Foundation of China","award":["6022312044K"],"award-info":[{"award-number":["6022312044K"]}]},{"name":"National Natural Science Foundation of China","award":["6023310030K"],"award-info":[{"award-number":["6023310030K"]}]},{"name":"National Natural Science Foundation of China","award":["6021310008K"],"award-info":[{"award-number":["6021310008K"]}]},{"name":"National Natural Science Foundation of China","award":["6019310010K"],"award-info":[{"award-number":["6019310010K"]}]},{"name":"National Natural Science Foundation of China","award":["6020271005K"],"award-info":[{"award-number":["6020271005K"]}]},{"name":"National Natural Science Foundation of China","award":["6021271004K"],"award-info":[{"award-number":["6021271004K"]}]},{"name":"Stable Supporting Program for Universities of Shenzhen","award":["62102268"],"award-info":[{"award-number":["62102268"]}]},{"name":"Stable Supporting Program for Universities of Shenzhen","award":["20220812102547001"],"award-info":[{"award-number":["20220812102547001"]}]},{"name":"Stable Supporting Program for Universities of Shenzhen","award":["6022312044K"],"award-info":[{"award-number":["6022312044K"]}]},{"name":"Stable Supporting Program for Universities of 
Shenzhen","award":["6023310030K"],"award-info":[{"award-number":["6023310030K"]}]},{"name":"Stable Supporting Program for Universities of Shenzhen","award":["6021310008K"],"award-info":[{"award-number":["6021310008K"]}]},{"name":"Stable Supporting Program for Universities of Shenzhen","award":["6019310010K"],"award-info":[{"award-number":["6019310010K"]}]},{"name":"Stable Supporting Program for Universities of Shenzhen","award":["6020271005K"],"award-info":[{"award-number":["6020271005K"]}]},{"name":"Stable Supporting Program for Universities of Shenzhen","award":["6021271004K"],"award-info":[{"award-number":["6021271004K"]}]},{"name":"Research Foundation of Shenzhen Polytechnic University","award":["62102268"],"award-info":[{"award-number":["62102268"]}]},{"name":"Research Foundation of Shenzhen Polytechnic University","award":["20220812102547001"],"award-info":[{"award-number":["20220812102547001"]}]},{"name":"Research Foundation of Shenzhen Polytechnic University","award":["6022312044K"],"award-info":[{"award-number":["6022312044K"]}]},{"name":"Research Foundation of Shenzhen Polytechnic University","award":["6023310030K"],"award-info":[{"award-number":["6023310030K"]}]},{"name":"Research Foundation of Shenzhen Polytechnic University","award":["6021310008K"],"award-info":[{"award-number":["6021310008K"]}]},{"name":"Research Foundation of Shenzhen Polytechnic University","award":["6019310010K"],"award-info":[{"award-number":["6019310010K"]}]},{"name":"Research Foundation of Shenzhen Polytechnic University","award":["6020271005K"],"award-info":[{"award-number":["6020271005K"]}]},{"name":"Research Foundation of Shenzhen Polytechnic University","award":["6021271004K"],"award-info":[{"award-number":["6021271004K"]}]},{"name":"Post-doctoral Later-stage Foundation Project of Shenzhen Polytechnic University","award":["62102268"],"award-info":[{"award-number":["62102268"]}]},{"name":"Post-doctoral Later-stage Foundation Project of Shenzhen Polytechnic 
University","award":["20220812102547001"],"award-info":[{"award-number":["20220812102547001"]}]},{"name":"Post-doctoral Later-stage Foundation Project of Shenzhen Polytechnic University","award":["6022312044K"],"award-info":[{"award-number":["6022312044K"]}]},{"name":"Post-doctoral Later-stage Foundation Project of Shenzhen Polytechnic University","award":["6023310030K"],"award-info":[{"award-number":["6023310030K"]}]},{"name":"Post-doctoral Later-stage Foundation Project of Shenzhen Polytechnic University","award":["6021310008K"],"award-info":[{"award-number":["6021310008K"]}]},{"name":"Post-doctoral Later-stage Foundation Project of Shenzhen Polytechnic University","award":["6019310010K"],"award-info":[{"award-number":["6019310010K"]}]},{"name":"Post-doctoral Later-stage Foundation Project of Shenzhen Polytechnic University","award":["6020271005K"],"award-info":[{"award-number":["6020271005K"]}]},{"name":"Post-doctoral Later-stage Foundation Project of Shenzhen Polytechnic University","award":["6021271004K"],"award-info":[{"award-number":["6021271004K"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>In multi-modal images (MMI), differences in imaging mechanisms lead to large signal-to-noise ratio disparities, so matching algorithms often cannot balance geometric invariance against matching accuracy. Weakening the signal-to-noise interference of MMI while maintaining good scale and rotation invariance and obtaining high-precision matching correspondences therefore remains a challenge for multimodal remote sensing image matching. 
To address this, a lightweight MMI alignment method based on the phase exponent of differences of the Gaussian pyramid (PEDoG) is proposed. It accounts for the phase exponent differences of the Gaussian pyramid with normalized filtering, achieving high-precision identification of matching correspondences while maintaining the geometric invariance of multi-modal matching. The proposed PEDoG method consists of three main parts, beginning with the introduction of the phase consistency model into the difference-of-Gaussian pyramid to construct a new phase exponent. Three types of MMI (multi-temporal, infrared\u2013optical, and map\u2013optical images) are then selected as experimental datasets and compared against advanced matching methods. The results show that the NCM (number of correct matches) of the PEDoG method is at least 3.3 times higher than that of the other methods, and its average RMSE (root mean square error) of 1.69 pixels is the lowest among all the matching methods. Finally, the alignment results are displayed in a tessellated mosaic mode, showing that the feature edges of the images are connected consistently without interlacing or artifacts. 
It can be seen that the proposed PEDoG method can realize high-precision alignment while taking geometric invariance into account.<\/jats:p>","DOI":"10.3390\/rs15245764","type":"journal-article","created":{"date-parts":[[2023,12,18]],"date-time":"2023-12-18T10:04:47Z","timestamp":1702893887000},"page":"5764","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Multi-Modal Image Registration Based on Phase Exponent Differences of the Gaussian Pyramid"],"prefix":"10.3390","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-5657-5271","authenticated-orcid":false,"given":"Xiaohu","family":"Yan","sequence":"first","affiliation":[{"name":"School of Undergraduate Education, Shenzhen Polytechnic University, Shenzhen 518055, China"},{"name":"Institute of Applied Artificial Intelligence of the Guangdong-Hong Kong-Macao Greater Bay Area, Shenzhen Polytechnic University, Shenzhen 518055, China"}]},{"given":"Yihang","family":"Cao","sequence":"additional","affiliation":[{"name":"School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China"}]},{"given":"Yijun","family":"Yang","sequence":"additional","affiliation":[{"name":"School of Artificial Intelligence, Shenzhen Polytechnic University, Shenzhen 518055, China"}]},{"given":"Yongxiang","family":"Yao","sequence":"additional","affiliation":[{"name":"School of Remote Sensing and Information Engineering, Wuhan University, Wuhan 430079, China"}]}],"member":"1968","published-online":{"date-parts":[[2023,12,17]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"5147","DOI":"10.1109\/TIP.2020.2980972","article-title":"Boosting structure consistency for multispectral and multimodal image registration","volume":"29","author":"Cao","year":"2020","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"1410","DOI":"10.1109\/TMM.2020.2997193","article-title":"Image-only real-time incremental uav image mosaic for multi-strip flight","volume":"23","author":"Zhang","year":"2020","journal-title":"IEEE Trans. Multimed."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1016\/j.isprsjprs.2022.12.018","article-title":"Histogram of the orientation of the weighted phase descriptor for multi-modal remote sensing image matching","volume":"196","author":"Zhang","year":"2023","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Zhang, X., Leng, C., Hong, Y., Pei, Z., Cheng, I., and Basu, A. (2021). Multimodal remote sensing image registration methods and advancements: A survey. Remote Sens., 13.","DOI":"10.3390\/rs13245128"},{"key":"ref_5","first-page":"2","article-title":"Sift-the scale invariant feature transform","volume":"2","author":"Lowe","year":"2004","journal-title":"Int. J."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"346","DOI":"10.1016\/j.cviu.2007.09.014","article-title":"Speeded-up robust features (SURF)","volume":"110","author":"Bay","year":"2008","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"136","DOI":"10.1007\/s004260000024","article-title":"Phase congruency: A low-level image invariant","volume":"64","author":"Kovesi","year":"2000","journal-title":"Psychol. Res."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"317","DOI":"10.1109\/TPAMI.2013.138","article-title":"Matching by Tone Mapping: Photometric Invariant Template Matching","volume":"36","author":"Helor","year":"2013","journal-title":"IEEE Trans. Pattern Anal. Mach. 
Intell."},{"key":"ref_9","doi-asserted-by":"crossref","first-page":"1266","DOI":"10.1109\/83.506761","article-title":"An FFT-based technique for translation, rotation, and scale-invariant image registration","volume":"5","author":"Reddy","year":"1996","journal-title":"IEEE Trans. Image Process."},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"1153","DOI":"10.1109\/TMI.2013.2265603","article-title":"Deformable medical image registration: A survey","volume":"32","author":"Sotiras","year":"2013","journal-title":"IEEE Trans. Med. Imaging"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"97","DOI":"10.1016\/j.isprsjprs.2016.10.005","article-title":"Multimodal registration of remotely sensed images based on Jeffrey\u2019s divergence","volume":"122","author":"Xu","year":"2016","journal-title":"ISPRS J. Photogramm. Remote Sens."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"2941","DOI":"10.1109\/TGRS.2017.2656380","article-title":"Robust registration of multimodal remote sensing images based on structural similarity","volume":"55","author":"Ye","year":"2017","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Ye, Y., Bruzzone, L., Shan, J., and Shen, L. (2018). A Fast and Robust Matching Framework for Multimodal Remote Sensing Image Registration. arXiv.","DOI":"10.1109\/TGRS.2019.2924684"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Fan, Z., Zhang, L., Liu, Y., Wang, Q., and Zlatanova, S. (2021). Exploiting High Geopositioning Accuracy of SAR Data to Obtain Accurate Geometric Orientation of Optical Satellite Images. Remote Sens., 13.","DOI":"10.3390\/rs13173535"},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"99","DOI":"10.5194\/isprs-archives-XLIII-B2-2022-99-2022","article-title":"Motif: Multi-orientation tensor index feature descriptor for sar-optical image registration","volume":"XLIII-B2-2022","author":"Yao","year":"2022","journal-title":"Int. 
Arch. Photogramm. Remote Sens. Spatial Inf. Sci."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Chien, H.J., Chuang, C.C., Chen, C.Y., and Klette, R. (2016, January 21\u201322). When to use what feature? SIFT, SURF, ORB, or A-KAZE features for monocular visual odometry. Proceedings of the 2016 International Conference on Image and Vision Computing New Zealand (IVCNZ), Palmerston North, New Zealand.","DOI":"10.1109\/IVCNZ.2016.7804434"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"3","DOI":"10.1109\/LGRS.2016.2600858","article-title":"Remote sensing image registration with modified SIFT and enhanced feature matching","volume":"14","author":"Ma","year":"2016","journal-title":"IEEE Geosci. Remote Sens. Lett."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"21","DOI":"10.1016\/j.isprsjprs.2019.04.018","article-title":"Illumination-Robust Remote Sensing Image Matching Based on Oriented Self-Similarity","volume":"153","author":"Sedaghat","year":"2019","journal-title":"ISPRS J. Photogramm. Rem. Sens."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"2358","DOI":"10.1109\/JSTARS.2021.3055023","article-title":"Robust SAR image registration using rank-based ratio self-similarity","volume":"14","author":"Xiong","year":"2021","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_20","doi-asserted-by":"crossref","first-page":"5847","DOI":"10.1109\/JSTARS.2020.3026162","article-title":"Automatic Registration of Optical and SAR Images Via Improved Phase Congruency Model","volume":"13","author":"Xiang","year":"2020","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens."},{"key":"ref_21","first-page":"1","article-title":"A Novel Multiscale Adaptive Binning Phase Congruency Feature for SAR and Optical Image Registration","volume":"60","author":"Fan","year":"2022","journal-title":"IEEE Trans. Geosci. 
Remote Sens."},{"key":"ref_22","first-page":"1727","article-title":"Heterologous Images Matching Considering Anisotropic Weighted Moment and Absolute Phase Orientation","volume":"46","author":"Yao","year":"2021","journal-title":"Geomat. Inf. Sci. Wuhan Univ."},{"key":"ref_23","doi-asserted-by":"crossref","first-page":"7811109","DOI":"10.1109\/JPHOT.2022.3144227","article-title":"LPSO: Multi-source image matching considering the description of local phase sharpness orientation","volume":"14","author":"Yang","year":"2022","journal-title":"IEEE Photonics J."},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"2584","DOI":"10.1109\/TIP.2022.3157450","article-title":"Multi-modal Remote Sensing Image Matching Considering Co-occurrence Filter","volume":"31","author":"Yao","year":"2022","journal-title":"IEEE Trans. Image Process."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Gao, C., Li, W., Tao, R., and Du, Q. (2022). MS-HLMO: Multi-scale Histogram of Local Main Orientation for Remote Sensing Image Registration. arXiv.","DOI":"10.1109\/TGRS.2022.3193109"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Li, J., Xu, W., Shi, P., Zhang, Y., and Hu, Q. (2022). LNIFT: Locally Normalized Image for Rotation Invariant Multimodal Feature Matching. IEEE Trans. Geosci. Remote Sens., 60.","DOI":"10.1109\/TGRS.2022.3165940"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Yi, K.M., Trulls, E., Lepetit, V., and Fua, P. (2016, January 11\u201314). Lift: Learned Invariant Feature Transform. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46466-4_28"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Sarlin, P.E., DeTone, D., Malisiewicz, T., and Rabinovich, A. (2020, January 13\u201319). Superglue: Learning feature matching with graph neural networks. 
Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA.","DOI":"10.1109\/CVPR42600.2020.00499"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Dusmanu, M., Rocco, I., Pajdla, T., Pollefeys, M., Sivic, J., Torii, A., and Sattler, T. (2019, January 15\u201320). D2-net: A trainable cnn for joint description and detection of local features. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00828"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Sun, J., Shen, Z., Wang, Y., Bao, H., and Zhou, X. (2021, January 20\u201325). LoFTR: Detector-free local feature matching with transformers. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA.","DOI":"10.1109\/CVPR46437.2021.00881"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"B\u00f6kman, G., and Kahl, F. (2022, January 18\u201324). A case for using rotation invariant features in state of the art feature matchers. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPRW56347.2022.00559"},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Liu, Y., Zhang, H., and Ma, G. (2022). Multimodal Remote Sensing Image Matching Combining Learning Features and Delaunay Triangulation. IEEE Trans. Geosci. Remote. Sens., 60.","DOI":"10.1109\/TGRS.2022.3229366"},{"key":"ref_33","first-page":"2","article-title":"Fast approximate nearest neighbors with automatic algorithm configuration","volume":"2","author":"Muja","year":"2009","journal-title":"VISAPP"},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"43","DOI":"10.1109\/LGRS.2014.2325970","article-title":"A novel point-matching algorithm based on fast sample consensus for image registration","volume":"12","author":"Wu","year":"2014","journal-title":"IEEE Geosci. 
Remote Sens. Lett."},{"key":"ref_35","first-page":"1","article-title":"Image Features From Phase Congruency","volume":"1","author":"Kovesi","year":"1999","journal-title":"Videre J. Comput. Vis. Res."},{"key":"ref_36","doi-asserted-by":"crossref","first-page":"853","DOI":"10.1162\/089976699300016467","article-title":"A fast, compact approximation of the exponential function","volume":"11","author":"Schraudolph","year":"1999","journal-title":"Neural Comput."},{"key":"ref_37","unstructured":"Gao, W., Zhang, X., Yang, L., and Liu, H. (2010, January 9\u201311). An improved Sobel edge detection. Proceedings of the 2010 3rd International Conference on Computer Science and Information Technology, Chengdu, China."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"1615","DOI":"10.1109\/TPAMI.2005.188","article-title":"A performance evaluation of local descriptors","volume":"27","author":"Mikolajczyk","year":"2005","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"3296","DOI":"10.1109\/TIP.2019.2959244","article-title":"RIFT: Multi-modal image matching based on radiation-variation insensitive feature transform","volume":"29","author":"Li","year":"2019","journal-title":"IEEE Trans. Image Process."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"12440","DOI":"10.1109\/JSTARS.2021.3131489","article-title":"Self-similarity features for multimodal remote sensing image matching","volume":"14","author":"Xiong","year":"2021","journal-title":"IEEE J. Sel. Top. Appl. Earth Obs. 
Remote Sens."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/24\/5764\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T21:40:18Z","timestamp":1760132418000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/15\/24\/5764"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,12,17]]},"references-count":40,"journal-issue":{"issue":"24","published-online":{"date-parts":[[2023,12]]}},"alternative-id":["rs15245764"],"URL":"https:\/\/doi.org\/10.3390\/rs15245764","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,12,17]]}}}