{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:20:15Z","timestamp":1760145615890,"version":"build-2065373602"},"reference-count":63,"publisher":"MDPI AG","issue":"16","license":[{"start":{"date-parts":[[2024,8,21]],"date-time":"2024-08-21T00:00:00Z","timestamp":1724198400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"National Natural Science Foundation of China","award":["61921001"],"award-info":[{"award-number":["61921001"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Camera arrays can enhance the signal-to-noise ratio (SNR) between dim targets and backgrounds through multi-view synthesis. This is crucial for the detection of dim targets. To this end, we design and develop an infrared camera array system with a large baseline. The multi-view synthesis of camera arrays relies heavily on the calibration accuracy of relative poses in the sub-cameras. However, the sub-cameras within a camera array lack strict geometric constraints. Therefore, most current calibration methods still consider the camera array as multiple pinhole cameras for calibration. Moreover, when detecting distant targets, the camera array usually needs to adjust the focal length to maintain a larger depth of field (DoF), so that the distant targets are located on the camera\u2019s focal plane. This means that the calibration scene should be selected within this DoF range to obtain clear images. Nevertheless, the small parallax between the distant sub-aperture views limits the calibration. To address these issues, we propose a calibration model for camera arrays in distant scenes. In this model, we first extend the parallax by employing dual-array frames (i.e., recording a scene at two spatial locations). 
Secondly, we investigate the linear constraints between the dual-array frames, to maintain the minimum degrees of freedom of the model. We develop a real-world light field dataset called NUDT-Dual-Array using an infrared camera array to evaluate our method. Experimental results on our self-developed datasets demonstrate the effectiveness of our method. Using the calibrated model, we improve the SNR of distant dim targets, which ultimately enhances the detection and perception of dim targets.<\/jats:p>","DOI":"10.3390\/rs16163075","type":"journal-article","created":{"date-parts":[[2024,8,22]],"date-time":"2024-08-22T04:26:57Z","timestamp":1724300817000},"page":"3075","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Infrared Camera Array System and Self-Calibration Method for Enhanced Dim Target Perception"],"prefix":"10.3390","volume":"16","author":[{"given":"Yaning","family":"Zhang","sequence":"first","affiliation":[{"name":"College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China"}]},{"given":"Tianhao","family":"Wu","sequence":"additional","affiliation":[{"name":"College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3127-8705","authenticated-orcid":false,"given":"Jungang","family":"Yang","sequence":"additional","affiliation":[{"name":"College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China"}]},{"given":"Wei","family":"An","sequence":"additional","affiliation":[{"name":"College of Electronic Science and Technology, National University of Defense Technology, Changsha 410073, China"}]}],"member":"1968","published-online":{"date-parts":[[2024,8,21]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"46","DOI":"10.1109\/MC.2006.270","article-title":"Light fields 
and computational imaging","volume":"39","author":"Levoy","year":"2006","journal-title":"Computer"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Wu, T., Zhang, Y., and Yang, J. (2023, January 16\u201318). Refocusing-based signal-to-noise ratio enhancement method for dim targets in infrared array cameras. Proceedings of the Third International Symposium on Computer Engineering and Intelligent Communications (ISCEIC 2022), Xi\u2019an, China.","DOI":"10.1117\/12.2660845"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Zhu, J., Xie, Z., Jiang, N., Song, Y., Han, S., Liu, W., and Huang, X. (2024). Delay-Doppler Map Shaping through Oversampled Complementary Sets for High-Speed Target Detection. Remote Sens., 16.","DOI":"10.3390\/rs16162898"},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"1004","DOI":"10.1109\/TGRS.2019.2942384","article-title":"Infrared small target detection via low-rank tensor completion with top-hat regularization","volume":"58","author":"Zhu","year":"2019","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1109\/TGRS.2023.3334492","article-title":"Infrared Small Target Detection via Nonconvex Tensor Tucker Decomposition with Factor Prior","volume":"61","author":"Liu","year":"2023","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Hao, Y., Liu, Y., Zhao, J., and Yu, C. (2023). Dual-Domain Prior-Driven Deep Network for Infrared Small-Target Detection. Remote Sens., 15.","DOI":"10.3390\/rs15153827"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"73","DOI":"10.1145\/2461912.2461926","article-title":"Scene reconstruction from high spatio-angular resolution light fields","volume":"32","author":"Kim","year":"2013","journal-title":"ACM Trans. Graph."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Dansereau, D.G., Schuster, G., Ford, J., and Wetzstein, G. 
(2017, January 21\u201326). A wide-field-of-view monocentric light field camera. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.400"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Taguchi, Y., Agrawal, A., Ramalingam, S., and Veeraraghavan, A. (2010, January 13\u201318). Axial light field for curved mirrors: Reflect your perspective, widen your view. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, CA, USA.","DOI":"10.1109\/CVPR.2010.5540172"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Lumsdaine, A., and Georgiev, T. (2009, January 16\u201317). The focused plenoptic camera. Proceedings of the IEEE International Conference on Computational Photography (ICCP), San Francisco, CA, USA.","DOI":"10.1109\/ICCPHOT.2009.5559008"},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/2766885","article-title":"Improving light field camera sample design with irregularity and aberration","volume":"34","author":"Wei","year":"2015","journal-title":"ACM Trans. Graph."},{"key":"ref_12","unstructured":"Ng, R., Levoy, M., Br\u00e9dif, M., Duval, G., Horowitz, M., and Hanrahan, P. (2005). Light Field Photography with a Hand-Held Plenoptic Camera. [Ph.D. Thesis, Stanford University]."},{"key":"ref_13","first-page":"2","article-title":"A real-time distributed light field camera","volume":"2002","author":"Yang","year":"2002","journal-title":"Render. Tech."},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"765","DOI":"10.1145\/1073204.1073259","article-title":"High performance imaging using large camera arrays","volume":"24","author":"Wilburn","year":"2005","journal-title":"ACM Trans. Graph."},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Zhang, C., and Chen, T. (2004). A self-reconfigurable camera array. 
ACM SIGGRAPH 2004 Sketches, Springer.","DOI":"10.1145\/1186223.1186412"},{"key":"ref_16","first-page":"1","article-title":"3-D refuse-derived fuel particle tracking-by-detection using a plenoptic camera system","volume":"71","author":"Zhang","year":"2022","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_17","first-page":"1","article-title":"Polarizing Camera Array System Equipment and Calibration Method","volume":"73","author":"Pu","year":"2023","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"2950","DOI":"10.1109\/TIM.2015.2440556","article-title":"Vision-based measurement for localization of objects in 3-D for robotic applications","volume":"64","author":"Lins","year":"2015","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"1197","DOI":"10.1109\/TIM.2015.2507412","article-title":"Automated robust metric calibration algorithm for multifocus plenoptic cameras","volume":"65","author":"Heinze","year":"2016","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_20","first-page":"1","article-title":"Novel precision vision measurement method between area-array imaging and linear-array imaging especially for dynamic objects","volume":"71","author":"Gao","year":"2022","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"76","DOI":"10.1109\/TIM.2019.2893010","article-title":"Virtual stereovision pose measurement of noncooperative space targets for a dual-arm space robot","volume":"69","author":"Peng","year":"2019","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_22","first-page":"1","article-title":"Camera-mirror binocular vision-based method for evaluating the performance of industrial robots","volume":"70","author":"Li","year":"2021","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Kaczmarek, A.L., and Blaschitz, B. (2021). 
Equal baseline camera array\u2014Calibration, testbed and applications. Appl. Sci., 11.","DOI":"10.3390\/app11188464"},{"key":"ref_24","doi-asserted-by":"crossref","first-page":"103256","DOI":"10.1016\/j.compind.2020.103256","article-title":"Simple and precise multi-view camera calibration for 3D reconstruction","volume":"123","author":"Perez","year":"2020","journal-title":"Comput. Ind."},{"key":"ref_25","unstructured":"Vaish, V., Wilburn, B., Joshi, N., and Levoy, M. (July, January 27). Using plane + parallax for calibrating dense camera arrays. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Washington, DC, USA."},{"key":"ref_26","first-page":"8742920","article-title":"Literature survey on stereo vision disparity map algorithms","volume":"1","author":"Hamzah","year":"2016","journal-title":"J. Sensors"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"1330","DOI":"10.1109\/34.888718","article-title":"A flexible new technique for camera calibration","volume":"22","author":"Zhang","year":"2000","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Schonberger, J.L., and Frahm, J.M. (2016, January 27\u201330). Structure-from-motion revisited. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.445"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Pei, Z., Li, Y., Ma, M., Li, J., Leng, C., Zhang, X., and Zhang, Y. (2019). Occluded-object 3D reconstruction using camera array synthetic aperture imaging. Sensors, 19.","DOI":"10.3390\/s19030607"},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"329","DOI":"10.1007\/s11265-021-01729-0","article-title":"Towards real-time 3D visualization with multiview RGB camera array","volume":"94","author":"Ke","year":"2022","journal-title":"J. Signal Process.
Syst."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"103505","DOI":"10.1016\/j.robot.2020.103505","article-title":"Multi-camera visual SLAM for off-road navigation","volume":"128","author":"Yang","year":"2020","journal-title":"Robot. Auton. Syst."},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"174305","DOI":"10.1109\/ACCESS.2020.3026108","article-title":"Multi-view camera pose estimation for robotic arm manipulation","volume":"8","author":"Ali","year":"2020","journal-title":"IEEE Access"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"9695","DOI":"10.1109\/TIM.2020.3006681","article-title":"3-D gaze-estimation method using a multi-camera-multi-light-source system","volume":"69","author":"Chi","year":"2020","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Liu, P., Li, X., Wang, Y., and Fu, Z. (2020). Multiple object tracking for dense pedestrians by Markov random field model with improvement on potentials. Sensors, 20.","DOI":"10.3390\/s20030628"},{"key":"ref_35","doi-asserted-by":"crossref","first-page":"204","DOI":"10.1109\/LSP.2018.2885213","article-title":"Selective Light Field Refocusing for Camera Arrays Using Bokeh Rendering and Superresolution","volume":"26","author":"Wang","year":"2019","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Wang, T.C., Efros, A.A., and Ramamoorthi, R. (2015, January 7\u201313). Occlusion-aware depth estimation using light-field cameras. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile.","DOI":"10.1109\/ICCV.2015.398"},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Schilling, H., Diebold, M., Rother, C., and J\u00e4hne, B. (2018, January 18\u201323). Trust your model: Light field depth estimation with inline occlusion handling. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00476"},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"30520","DOI":"10.1109\/ACCESS.2018.2843725","article-title":"MultiDIC: An open-source toolbox for multi-view 3D digital image correlation","volume":"6","author":"Solav","year":"2018","journal-title":"IEEE Access"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"30596","DOI":"10.1364\/OE.26.030596","article-title":"Group geometric calibration and rectification for circular multi-camera imaging system","volume":"26","author":"Abedi","year":"2018","journal-title":"Opt. Express"},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"14538","DOI":"10.1364\/OE.455429","article-title":"Universal calibration for a ring camera array based on a rotational target","volume":"30","author":"Ge","year":"2022","journal-title":"Opt. Express"},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"386","DOI":"10.1038\/nature11150","article-title":"Multiscale gigapixel photography","volume":"486","author":"Brady","year":"2012","journal-title":"Nature"},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"3179","DOI":"10.1364\/BOE.6.003179","article-title":"Camera array based light field microscopy","volume":"6","author":"Lin","year":"2015","journal-title":"Biomed. Opt. Express"},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"e74988","DOI":"10.7554\/eLife.74988","article-title":"Gigapixel imaging with a novel multi-camera array microscope","volume":"11","author":"Thomson","year":"2022","journal-title":"eLife"},{"key":"ref_44","doi-asserted-by":"crossref","first-page":"1","DOI":"10.1145\/2508363.2508390","article-title":"Picam: An ultra-thin high performance monolithic camera array","volume":"32","author":"Venkataraman","year":"2013","journal-title":"ACM Trans. 
Graph."},{"key":"ref_45","doi-asserted-by":"crossref","first-page":"1471","DOI":"10.1109\/LSP.2014.2343251","article-title":"Separable coded aperture for depth from a single image","volume":"21","author":"Lin","year":"2014","journal-title":"IEEE Signal Process. Lett."},{"key":"ref_46","doi-asserted-by":"crossref","first-page":"021106","DOI":"10.1117\/1.3442712","article-title":"Focused plenoptic camera and rendering","volume":"19","author":"Georgiev","year":"2010","journal-title":"J. Electron. Imaging"},{"key":"ref_47","unstructured":"Pless, R. (2003, January 18\u201320). Using many cameras as one. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Madison, WI, USA."},{"key":"ref_48","unstructured":"Li, H., Hartley, R., and Kim, J.h. (2008, January 23\u201328). A linear approach to motion estimation using generalized camera models. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA."},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Johannsen, O., Sulc, A., and Goldluecke, B. (2015, January 7\u201313). On linear structure from motion for light field cameras. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile.","DOI":"10.1109\/ICCV.2015.89"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Yu, P., Yang, W., Ma, Y., and Yu, J. (2017, January 22\u201329). Ray space features for plenoptic structure-from-motion. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.496"},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Nousias, S., Lourakis, M., and Bergeles, C. (2019, January 15\u201320). Large-scale, metric structure from motion for unordered light fields. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00341"},{"key":"ref_52","doi-asserted-by":"crossref","first-page":"3006","DOI":"10.1007\/s11263-021-01516-1","article-title":"3D scene reconstruction with an un-calibrated light field camera","volume":"129","author":"Zhang","year":"2021","journal-title":"Int. J. Comput. Vis."},{"key":"ref_53","doi-asserted-by":"crossref","unstructured":"Nousias, S., Lourakis, M., Keane, P., Ourselin, S., and Bergeles, C. (2020, January 25\u201328). A linear approach to absolute pose estimation for light fields. Proceedings of the International Conference on 3D Vision (3DV), Fukuoka, Japan.","DOI":"10.1109\/3DV50981.2020.00077"},{"key":"ref_54","doi-asserted-by":"crossref","first-page":"1641","DOI":"10.1109\/TIP.2022.3144891","article-title":"Relative pose estimation for light field cameras based on LF-point-LF-point correspondence model","volume":"31","author":"Zhang","year":"2022","journal-title":"IEEE Trans. Image Process."},{"key":"ref_55","doi-asserted-by":"crossref","first-page":"151","DOI":"10.1016\/j.cviu.2016.09.007","article-title":"Large-scale outdoor 3D reconstruction on a mobile device","volume":"157","author":"Sattler","year":"2017","journal-title":"Comput. Vis. Image Underst."},{"key":"ref_56","doi-asserted-by":"crossref","first-page":"91","DOI":"10.1023\/B:VISI.0000029664.99615.94","article-title":"Distinctive image features from scale-invariant keypoints","volume":"60","author":"Lowe","year":"2004","journal-title":"Int. J. Comput. Vis."},{"key":"ref_57","doi-asserted-by":"crossref","first-page":"138","DOI":"10.1049\/iet-cvi.2019.0716","article-title":"RootsGLOH2: Embedding RootSIFT \u2018square rooting\u2019 in sGLOH2","volume":"14","author":"Bellavia","year":"2020","journal-title":"IET Comput. 
Vis."},{"key":"ref_58","doi-asserted-by":"crossref","first-page":"756","DOI":"10.1109\/TPAMI.2004.17","article-title":"An efficient solution to the five-point relative pose problem","volume":"26","year":"2004","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_59","doi-asserted-by":"crossref","unstructured":"Fachada, S., Losfeld, A., Senoh, T., Lafruit, G., and Teratani, M. (2021, January 6\u20138). A calibration method for subaperture views of plenoptic 2.0 camera arrays. Proceedings of the IEEE 23rd International Workshop on Multimedia Signal Processing (MMSP), Tampere, Finland.","DOI":"10.1109\/MMSP53017.2021.9733556"},{"key":"ref_60","unstructured":"Adorjan, M. (2016). Opensfm: A Collaborative Structure-from-Motion System. [Ph.D. Thesis, Vienna University of Technology]."},{"key":"ref_61","doi-asserted-by":"crossref","unstructured":"Lourakis, M., and Terzakis, G. (2021, January 10\u201315). A globally optimal method for the PnP problem with MRP rotation parameterization. Proceedings of the IEEE International Conference on Pattern Recognition (ICPR), Milan, Italy.","DOI":"10.1109\/ICPR48806.2021.9412405"},{"key":"ref_62","doi-asserted-by":"crossref","first-page":"155","DOI":"10.1007\/s11263-008-0152-6","article-title":"EPnP: Efficient perspective-n-point camera pose estimation","volume":"81","author":"Lepetit","year":"2009","journal-title":"Int. J. Comput. 
Vis."},{"key":"ref_63","doi-asserted-by":"crossref","first-page":"3899","DOI":"10.1049\/iet-ipr.2019.0081","article-title":"High-precision refocusing method with one interpolation for camera array images","volume":"14","author":"Yang","year":"2020","journal-title":"IET Image Process."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/16\/3075\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,10]],"date-time":"2025-10-10T15:40:20Z","timestamp":1760110820000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/16\/16\/3075"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,8,21]]},"references-count":63,"journal-issue":{"issue":"16","published-online":{"date-parts":[[2024,8]]}},"alternative-id":["rs16163075"],"URL":"https:\/\/doi.org\/10.3390\/rs16163075","relation":{},"ISSN":["2072-4292"],"issn-type":[{"type":"electronic","value":"2072-4292"}],"subject":[],"published":{"date-parts":[[2024,8,21]]}}}