{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2022,3,28]],"date-time":"2022-03-28T22:56:27Z","timestamp":1648508187360},"reference-count":28,"publisher":"Walter de Gruyter GmbH","issue":"1","license":[{"start":{"date-parts":[[2017,12,1]],"date-time":"2017-12-01T00:00:00Z","timestamp":1512086400000},"content-version":"unspecified","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by-nc-nd\/4.0"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2017,12,1]]},"abstract":"<jats:title>Abstract<\/jats:title>\n               <jats:p>In the present paper, a new and improved visual sensor data fusion method is proposed that uses visible and far-infrared light sensors. Additionally, lux meter data are used for decision-level fusion of beliefs of recognised target classes. A database of images captured under four ambient light conditions was created using Canon and FLIR cameras.<\/jats:p>\n               <jats:p>The developed approach has been tested using the database images, neural network training and classification, particularly for low-light conditions. 
Improvements in target identification precision are demonstrated through practical implementation and testing of the proposed method.<\/jats:p>","DOI":"10.1515\/acss-2017-0015","type":"journal-article","created":{"date-parts":[[2017,12,30]],"date-time":"2017-12-30T22:15:23Z","timestamp":1514672123000},"page":"28-35","source":"Crossref","is-referenced-by-count":0,"title":["Target Identification Using Sensors of Different Nature"],"prefix":"10.1515","volume":"22","author":[{"given":"Anete","family":"Vagale","sequence":"first","affiliation":[{"name":"Riga Technical University , Latvia"}]},{"given":"Agris","family":"\u0145ikitenko","sequence":"additional","affiliation":[{"name":"Riga Technical University , Latvia"}]},{"given":"Eduards","family":"Slava","sequence":"additional","affiliation":[{"name":"Riga Technical University , Latvia"}]},{"given":"Ottar L.","family":"Osen","sequence":"additional","affiliation":[{"name":"Norwegian University of Science and Technology , Norway"}]}],"member":"374","published-online":{"date-parts":[[2017,12,27]]},"reference":[{"key":"2021040706234935099_j_acss-2017-0015_ref_001_w2aab2b8b3b1b7b1ab1ab1Aa","doi-asserted-by":"crossref","unstructured":"[1] K. Lenac, I. Maurovic, and I. Petrovic, \u201cMoving Objects Detection Using a Thermal Camera and IMU on a Vehicle,\u201d in 2015 International Conference on Electrical Drives and Power Electronics (EDPE), pp. 212\u2013219, 2015. https:\/\/doi.org\/10.1109\/edpe.2015.7325296","DOI":"10.1109\/EDPE.2015.7325296"},{"key":"2021040706234935099_j_acss-2017-0015_ref_002_w2aab2b8b3b1b7b1ab1ab2Aa","doi-asserted-by":"crossref","unstructured":"[2] S. Ghorashi, \u201cSpatial Selection and Target Identification are Separable Processes in Visual Search,\u201d Journal of Vision, vol. 10, no. 3, pp. 1\u201312, 2010. https:\/\/doi.org\/10.1167\/10.3.7","DOI":"10.1167\/10.3.7"},{"key":"2021040706234935099_j_acss-2017-0015_ref_003_w2aab2b8b3b1b7b1ab1ab3Aa","doi-asserted-by":"crossref","unstructured":"[3] B. 
Bhanu, \u201cAutomatic Target Recognition: State of the Art Survey,\u201d IEEE Transactions on Aerospace and Electronic Systems, vol. AES-22, no. 4, pp. 364\u2013379, 1986. https:\/\/doi.org\/10.1109\/taes.1986.310772","DOI":"10.1109\/TAES.1986.310772"},{"key":"2021040706234935099_j_acss-2017-0015_ref_004_w2aab2b8b3b1b7b1ab1ab4Aa","unstructured":"[4] R. E. Bethel, B. Shapo, and C. M. Kreucher, PDF Target Detection and Tracking, Signal Processing, vol. 90, no. 7, pp. 2164\u20132176, 2010."},{"key":"2021040706234935099_j_acss-2017-0015_ref_005_w2aab2b8b3b1b7b1ab1ab5Aa","doi-asserted-by":"crossref","unstructured":"[5] \u201cOptics and photonics. Spectral bands,\u201d 2007. https:\/\/doi.org\/10.3403\/30112984","DOI":"10.3403\/30112984"},{"key":"2021040706234935099_j_acss-2017-0015_ref_006_w2aab2b8b3b1b7b1ab1ab6Aa","unstructured":"[6] I. N. Perevoz\u010dikov, Multisensornaja Tehnologija Izmerenija Geometri\u010deskih Parametrov Tipovyh Detalej, 2009."},{"key":"2021040706234935099_j_acss-2017-0015_ref_007_w2aab2b8b3b1b7b1ab1ab7Aa","unstructured":"[7] H. Gr\u012bnbergs, 3D Punktu M\u0101ko\u0146a Apstr\u0101de Objektu Izseko\u0161anai Telp\u0101. R\u012bgas Tehnisk\u0101 Universit\u0101te, 2014."},{"key":"2021040706234935099_j_acss-2017-0015_ref_008_w2aab2b8b3b1b7b1ab1ab8Aa","doi-asserted-by":"crossref","unstructured":"[8] S. Rodr\u00edguez-Jim\u00e9nez, N. Burrus, and M. Abderrahim, \u201cA-Contrario Detection of Aerial Target Using a Time-of-Flight Camera,\u201d Sensor Signal Processing for Defence (SSPD 2012), 2012. https:\/\/doi.org\/10.1049\/ic.2012.0110","DOI":"10.1049\/ic.2012.0110"},{"key":"2021040706234935099_j_acss-2017-0015_ref_009_w2aab2b8b3b1b7b1ab1ab9Aa","unstructured":"[9] A. Canclini, L. Baroffio, M. Cesana, A. Redondi, and M. Tagliasacchi \u201cObject recognition in visual sensor networks based on compression and transmission of binary local features,\u201d [Online]. 
Available: http:\/\/www.icassp2014.org\/files\/ST\/1569921811%20-%20Object%20recognition%20in%20visual%20sensor%20networks.pdf"},{"key":"2021040706234935099_j_acss-2017-0015_ref_010_w2aab2b8b3b1b7b1ab1ac10Aa","doi-asserted-by":"crossref","unstructured":"[10] F. Castanedo, \u201cA Review of Data Fusion Techniques,\u201d The Scientific World Journal, vol. 2013, pp. 1\u201319, 2013. https:\/\/doi.org\/10.1155\/2013\/704504","DOI":"10.1155\/2013\/704504"},{"key":"2021040706234935099_j_acss-2017-0015_ref_011_w2aab2b8b3b1b7b1ab1ac11Aa","doi-asserted-by":"crossref","unstructured":"[11] J. Thomanek, M. Ritter, H. Lietz, and G. Wanielik, \u201cComparing Visual Data Fusion Techniques Using FIR and Visible Light Sensors to Improve Pedestrian Detection,\u201d 2011 International Conference on Digital Image Computing: Techniques and Applications, pp. 119\u2013125, Dec. 2011. https:\/\/doi.org\/10.1109\/dicta.2011.27","DOI":"10.1109\/DICTA.2011.27"},{"key":"2021040706234935099_j_acss-2017-0015_ref_012_w2aab2b8b3b1b7b1ab1ac12Aa","doi-asserted-by":"crossref","unstructured":"[12] M. Kristou, A. Ohya, and S. Yuta, \u201cTarget Person Identification and Following Based on Omnidirectional Camera and LRF Data Fusion,\u201d 2011 RO-MAN, pp. 419\u2013424, Jul. 2011. https:\/\/doi.org\/10.1109\/roman.2011.6005248","DOI":"10.1109\/ROMAN.2011.6005248"},{"key":"2021040706234935099_j_acss-2017-0015_ref_013_w2aab2b8b3b1b7b1ab1ac13Aa","doi-asserted-by":"crossref","unstructured":"[13] T. Nakamura, S. Haviland, D. Bershadsky, D. Magree, and E. N. Johnson, \u201cVision-Based Closed-Loop Tracking Using Micro Air Vehicles,\u201d 2016 IEEE Aerospace Conference, pp. 1\u201312, Mar. 2016. https:\/\/doi.org\/10.1109\/aero.2016.7500873","DOI":"10.1109\/AERO.2016.7500873"},{"key":"2021040706234935099_j_acss-2017-0015_ref_014_w2aab2b8b3b1b7b1ab1ac14Aa","doi-asserted-by":"crossref","unstructured":"[14] A. Rechy Romero, P. V. Koerich Borges, A. Elfes, and A. 
Pfrunder, \u201cEnvironment-Aware Sensor Fusion for Obstacle Detection,\u201d 2016 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), pp. 19\u201321, Sep. 2016. https:\/\/doi.org\/10.1109\/mfi.2016.7849476","DOI":"10.1109\/MFI.2016.7849476"},{"key":"2021040706234935099_j_acss-2017-0015_ref_015_w2aab2b8b3b1b7b1ab1ac15Aa","doi-asserted-by":"crossref","unstructured":"[15] P.-M. Hsu, M.-H. Li, and Y.-F. Su, \u201cObject detection and recognition by using sensor fusion,\u201d 11th IEEE International Conference on Control & Automation (ICCA), pp. 56\u201360, Jun. 2014. https:\/\/doi.org\/10.1109\/icca.2014.6870895","DOI":"10.1109\/ICCA.2014.6870895"},{"key":"2021040706234935099_j_acss-2017-0015_ref_016_w2aab2b8b3b1b7b1ab1ac16Aa","unstructured":"[16] S. Soldan, J. Rangel, and A. Kroll, \u201c3D Thermal Imaging: Fusion of Thermography and Depth Cameras,\u201d 2014 [Online]. Available: www.ndt.net\/?id=17665"},{"key":"2021040706234935099_j_acss-2017-0015_ref_017_w2aab2b8b3b1b7b1ab1ac17Aa","doi-asserted-by":"crossref","unstructured":"[17] A. Charbal, J. E. Dufour, F. Hild, M. Poncelet, L. Vincent, and S. Roux, \u201cHybrid Stereocorrelation Using Infrared and Visible Light Cameras,\u201d Experimental Mechanics, vol. 56, no. 5, pp. 845\u2013860, Jan. 2016. https:\/\/doi.org\/10.1007\/s11340-016-0127-4","DOI":"10.1007\/s11340-016-0127-4"},{"key":"2021040706234935099_j_acss-2017-0015_ref_018_w2aab2b8b3b1b7b1ab1ac18Aa","unstructured":"[18] S. Acharya and M. Kam, \u201cEvidence Combination for Hard and Soft Sensor Data Fusion,\u201d 14th International Conference on Information Fusion Chicago, Illinois, USA, July 5\u20138, 2011 [Online]. Available: http:\/\/fusion.isif.org\/proceedings\/Fusion_2011\/data\/papers\/133.pdf"},{"key":"2021040706234935099_j_acss-2017-0015_ref_019_w2aab2b8b3b1b7b1ab1ac19Aa","doi-asserted-by":"crossref","unstructured":"[19] X. Fan, P. Shi, J. Ni, and M. 
Li, \u201cA Thermal Infrared and Visible Images Fusion Based Approach for Multitarget Detection under Complex Environment,\u201d Mathematical Problems in Engineering, vol. 2015, pp. 1\u201311, 2015. https:\/\/doi.org\/10.1155\/2015\/750708","DOI":"10.1155\/2015\/750708"},{"key":"2021040706234935099_j_acss-2017-0015_ref_020_w2aab2b8b3b1b7b1ab1ac20Aa","doi-asserted-by":"crossref","unstructured":"[20] A. Wang, J. Jiang, and H. Zhang, \u201cMulti-sensor Image Decision Level Fusion Detection Algorithm Based on D-S Evidence Theory,\u201d 2014 Fourth International Conference on Instrumentation and Measurement, Computer, Communication and Control, pp. 620\u2013623, Sep. 2014. https:\/\/doi.org\/10.1109\/imccc.2014.132","DOI":"10.1109\/IMCCC.2014.132"},{"key":"2021040706234935099_j_acss-2017-0015_ref_021_w2aab2b8b3b1b7b1ab1ac21Aa","doi-asserted-by":"crossref","unstructured":"[21] J. Sun, H. Zhu, Z. Xu, and C. Han, \u201cPoisson image fusion based on Markov random field fusion model,\u201d Information Fusion, vol. 14, no. 3, pp. 241\u2013254, Jul. 2013.https:\/\/doi.org\/10.1016\/j.inffus.2012.07.003","DOI":"10.1016\/j.inffus.2012.07.003"},{"key":"2021040706234935099_j_acss-2017-0015_ref_022_w2aab2b8b3b1b7b1ab1ac22Aa","unstructured":"[22] Engineering ToolBox team, \u201cIlluminance \u2013 Recommended Light Level,\u201d 2017 [Online]. Available: www.engineeringtoolbox.com."},{"key":"2021040706234935099_j_acss-2017-0015_ref_023_w2aab2b8b3b1b7b1ab1ac23Aa","unstructured":"[23] OpenCV team, \u201cAbout OpenCV,\u201d 2017 [Online]. Available: http:\/\/opencv.org\/about.html."},{"key":"2021040706234935099_j_acss-2017-0015_ref_024_w2aab2b8b3b1b7b1ab1ac24Aa","unstructured":"[24] A. Matheus, \u201cIs It a Cat or Dog? A Neural Network Application in OpenCV,\u201d 2017 [Online]. 
Available: https:\/\/picoledelimao.github.io\/blog\/2016\/01\/31\/is-it-a-cat-or-dog-a-neural-network-application-in-opencv"},{"key":"2021040706234935099_j_acss-2017-0015_ref_025_w2aab2b8b3b1b7b1ab1ac25Aa","unstructured":"[25] R. Rivera, \u201cBoost C++ Libraries,\u201d 2014 [Online]. Available: http:\/\/www.boost.org"},{"key":"2021040706234935099_j_acss-2017-0015_ref_026_w2aab2b8b3b1b7b1ab1ac26Aa","doi-asserted-by":"crossref","unstructured":"[26] A. Hota, \u201cComparison of some Bag-of-Words models for image recognition,\u201d 2014 X International Symposium on Telecommunications (BIHTEL), pp. 1\u20135, Oct. 2014. https:\/\/doi.org\/10.1109\/bihtel.2014.6987648","DOI":"10.1109\/BIHTEL.2014.6987648"},{"key":"2021040706234935099_j_acss-2017-0015_ref_027_w2aab2b8b3b1b7b1ab1ac27Aa","doi-asserted-by":"crossref","unstructured":"[27] P. F. Alcantarilla, A. Bartoli, and A. J. Davison, \u201cKAZE Features,\u201d Lecture Notes in Computer Science, pp. 214\u2013227, 2012. https:\/\/doi.org\/10.1007\/978-3-642-33783-3_16","DOI":"10.1007\/978-3-642-33783-3_16"},{"key":"2021040706234935099_j_acss-2017-0015_ref_028_w2aab2b8b3b1b7b1ab1ac28Aa","unstructured":"[28] I. FLIR Systems, \u201cFLIR Vue Pro and Vue Pro R User Guide,\u201d 2016 [Online]. 
Available: http:\/\/www.flir.com\/uploadedFiles\/sUAS\/Products\/Vue-Pro\/FLIR-Vue-Pro-and-R-User-Guide.pdf."}],"container-title":["Applied Computer Systems"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/content.sciendo.com\/view\/journals\/acss\/22\/1\/article-p28.xml","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.sciendo.com\/article\/10.1515\/acss-2017-0015","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,4,7]],"date-time":"2021-04-07T20:58:05Z","timestamp":1617829085000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.sciendo.com\/article\/10.1515\/acss-2017-0015"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2017,12,1]]},"references-count":28,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2017,12,27]]},"published-print":{"date-parts":[[2017,12,1]]}},"alternative-id":["10.1515\/acss-2017-0015"],"URL":"https:\/\/doi.org\/10.1515\/acss-2017-0015","relation":{},"ISSN":["2255-8691"],"issn-type":[{"value":"2255-8691","type":"electronic"}],"subject":[],"published":{"date-parts":[[2017,12,1]]}}}