{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,17]],"date-time":"2025-10-17T14:07:22Z","timestamp":1760710042301,"version":"build-2065373602"},"reference-count":38,"publisher":"MDPI AG","issue":"21","license":[{"start":{"date-parts":[[2019,11,5]],"date-time":"2019-11-05T00:00:00Z","timestamp":1572912000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Regional Leading Research Center Program of NRF Korea","award":["2019R1A5A8080290"],"award-info":[{"award-number":["2019R1A5A8080290"]}]},{"name":"Brain Korea 21 Plus Program of NRF Korea","award":["22A20130012814"],"award-info":[{"award-number":["22A20130012814"]}]},{"name":"Basic Science Research Programs of the Ministry of Education of NRF Korea","award":["NRF-2018R1A2B6005105"],"award-info":[{"award-number":["NRF-2018R1A2B6005105"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>In this paper, we propose a method of generating a color image from light detection and ranging (LiDAR) 3D reflection intensity. The proposed method is composed of two steps: projection of LiDAR 3D reflection intensity into 2D intensity, and color image generation from the projected intensity by using a fully convolutional network (FCN). The color image should be generated from a very sparse projected intensity image. For this reason, the FCN is designed to have an asymmetric network structure, i.e., the layer depth of the decoder in the FCN is deeper than that of the encoder. The well-known KITTI dataset for various scenarios is used for the proposed FCN training and performance evaluation. Performance of the asymmetric network structures are empirically analyzed for various depth combinations for the encoder and decoder. Through simulations, it is shown that the proposed method generates fairly good visual quality of images while maintaining almost the same color as the ground truth image. Moreover, the proposed FCN has much higher performance than conventional interpolation methods and generative adversarial network based Pix2Pix. One interesting result is that the proposed FCN produces shadow-free and daylight color images. 
This result is caused by the fact that the LiDAR sensor data is produced by the light reflection and is, therefore, not affected by sunlight and shadow.<\/jats:p>","DOI":"10.3390\/s19214818","type":"journal-article","created":{"date-parts":[[2019,11,7]],"date-time":"2019-11-07T02:48:31Z","timestamp":1573094911000},"page":"4818","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":12,"title":["Asymmetric Encoder-Decoder Structured FCN Based LiDAR to Color Image Generation"],"prefix":"10.3390","volume":"19","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8464-9362","authenticated-orcid":false,"given":"Hyun-Koo","family":"Kim","sequence":"first","affiliation":[{"name":"Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38544, Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6049-1759","authenticated-orcid":false,"given":"Kook-Yeol","family":"Yoo","sequence":"additional","affiliation":[{"name":"Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38544, Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0218-2333","authenticated-orcid":false,"given":"Ju H.","family":"Park","sequence":"additional","affiliation":[{"name":"Department of Electrical Engineering, Yeungnam University, Gyeongsan 38544, Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1719-7853","authenticated-orcid":false,"given":"Ho-Youl","family":"Jung","sequence":"additional","affiliation":[{"name":"Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38544, Korea"}]}],"member":"1968","published-online":{"date-parts":[[2019,11,5]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Wurm, K.M., K\u00fcmmerle, R., Stachniss, C., and Burgard, W. (2009, January 10\u201315). Improving robot navigation in structured outdoor environments by identifying vegetation from laser data. Proceedings of the IEEE\/RSJ International Conference on Intelligent Robots and Systems, St. Louis, MO, USA.","DOI":"10.1109\/IROS.2009.5354530"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"McManus, C., Furgale, P., and Barfoot, T.D. (2011, January 9\u201313). Towards appearance-based methods for lidar sensors. Proceedings of the IEEE International Conference on Robotics and Automation, Shanghai, China.","DOI":"10.1109\/ICRA.2011.5980098"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Tatoglu, A., and Pochiraju, K. (2012, January 14\u201318). Point cloud segmentation with LIDAR reflection intensity behavior. Proceedings of the IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA.","DOI":"10.1109\/ICRA.2012.6225224"},{"key":"ref_4","unstructured":"Hall, D.S. (2014). Color LiDAR Scanner. (8,675,181), U.S. Patent."},{"key":"ref_5","unstructured":"Reymann, C., and Lacroix, S. (October, January 28). Improving LiDAR point cloud classification using intensities and multiple echoes. Proceedings of the IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Hamburg, Germany."},{"key":"ref_6","doi-asserted-by":"crossref","first-page":"085203","DOI":"10.1088\/1361-6501\/aa76a3","article-title":"Automatic extraction of pavement markings on streets from point cloud data of mobile LiDAR","volume":"28","author":"Gao","year":"2017","journal-title":"Meas. Sci. 
Technol."},{"key":"ref_7","first-page":"1","article-title":"Deep Learning Based Gray Image Generation from 3D LiDAR Reflection Intensity","volume":"14","author":"Kim","year":"2019","journal-title":"IEMEK J. Embed. Syst. Appl."},{"key":"ref_8","doi-asserted-by":"crossref","first-page":"8250","DOI":"10.1109\/ACCESS.2017.2699686","article-title":"An Investigation of Interpolation Techniques to Generate 2D Intensity Image From LIDAR Data","volume":"5","author":"Ashraf","year":"2017","journal-title":"IEEE Access"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Dolson, J., Baek, J., Plagemann, C., and Thrun, S. (2010, January 13\u201318). Upsampling range data in dynamic environments. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA.","DOI":"10.1109\/CVPR.2010.5540086"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Premebida, C., Carreira, J., Batista, J., and Nunes, U. (2014, January 14\u201318). Pedestrian detection combining RGB and dense LIDAR data. Proceedings of the IEEE\/RSJ International Conference on Intelligent Robots and Systems, Chicago, IL, USA.","DOI":"10.1109\/IROS.2014.6943141"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Schlosser, J., Chow, C.K., and Kira, Z. (2016, January 16\u201321). Fusing LIDAR and images for pedestrian detection using convolutional neural networks. Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden.","DOI":"10.1109\/ICRA.2016.7487370"},{"key":"ref_12","unstructured":"Chen, X., Zang, A., and Huang, X. (2018). Fusion of RGB Images and LiDAR Data for Lane Classification. (9,710,714), U.S. Patent."},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Asvadi, A., Garrote, L., Premebida, C., Peixoto, P., and Nunes, U.J. (2017, January 22\u201324). Real-Time Deep ConvNet-Based Vehicle Detection Using 3D-LIDAR Reflection Intensity Data. Proceedings of the Third Iberian Robotics Conference, Advances in Intelligent Systems and Computing, At Seville, Spain.","DOI":"10.1007\/978-3-319-70836-2_39"},{"key":"ref_14","unstructured":"Chen, L., Fan, L., Chen, J., Cao, D., and Wang, F. (2017). A Full Density Stereo Matching System Based on the Combination of CNNs and Slanted-Planes. IEEE Trans. Syst. Man Cybern. Syst., 1\u201312. (in press)."},{"key":"ref_15","doi-asserted-by":"crossref","first-page":"1065","DOI":"10.1109\/TSMC.2016.2637279","article-title":"Autoencoder With Invertible Functions for Dimension Reduction and Image Reconstruction","volume":"48","author":"Yang","year":"2018","journal-title":"IEEE Trans. Syst. Man Cybern. Syst."},{"key":"ref_16","unstructured":"Liu, P.Y., and Lam, E.Y. (2018). Image Reconstruction Using Deep Learning. arXiv."},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"1806","DOI":"10.1109\/TSMC.2018.2850149","article-title":"Deep Convolutional Neural Networks for Human Action Recognition Using Depth Maps and Postures","volume":"49","author":"Kamel","year":"2019","journal-title":"IEEE Trans. Syst. Man Cybern. Syst."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Cheng, Z., Yang, Q., and Sheng, B. (2015, January 7\u201313). Deep Colorization. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile.","DOI":"10.1109\/ICCV.2015.55"},{"key":"ref_19","unstructured":"Zhang, R., Isola, P., and Efros, A.A. Colorful image colorization. 
{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Varga, D., and Szir\u00e1nyi, T. (2016, December 4\u20138). Fully automatic image colorization based on Convolutional Neural Network. Proceedings of the 23rd International Conference on Pattern Recognition (ICPR), Cancun, Mexico.","DOI":"10.1109\/ICPR.2016.7900208"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"110","DOI":"10.1145\/2897824.2925974","article-title":"Let there be color!: Joint end-to-end learning of global and local image priors for automatic image colorization with simultaneous classification","volume":"35","author":"Iizuka","year":"2016","journal-title":"ACM Trans. Graph."},{"key":"ref_22","unstructured":"Baldassarre, F., Mor\u00edn, D.G., and Rod\u00e9s-Guirao, L. (2017). Deep Koalarization: Image Colorization using CNNs and Inception-ResNet-v2. arXiv."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J., Zhou, T., and Efros, A.A. (2016). Image-to-Image Translation with Conditional Adversarial Networks. arXiv.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Cao, Y., Zhou, Z., Zhang, W., and Yu, Y. (2017, September 18\u201322). Unsupervised Diverse Colorization via Generative Adversarial Networks. Proceedings of the Joint European Conference on Machine Learning and Knowledge Discovery in Databases, Skopje, Macedonia.","DOI":"10.1007\/978-3-319-71249-9_10"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Chen, W., and Hays, J. (2018, June 18\u201323). SketchyGAN: Towards Diverse and Realistic Sketch to Image Synthesis. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00981"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Su\u00e1rez, P.L., Sappa, A.D., and Vintimilla, B.X. (2017, July 21\u201326). Infrared Image Colorization Based on a Triplet DCGAN Architecture. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA.","DOI":"10.1109\/CVPRW.2017.32"},{"key":"ref_27","doi-asserted-by":"crossref","unstructured":"Qayyum, U., Ahsan, Q., Mahmood, Z., and Chcmdary, M.A. (2018, January 9\u201313). Thermal colorization using deep neural network. Proceedings of the 15th International Bhurban Conference on Applied Sciences and Technology (IBCAST), Islamabad, Pakistan.","DOI":"10.1109\/IBCAST.2018.8312243"},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Hore, A., and Ziou, D. (2010, August 23\u201326). Image Quality Metrics: PSNR vs. SSIM. Proceedings of the 20th International Conference on Pattern Recognition, Istanbul, Turkey.","DOI":"10.1109\/ICPR.2010.579"},{"key":"ref_29","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_30","doi-asserted-by":"crossref","first-page":"2481","DOI":"10.1109\/TPAMI.2016.2644615","article-title":"SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation","volume":"39","author":"Badrinarayanan","year":"2017","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Yasrab, R., Gu, N., and Zhang, X. (2017). An Encoder-Decoder Based Convolution Neural Network (CNN) for Future Advanced Driver Assistance System (ADAS). Appl. Sci., 7.","DOI":"10.3390\/app7040312"},
{"key":"ref_32","unstructured":"Ioffe, S., and Szegedy, C. (2015). Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv."},{"key":"ref_33","first-page":"1929","article-title":"Dropout: A simple way to prevent neural networks from overfitting","volume":"15","author":"Srivastava","year":"2014","journal-title":"J. Mach. Learn. Res."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"1231","DOI":"10.1177\/0278364913491297","article-title":"Vision meets robotics: The KITTI dataset","volume":"32","author":"Geiger","year":"2013","journal-title":"Int. J. Robot. Res."},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Zeiler, M.D., and Fergus, R. (2014, September 6\u201312). Visualizing and understanding convolutional networks. Proceedings of the 13th European Conference on Computer Vision, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10590-1_53"},{"key":"ref_36","unstructured":"Kingma, D.P., and Ba, J. (2014). Adam: A Method for Stochastic Optimization. arXiv."},{"key":"ref_37","unstructured":"Acharya, T. (2002). Integrated Color Interpolation and Color Space Conversion Algorithm from 8-bit Bayer Pattern RGB Color Space to 12-bit YCrCb Color Space. (6,392,699), U.S. Patent."},{"key":"ref_38","doi-asserted-by":"crossref","first-page":"107","DOI":"10.1142\/S0218488598000094","article-title":"The vanishing gradient problem during learning recurrent neural nets and problem solutions","volume":"6","author":"Hochreiter","year":"1998","journal-title":"Int. J. Uncert. Fuzzi. Knowl. Based Syst."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/19\/21\/4818\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T13:32:07Z","timestamp":1760189527000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/19\/21\/4818"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,11,5]]},"references-count":38,"journal-issue":{"issue":"21","published-online":{"date-parts":[[2019,11]]}},"alternative-id":["s19214818"],"URL":"https:\/\/doi.org\/10.3390\/s19214818","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2019,11,5]]}}}