{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,4]],"date-time":"2026-03-04T17:25:22Z","timestamp":1772645122060,"version":"3.50.1"},"reference-count":54,"publisher":"MDPI AG","issue":"14","license":[{"start":{"date-parts":[[2020,7,14]],"date-time":"2020-07-14T00:00:00Z","timestamp":1594684800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2018R1D1A1B07041921"],"award-info":[{"award-number":["NRF-2018R1D1A1B07041921"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2019R1A2C1083813"],"award-info":[{"award-number":["NRF-2019R1A2C1083813"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003725","name":"National Research Foundation of Korea","doi-asserted-by":"publisher","award":["NRF-2016M3A9E1915855"],"award-info":[{"award-number":["NRF-2016M3A9E1915855"]}],"id":[{"id":"10.13039\/501100003725","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>Deep learning-based marker detection for autonomous drone landing is widely studied due to its superior detection performance. However, no previous study has addressed non-uniform motion-blurred input images, and most existing handcrafted and deep learning-based methods fail on these challenging inputs. 
To solve this problem, we propose a deep learning-based marker detection method for autonomous drone landing by (1) introducing a two-phase framework of deblurring and object detection, adopting a slimmed version of the deblur generative adversarial network (DeblurGAN) model and a You only look once version 2 (YOLOv2) detector, respectively, and (2) considering the balance between the processing time and accuracy of the system. To this end, we propose a channel-pruning framework for slimming the DeblurGAN model, called SlimDeblurGAN, without significant accuracy degradation. The experimental results on the two datasets showed that our proposed method exhibited higher performance and greater robustness than the previous methods in both deblurring and marker detection.<\/jats:p>","DOI":"10.3390\/s20143918","type":"journal-article","created":{"date-parts":[[2020,7,14]],"date-time":"2020-07-14T11:03:23Z","timestamp":1594724603000},"page":"3918","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":28,"title":["SlimDeblurGAN-Based Motion Deblurring and Marker Detection for Autonomous Drone Landing"],"prefix":"10.3390","volume":"20","author":[{"given":"Noi Quang","family":"Truong","sequence":"first","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"given":"Young Won","family":"Lee","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-7679-081X","authenticated-orcid":false,"given":"Muhammad","family":"Owais","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"given":"Dat 
Tien","family":"Nguyen","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"given":"Ganbayar","family":"Batchuluun","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"given":"Tuyen Danh","family":"Pham","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]},{"given":"Kang Ryoung","family":"Park","sequence":"additional","affiliation":[{"name":"Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Korea"}]}],"member":"1968","published-online":{"date-parts":[[2020,7,14]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"197","DOI":"10.1007\/s10846-013-9819-5","article-title":"Airborne vision-based navigation method for UAV accuracy landing using infrared lamps","volume":"72","author":"Gui","year":"2013","journal-title":"J. Intell. Robot. Syst."},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Forster, C., Faessler, M., Fontana, F., Werlberger, M., and Scaramuzza, D. (2015, January 26\u201330). Continuous on-board monocular-vision-based elevation mapping applied to autonomous landing of micro aerial vehicles. Proceedings of the IEEE International Conference on Robotics and Automation, Seattle, WA, USA.","DOI":"10.1109\/ICRA.2015.7138988"},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"1498","DOI":"10.1016\/j.cja.2013.07.049","article-title":"Use of land\u2019s cooperative object to estimate UAV\u2019s pose for autonomous landing","volume":"26","author":"Xu","year":"2013","journal-title":"Chin. J. 
Aeronaut."},{"key":"ref_4","doi-asserted-by":"crossref","first-page":"881","DOI":"10.1007\/s10514-016-9564-2","article-title":"Monocular vision-based real-time target recognition and tracking for autonomously landing an UAV in a cluttered shipboard environment","volume":"41","author":"Lin","year":"2017","journal-title":"Auton. Robots"},{"key":"ref_5","unstructured":"Lange, S., Sunderhauf, N., and Protzel, P. (2009, January 22\u201326). A vision based onboard approach for landing and position control of an autonomous multirotor UAV in GPS-denied environments. Proceedings of the IEEE International Conference on Advanced Robotics, Munich, Germany."},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Polvara, R., Sharma, S., Wan, J., Manning, A., and Sutton, R. (2017, January 6\u20138). Towards autonomous landing on a moving vessel through fiducial markers. Proceedings of the European Conference on Mobile Robots, Paris, France.","DOI":"10.1109\/ECMR.2017.8098671"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"369","DOI":"10.1007\/s10846-016-0399-z","article-title":"Vision based autonomous landing of multirotor UAV on moving platform","volume":"85","author":"Araar","year":"2017","journal-title":"J. Intell. Robot. Syst."},{"key":"ref_8","unstructured":"Bart\u00e1k, R., Hra\u0161ko, A., and Obdr\u017e\u00e1lek, D. (2014, January 21\u201323). On autonomous landing of AR. Drone: Hands-on experience. Proceedings of the 27th International Florida Artificial Intelligence Research Society Conference, Pensacola Beach, FL, USA."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Venugopalan, T.K., Taher, T., and Barbastathis, G. (2012, January 14\u201319). Autonomous landing of an unmanned aerial vehicle on an autonomous marine vehicle. 
Proceedings of the Oceans Conference, Hampton Roads, VA, USA.","DOI":"10.1109\/OCEANS.2012.6404893"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Wubben, J., Fabra, F., Calafate, C.T., Krzeszowski, T., Marquez-Barja, J.M., Cano, J.-C., and Manzoni, P. (2019). Accurate landing of unmanned aerial vehicles using ground pattern recognition. Electronics, 8.","DOI":"10.3390\/electronics8121532"},{"key":"ref_11","first-page":"199","article-title":"Vision analysis system for autonomous landing of micro drone","volume":"8","author":"Skoczylas","year":"2014","journal-title":"Acta Mech. Autom."},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Dotenco, S., Gallwitz, F., and Angelopoulou, E. (2014, January 6\u201312). Autonomous approach and landing for a low-cost quadrotor using monocular cameras. Proceedings of the European Conference on Computer Vision Workshops, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-16178-5_14"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Nguyen, P.H., Arsalan, M., Koo, J.H., Naqvi, R.A., Truong, N.Q., and Park, K.R. (2018). LightDenseYOLO: A fast and accurate marker tracker for autonomous UAV landing by visible light camera sensor on drone. Sensors, 18.","DOI":"10.3390\/s18061703"},{"key":"ref_14","doi-asserted-by":"crossref","first-page":"171","DOI":"10.1177\/1756829318757470","article-title":"Deep learning for vision-based micro aerial vehicle autonomous landing","volume":"10","author":"Yu","year":"2018","journal-title":"Int. J. Micro Air Veh."},{"key":"ref_15","unstructured":"(2020, January 15). Autonomous Quadrotor Landing Using Deep Reinforcement Learning. 
Available online: https:\/\/arxiv.org\/abs\/1709.03339."},{"key":"ref_16","doi-asserted-by":"crossref","first-page":"61639","DOI":"10.1109\/ACCESS.2019.2915944","article-title":"Deep learning-based super-resolution reconstruction and marker detection for drone landing","volume":"7","author":"Truong","year":"2019","journal-title":"IEEE Access"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","article-title":"Faster R-CNN: Towards real-time object detection with region proposal networks. IEEE Trans","volume":"39","author":"Ren","year":"2017","journal-title":"Pattern Anal. Mach. Intell."},{"key":"ref_18","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.-Y., and Berg, A.C. (2016, January 11\u201314). SSD: Single shot multibox detector. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"3604","DOI":"10.1109\/TVT.2020.2969427","article-title":"Detecting motion blurred vehicle logo in IoV using filter-DeblurGAN and VL-YOLO","volume":"69","author":"Zhou","year":"2020","journal-title":"IEEE Trans. Veh. Technol."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Wang, R., Ma, G., Qin, Q., Shi, Q., and Huang, J. (2018). Blind UAV images deblurring based on discriminative networks. Sensors, 18.","DOI":"10.3390\/s18092874"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Wang, J., and Olson, E. (2016, January 9\u201314). AprilTag 2: Efficient and robust fiducial detection. Proceedings of the IEEE\/RSJ International Conference on Intelligent Robots and Systems, Daejeon, Korea.","DOI":"10.1109\/IROS.2016.7759617"},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Kupyn, O., Budzan, V., Mykhailych, M., Mishkin, D., and Matas, J. (2018, January 18\u201323). 
DeblurGAN: Blind motion deblurring using conditional adversarial networks. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00854"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Nah, S., Kim, T.H., and Lee, K.M. (2017, January 21\u201326). Deep multi-scale convolutional neural network for dynamic scene deblurring. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.35"},{"key":"ref_24","unstructured":"(2020, January 15). Conditional Generative Adversarial Nets. Available online: https:\/\/arxiv.org\/abs\/1411.1784."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Ledig, C., Theis, L., Husz\u00e1r, F., Caballero, J., Cunningham, A., Acosta, A., Aitken, A., Tejani, A., Totz, J., and Wang, Z. (2017, January 21\u201326). Photo-realistic single image super-resolution using a generative adversarial network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.19"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Li, C., and Wand, M. (2016, January 11\u201314). Precomputed real-time texture synthesis with markovian generative adversarial networks. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46487-9_43"},{"key":"ref_27","unstructured":"(2020, January 04). Improved Training of Wasserstein GANs. Available online: https:\/\/arxiv.org\/abs\/1704.00028."},{"key":"ref_28","unstructured":"Salimans, T., Goodfellow, I., Zaremba, W., Cheung, V., Radford, A., Chen, X., and Chen, X. (2016, January 5\u201310). Improved techniques for training GANs. Proceedings of the 30th Conference on Neural Information Processing Systems, Barcelona, Spain."},{"key":"ref_29","unstructured":"(2020, January 05). Wasserstein GAN. 
Available online: https:\/\/arxiv.org\/abs\/1701.07875."},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Johnson, J., Alahi, A., and Fei-Fei, L. (2016, January 11\u201314). Perceptual losses for real-time style transfer and super-resolution. Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands.","DOI":"10.1007\/978-3-319-46475-6_43"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_32","unstructured":"(2020, January 05). Instance Normalization: The Missing Ingredient for Fast Stylization. Available online: https:\/\/arxiv.org\/abs\/1607.08022."},{"key":"ref_33","unstructured":"(2020, January 04). Deep Learning Using Rectified Linear Units (ReLU). Available online: https:\/\/arxiv.org\/abs\/1803.08375."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"Isola, P., Zhu, J.-Y., Zhou, T., and Efros, A.A. (2017, January 21\u201326). Image-to-image translation with conditional adversarial networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.632"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. (2009, January 20\u201325). ImageNet: A large-scale hierarchical image database. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"ref_36","unstructured":"(2020, January 05). MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. 
Available online: https:\/\/arxiv.org\/abs\/1704.04861."},{"key":"ref_37","doi-asserted-by":"crossref","unstructured":"Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S., and Zhang, C. (2017, January 22\u201329). Learning efficient convolutional networks through network slimming. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.298"},{"key":"ref_38","unstructured":"(2020, January 07). Distilling the Knowledge in a Neural Network. Available online: https:\/\/arxiv.org\/abs\/1503.02531."},{"key":"ref_39","unstructured":"(2020, January 07). Deep Learning with Dynamic Computation Graphs. Available online: https:\/\/arxiv.org\/abs\/1702.02181."},{"key":"ref_40","unstructured":"Zhang, P., Zhong, Y., and Li, X. (November, January 27). SlimYOLOv3: Narrower, faster and better for real-time UAV applications. Proceedings of the International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Girshick, R. (2015, January 7\u201313). Fast R-CNN. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.169"},{"key":"ref_42","unstructured":"Dai, J., Li, Y., He, K., and Sun, J. (2016, January 5\u201310). R-FCN: Object detection via region-based fully convolutional networks. Proceedings of the 30th Conference on Neural Information Processing Systems, Barcelona, Spain."},{"key":"ref_43","doi-asserted-by":"crossref","unstructured":"Lin, T.-Y., Goyal, P., Girshick, R., He, K., and Doll\u00e1r, P. (2017, January 22\u201329). Focal loss for dense object detection. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.324"},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). You only look once: Unified, real-time object detection. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.91"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Redmon, J., and Farhadi, A. (2017, January 21\u201326). YOLO9000: Better, faster, stronger. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.690"},{"key":"ref_46","unstructured":"(2020, January 10). YOLOv3: An Incremental Improvement. Available online: https:\/\/arxiv.org\/abs\/1804.02767."},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Lin, T.-Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014, January 6\u201312). Microsoft COCO: Common objects in context. Proceedings of the European Conference on Computer Vision, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_48","unstructured":"Gonzalez, R.C., and Woods, R.E. (2010). Digital Image Processing, Prentice-Hall. [3rd ed.]."},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"600","DOI":"10.1109\/TIP.2003.819861","article-title":"Image quality assessment: From error visibility to structural similarity","volume":"13","author":"Wang","year":"2004","journal-title":"IEEE Trans. Image Process."},{"key":"ref_50","unstructured":"Kupyn, O., Martyniuk, T., Wu, J., and Wang, Z. (November, January 27). DeblurGAN-v2: Deblurring (orders-of-magnitude) faster and better. Proceedings of the International Conference on Computer Vision, Seoul, Korea."},{"key":"ref_51","unstructured":"(2019, December 19). Jetson TX2 Module. Available online: https:\/\/www.nvidia.com\/en-us\/autonomous-machines\/embedded-systems-dev-kits-modules\/."},{"key":"ref_52","unstructured":"(2020, January 23). TensorFlow: The Python Deep Learning library. Available online: https:\/\/www.tensorflow.org\/."},{"key":"ref_53","unstructured":"(2020, January 23). CUDA. 
Available online: https:\/\/developer.nvidia.com\/cuda-toolkit-archive."},{"key":"ref_54","unstructured":"(2020, January 23). CUDNN. Available online: https:\/\/developer.nvidia.com\/cudnn."}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/14\/3918\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T09:51:29Z","timestamp":1760176289000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/20\/14\/3918"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,7,14]]},"references-count":54,"journal-issue":{"issue":"14","published-online":{"date-parts":[[2020,7]]}},"alternative-id":["s20143918"],"URL":"https:\/\/doi.org\/10.3390\/s20143918","relation":{},"ISSN":["1424-8220"],"issn-type":[{"value":"1424-8220","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,7,14]]}}}