{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,9]],"date-time":"2025-11-09T03:43:33Z","timestamp":1762659813228,"version":"build-2065373602"},"reference-count":52,"publisher":"MDPI AG","issue":"9","license":[{"start":{"date-parts":[[2019,5,11]],"date-time":"2019-05-11T00:00:00Z","timestamp":1557532800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Recent deep-learning counting techniques revolve around two distinct features of data\u2014sparse data, which favors detection networks, or dense data where density map networks are used. Both techniques fail to address a third scenario, where dense objects are sparsely located. Raw aerial images represent sparse distributions of data in most situations. To address this issue, we propose a novel and exceedingly portable end-to-end model, DisCountNet, and an example dataset to test it on. DisCountNet is a two-stage network that uses theories from both detection and heat-map networks to provide a simple yet powerful design. The first stage, DiscNet, operates on the theory of coarse detection, but does so by converting a rich and high-resolution image into a sparse representation where only important information is encoded. Following this, CountNet operates on the dense regions of the sparse matrix to generate a density map, which provides fine locations and count predictions on densities of objects. Comparing the proposed network to current state-of-the-art networks, we find that we can maintain competitive performance while using a fraction of the computational complexity, resulting in a real-time solution.<\/jats:p>","DOI":"10.3390\/rs11091128","type":"journal-article","created":{"date-parts":[[2019,5,13]],"date-time":"2019-05-13T05:35:39Z","timestamp":1557725739000},"page":"1128","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":35,"title":["DisCountNet: Discriminating and Counting Network for Real-Time Counting and Localization of Sparse Objects in High-Resolution UAV Imagery"],"prefix":"10.3390","volume":"11","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-9358-2836","authenticated-orcid":false,"given":"Maryam","family":"Rahnemoonfar","sequence":"first","affiliation":[{"name":"Computer Vision and Remote Sensing Laboratory (Bina Lab), Texas A&amp;M University-Corpus Christi, Corpus Christi, TX 78412, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3524-4358","authenticated-orcid":false,"given":"Dugan","family":"Dobbs","sequence":"additional","affiliation":[{"name":"Computer Vision and Remote Sensing Laboratory (Bina Lab), Texas A&amp;M University-Corpus Christi, Corpus Christi, TX 78412, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-9949-8683","authenticated-orcid":false,"given":"Masoud","family":"Yari","sequence":"additional","affiliation":[{"name":"Computer Vision and Remote Sensing Laboratory (Bina Lab), Texas A&amp;M University-Corpus Christi, Corpus Christi, TX 78412, USA"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7996-0594","authenticated-orcid":false,"given":"Michael J.","family":"Starek","sequence":"additional","affiliation":[{"name":"Measurement Analytics Lab (MANTIS), Conrad Blucher Institute for Surveying and Science, Texas A&amp;M University-Corpus Christi, Corpus Christi, TX 78412, USA"}]}],"member":"1968","published-online":{"date-parts":[[2019,5,11]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Chan, A.B., Liang, Z.S.J., and Vasconcelos, N. (2008, January 23\u201328). Privacy preserving crowd monitoring: Counting people without people models or tracking. Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA.","DOI":"10.1109\/CVPR.2008.4587569"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Idrees, H., Saleemi, I., Seibert, C., and Shah, M. (2013, January 23\u201328). Multi-source multi-scale counting in extremely dense crowd images. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA.","DOI":"10.1109\/CVPR.2013.329"},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Shen, Z., Xu, Y., Ni, B., Wang, M., Hu, J., and Yang, X. (2018, January 18\u201322). Crowd counting via adversarial cross-scale consistency pursuit. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00550"},{"key":"ref_4","doi-asserted-by":"crossref","unstructured":"Boominathan, L., Kruthiventi, S.S., and Babu, R.V. (2016, January 15\u201319). Crowdnet: A deep convolutional network for dense crowd counting. Proceedings of the 24th ACM international conference on Multimedia, Amsterdam, The Netherlands.","DOI":"10.1145\/2964284.2967300"},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Liu, J., Gao, C., Meng, D., and Hauptmann, A.G. (2018, January 18\u201322). Decidenet: Counting varying density crowds through attention guided detection and density estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00545"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Guerrero-G\u00f3mez-Olmedo, R., Torre-Jim\u00e9nez, B., L\u00f3pez-Sastre, R., Maldonado-Basc\u00f3n, S., and Onoro-Rubio, D. (2015). Extremely overlapping vehicle counting. Pattern Recognition and Image Analysis, Springer.","DOI":"10.1007\/978-3-319-19390-8_48"},{"key":"ref_7","doi-asserted-by":"crossref","first-page":"283","DOI":"10.1080\/21681163.2016.1149104","article-title":"Microscopy cell counting and detection with fully convolutional regression networks","volume":"6","author":"Xie","year":"2018","journal-title":"Comput. Methods Biomech. Biomed. Eng. Imaging Vis."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Rahnemoonfar, M., and Sheppard, C. (2017). Deep count: Fruit counting based on deep simulated learning. Sensors, 17.","DOI":"10.3390\/s17040905"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Rahnemoonfar, M., and Sheppard, C. (2017). Real-time yield estimation based on deep learning. Proc. SPIE, 10218.","DOI":"10.1117\/12.2263097"},{"key":"ref_10","doi-asserted-by":"crossref","unstructured":"Huang, L.C., Kulkarni, K., Jha, A., Lohit, S., Jayasuriya, S., and Turaga, P. (2018, January 7\u201310). CS-VQA: Visual Question Answering with Compressively Sensed Images. Proceedings of the 2018 25th IEEE International Conference on Image Processing (ICIP), Athens, Greece.","DOI":"10.1109\/ICIP.2018.8451445"},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Fukui, A., Park, D.H., Yang, D., Rohrbach, A., Darrell, T., and Rohrbach, M. (2016). Multimodal compact bilinear pooling for visual question answering and visual grounding. arXiv.","DOI":"10.18653\/v1\/D16-1044"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Sheppard, C., and Rahnemoonfar, M. (2017, January 23\u201328). Real-time scene understanding for UAV imagery based on deep convolutional neural networks. Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA.","DOI":"10.1109\/IGARSS.2017.8127435"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Redmon, J., and Farhadi, A. (2017, January 21\u201326). YOLO9000: Better, faster, stronger. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.690"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Goyal, P., Girshick, R., He, K., and Doll\u00e1r, P. (2017, January 22\u201329). Focal loss for dense object detection. Proceedings of the IEEE International Conference on Computer Vision (ICCV), Venice, Italy.","DOI":"10.1109\/ICCV.2017.324"},{"key":"ref_15","unstructured":"Kamangir, H., Rahnemoonfar, M., Dobbs, D., Paden, J., and Fox, G.C. (2018, January 22\u201327). Detecting ice layers in Radar images with deep hybrid networks. Proceedings of the IEEE Conference on Geoscience and Remote Sensing (IGARSS), Valencia, Spain."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Rahnemoonfar, M., Robin, M., Miguel, M.V., Dobbs, D., and Adams, A. (2017, January 23\u201328). Flooded area detection from UAV images based on densely connected recurrent neural networks. Proceedings of the 2017 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Fort Worth, TX, USA.","DOI":"10.1109\/IGARSS.2018.8517946"},{"key":"ref_17","doi-asserted-by":"crossref","unstructured":"Chattopadhyay, P., Vedantam, R., Selvaraju, R.R., Batra, D., and Parikh, D. (2017, January 21\u201326). Counting everyday objects in everyday scenes. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.471"},{"key":"ref_18","unstructured":"Zhang, C., Li, H., Wang, X., and Yang, X. (2015, January 7\u201312). Cross-scene crowd counting via deep convolutional neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Hu, P., and Ramanan, D. (2017, January 21\u201326). Finding tiny faces. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.166"},{"key":"ref_20","unstructured":"Cortes, C., Lawrence, N.D., Lee, D.D., Sugiyama, M., and Garnett, R. (2015). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Advances in Neural Information Processing Systems 28, Curran Associates, Inc."},{"key":"ref_21","unstructured":"Zhang, Y., Zhou, D., Chen, S., Gao, S., and Ma, Y. (July, January 26). Single-image crowd counting via multi-column convolutional neural network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA."},{"key":"ref_22","unstructured":"Fiaschi, L., K\u00f6the, U., Nair, R., and Hamprecht, F.A. (2012, January 11\u201315). Learning to count with regression forest and structured labels. Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012), Tsukuba, Japan."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Sam, D.B., Surya, S., and Babu, R.V. (2017, January 21\u201326). Switching convolutional neural network for crowd counting. Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.429"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Idrees, H., Tayyab, M., Athrey, K., Zhang, D., Al-Maadeed, S., Rajpoot, N., and Shah, M. (2018, January 8\u201314). Composition loss for counting, density map estimation and localization in dense crowds. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany.","DOI":"10.1007\/978-3-030-01216-8_33"},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Walach, E., and Wolf, L. (2016). Learning to count with cnn boosting. Computer Vision\u2014ECCV 2016, Springer.","DOI":"10.1007\/978-3-319-46475-6_41"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Deb, D., and Ventura, J. (2018, January 18\u201322). An aggregated multicolumn dilated convolution network for perspective-free counting. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPRW.2018.00057"},{"key":"ref_27","doi-asserted-by":"crossref","first-page":"303","DOI":"10.1007\/s00138-008-0128-0","article-title":"Bearcam: Automated wildlife monitoring at the arctic circle","volume":"20","author":"Wawerla","year":"2009","journal-title":"Mach. Vis. Appl."},{"key":"ref_28","doi-asserted-by":"crossref","unstructured":"Shang, C., Ai, H., and Bai, B. (2016, January 25\u201328). End-to-end crowd counting via joint learning local and global count. Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA.","DOI":"10.1109\/ICIP.2016.7532551"},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Liu, X., van de Weijer, J., and Bagdanov, A.D. (2018, January 18\u201322). Leveraging unlabeled data for crowd counting by learning to rank. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00799"},{"key":"ref_30","doi-asserted-by":"crossref","unstructured":"Babu Sam, D., Sajjan, N.N., Venkatesh Babu, R., and Srinivasan, M. (2018, January 18\u201322). Divide and grow: Capturing huge diversity in crowd images with incrementally growing CNN. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00381"},{"key":"ref_31","doi-asserted-by":"crossref","unstructured":"Shi, Z., Zhang, L., Liu, Y., Cao, X., Ye, Y., Cheng, M.M., and Zheng, G. (2018, January 18\u201322). Crowd counting with deep negative correlation learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00564"},{"key":"ref_32","doi-asserted-by":"crossref","first-page":"2220","DOI":"10.1109\/ACCESS.2017.2782260","article-title":"Vehicle detection and counting in high-resolution aerial images using convolutional regression neural network","volume":"6","author":"Tayara","year":"2018","journal-title":"IEEE Access"},{"key":"ref_33","doi-asserted-by":"crossref","first-page":"781","DOI":"10.1109\/LRA.2017.2651944","article-title":"Counting apples and oranges with deep learning: A data-driven approach","volume":"2","author":"Chen","year":"2017","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_34","doi-asserted-by":"crossref","first-page":"79","DOI":"10.1186\/s13007-017-0224-0","article-title":"TasselNet: Counting maize tassels in the wild via local counts regression network","volume":"13","author":"Lu","year":"2017","journal-title":"Plant Methods"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Chamoso, P., Raveane, W., Parra, V., and Gonz\u00e1lez, A. (2014). UAVs applied to the counting and monitoring of animals. Ambient Intelligence-Software and Applications, Springer.","DOI":"10.1007\/978-3-319-07596-9_8"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Rivas, A., Chamoso, P., Gonz\u00e1lez-Briones, A., and Corchado, J. (2018). Detection of Cattle Using Drones and Convolutional Neural Networks. Sensors, 18.","DOI":"10.3390\/s18072048"},{"key":"ref_37","doi-asserted-by":"crossref","first-page":"2623","DOI":"10.1080\/01431161.2017.1280639","article-title":"Adapting astronomical source detection software to help detect animals in thermal images obtained by unmanned aerial systems","volume":"38","author":"Longmore","year":"2017","journal-title":"Int. J. Remote Sens."},{"key":"ref_38","doi-asserted-by":"crossref","unstructured":"Koskowich, B.J., Rahnemoonfar, M., and Starek, M. (2018, January 22\u201327). Virtualot\u2014A Framework Enabling Real-Time Coordinate Transformation & Occlusion Sensitive Tracking Using UAS Products, Deep Learning Object Detection & Traditional Object Tracking Techniques. Proceedings of the IEEE International Geoscience and Remote Sensing Symposium(IGARSS), Valencia, Spain.","DOI":"10.1109\/IGARSS.2018.8518124"},{"key":"ref_39","doi-asserted-by":"crossref","first-page":"1519","DOI":"10.3390\/rs4061519","article-title":"Development of a UAV-LiDAR system with application to forest inventory","volume":"4","author":"Wallace","year":"2012","journal-title":"Remote Sens."},{"key":"ref_40","doi-asserted-by":"crossref","first-page":"3390","DOI":"10.3390\/rs4113390","article-title":"Unmanned aerial vehicle (UAV) for monitoring soil erosion in Morocco","volume":"4","author":"Marzolff","year":"2012","journal-title":"Remote Sens."},{"key":"ref_41","doi-asserted-by":"crossref","first-page":"137","DOI":"10.1139\/juvs-2015-0021","article-title":"Wildlife research and management methods in the 21st century: Where do unmanned aircraft fit in?","volume":"3","author":"Chabot","year":"2015","journal-title":"J. Unmanned Veh. Syst."},{"key":"ref_42","doi-asserted-by":"crossref","first-page":"214","DOI":"10.1177\/0030727018781876","article-title":"Perspectives on the use of unmanned aerial systems to monitor cattle","volume":"47","author":"Barbedo","year":"2018","journal-title":"Outlook Agric."},{"key":"ref_43","doi-asserted-by":"crossref","first-page":"1422","DOI":"10.1016\/j.imavis.2006.12.011","article-title":"Vehicle detection from high-resolution satellite imagery using morphological shared-weight neural networks","volume":"25","author":"Jin","year":"2007","journal-title":"Image Vis. Comput."},{"key":"ref_44","doi-asserted-by":"crossref","unstructured":"Jiang, Q., Cao, L., Cheng, M., Wang, C., and Li, J. (2015, January 14\u201317). Deep neural networks-based vehicle detection in satellite images. Proceedings of the 2015 International Symposium on Bioelectronics and Bioinformatics (ISBB), Beijing, China.","DOI":"10.1109\/ISBB.2015.7344954"},{"key":"ref_45","doi-asserted-by":"crossref","unstructured":"Miyamoto, H., Uehara, K., Murakawa, M., Sakanashi, H., Nasato, H., Kouyama, T., and Nakamura, R. (2018, January 22\u201327). Object Detection in Satellite Imagery Using 2-Step Convolutional Neural Networks. Proceedings of the IEEE International Geoscience and Remote Sensing Symposium(IGARSS), Valencia, Spain.","DOI":"10.1109\/IGARSS.2018.8518587"},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Starek, M.J., Davis, T., Prouty, D., and Berryhill, J. (2014, January 20\u201321). Small-scale UAS for geoinformatics applications on an island campus. Proceedings of the 2014 Ubiquitous Positioning Indoor Navigation and Location Based Service (UPINLBS), Corpus Christi, TX, USA.","DOI":"10.1109\/UPINLBS.2014.7033718"},{"key":"ref_47","doi-asserted-by":"crossref","first-page":"91","DOI":"10.1023\/B:VISI.0000029664.99615.94","article-title":"Distinctive image features from scale-invariant keypoints","volume":"60","author":"Lowe","year":"2004","journal-title":"Int. J. Comput. Vis."},{"key":"ref_48","unstructured":"United States Department of Agriculture (2009). Balancing Animals with Your Forage."},{"key":"ref_49","doi-asserted-by":"crossref","unstructured":"Ronneberger, O., Fischer, P., and Brox, T. (2015). U-net: Convolutional networks for biomedical image segmentation. Medical Image Computing and Computer-Assisted Intervention\u2014MICCAI 2015, Springer.","DOI":"10.1007\/978-3-319-24574-4_28"},{"key":"ref_50","doi-asserted-by":"crossref","unstructured":"Li, Y., Zhang, X., and Chen, D. (2018, January 18\u201322). Csrnet: Dilated convolutional neural networks for understanding the highly congested scenes. Proceedings of the IEEE conference on computer vision and pattern recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00120"},{"key":"ref_51","unstructured":"Wang, Z., Simoncelli, E.P., and Bovik, A.C. (2003, January 9\u201312). Multiscale structural similarity for image quality assessment. Proceedings of the Thirty-Seventh Asilomar Conference on Signals, Systems & Computers, Pacific Grove, CA, USA."},{"key":"ref_52","unstructured":"Kingma, D.P., and Ba, J. (2014). Adam: A method for stochastic optimization. arXiv."}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/11\/9\/1128\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T12:51:04Z","timestamp":1760187064000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/11\/9\/1128"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2019,5,11]]},"references-count":52,"journal-issue":{"issue":"9","published-online":{"date-parts":[[2019,5]]}},"alternative-id":["rs11091128"],"URL":"https:\/\/doi.org\/10.3390\/rs11091128","relation":{},"ISSN":["2072-4292"],"issn-type":[{"type":"electronic","value":"2072-4292"}],"subject":[],"published":{"date-parts":[[2019,5,11]]}}}