{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,10]],"date-time":"2026-04-10T10:37:55Z","timestamp":1775817475256,"version":"3.50.1"},"reference-count":51,"publisher":"MDPI AG","issue":"15","license":[{"start":{"date-parts":[[2022,8,5]],"date-time":"2022-08-05T00:00:00Z","timestamp":1659657600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"Korea Institute of Energy Technology Evaluation and Planning (KETEP)","award":["20194010201830"],"award-info":[{"award-number":["20194010201830"]}]},{"name":"Korea Institute of Energy Technology Evaluation and Planning (KETEP)","award":["R17XA05-20"],"award-info":[{"award-number":["R17XA05-20"]}]},{"name":"Ministry of Trade, Industry and Energy, Republic of Korea","award":["20194010201830"],"award-info":[{"award-number":["20194010201830"]}]},{"name":"Ministry of Trade, Industry and Energy, Republic of Korea","award":["R17XA05-20"],"award-info":[{"award-number":["R17XA05-20"]}]},{"name":"Korea Electric Power Corporation","award":["20194010201830"],"award-info":[{"award-number":["20194010201830"]}]},{"name":"Korea Electric Power Corporation","award":["R17XA05-20"],"award-info":[{"award-number":["R17XA05-20"]}]},{"name":"Kwangwoon University","award":["20194010201830"],"award-info":[{"award-number":["20194010201830"]}]},{"name":"Kwangwoon University","award":["R17XA05-20"],"award-info":[{"award-number":["R17XA05-20"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Remote Sensing"],"abstract":"<jats:p>Object counting is an indispensable task in manufacturing and management. Recently, the development of image-processing techniques and deep learning object detection has achieved excellent performance in object-counting tasks. 
Accordingly, we propose a novel small-size smart counting system composed of a low-cost hardware device and a cloud-based object-counting software server to implement an accurate counting function and overcome the trade-off presented by the computing power of local hardware. The cloud-based object-counting software consists of a model adapted to the object-counting task through a novel DBC-NMS (our own technique) and hyperparameter tuning of deep-learning-based object-detection methods. With the power of DBC-NMS and hyperparameter tuning, the performance of the cloud-based object-counting software is competitive on commonly used public datasets (CARPK and SKU110K) and our custom dataset of small pills. Our cloud-based object-counting software achieves a mean absolute error (MAE) of 1.03 and a root mean squared error (RMSE) of 1.20 on the Pill dataset. These results demonstrate that the proposed smart counting system accurately detects and counts objects in densely distributed scenes. In addition, the proposed system shows a reasonable and efficient cost\u2013performance ratio by converging low-cost hardware and cloud-based software.<\/jats:p>","DOI":"10.3390\/rs14153761","type":"journal-article","created":{"date-parts":[[2022,8,9]],"date-time":"2022-08-09T04:16:55Z","timestamp":1660018615000},"page":"3761","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":15,"title":["Smart Count System Based on Object Detection Using Deep Learning"],"prefix":"10.3390","volume":"14","author":[{"given":"Jiwon","family":"Moon","sequence":"first","affiliation":[{"name":"Department of Electrical Engineering, Kwangwoon University, Seoul 01897, Korea"}]},{"given":"Sangkyu","family":"Lim","sequence":"additional","affiliation":[{"name":"Department of Electrical Engineering, Kwangwoon University, Seoul 01897, Korea"}]},{"given":"Hakjun","family":"Lee","sequence":"additional","affiliation":[{"name":"Department of Electrical 
Engineering, Kwangwoon University, Seoul 01897, Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-2427-4191","authenticated-orcid":false,"given":"Seungbum","family":"Yu","sequence":"additional","affiliation":[{"name":"Department of Electrical Engineering, Kwangwoon University, Seoul 01897, Korea"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3416-9176","authenticated-orcid":false,"given":"Ki-Baek","family":"Lee","sequence":"additional","affiliation":[{"name":"Department of Electrical Engineering, Kwangwoon University, Seoul 01897, Korea"}]}],"member":"1968","published-online":{"date-parts":[[2022,8,5]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","unstructured":"Phromlikhit, C., Cheevasuvit, F., and Yimman, S. (2012, January 5\u20137). Tablet counting machine base on image processing. Proceedings of the 5th 2012 Biomedical Engineering International Conference, Muang, Thailand.","DOI":"10.1109\/BMEiCon.2012.6465508"},{"key":"ref_2","doi-asserted-by":"crossref","unstructured":"Furferi, R., Governi, L., Puggelli, L., Servi, M., and Volpe, Y. (2019). Machine vision system for counting small metal parts in electro-deposition industry. Appl. Sci., 9.","DOI":"10.20944\/preprints201905.0243.v1"},{"key":"ref_3","unstructured":"Nudol, C. (2004, January 26\u201329). Automatic jewel counting using template matching. Proceedings of the IEEE International Symposium on Communications and Information Technology, 2004, ISCIT 2004, Sapporo, Japan."},{"key":"ref_4","first-page":"103","article-title":"Design of counting-machine based on CCD sensor and DSP","volume":"4","author":"Sun","year":"2008","journal-title":"Transducer Microsyst. Technol."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Venkatalakshmi, B., and Thilagavathi, K. (2013, January 11\u201312). Automatic red blood cell counting using hough transform. 
Proceedings of the 2013 IEEE Conference on Information & Communication Technologies, Thuckalay, India.","DOI":"10.1109\/CICT.2013.6558103"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Gu, Y., Li, L., Fang, F., Rice, M., Ng, J., Xiong, W., and Lim, J.H. (2019, January 22\u201325). An Adaptive Fitting Approach for the Visual Detection and Counting of Small Circular Objects in Manufacturing Applications. Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan.","DOI":"10.1109\/ICIP.2019.8803361"},{"key":"ref_7","unstructured":"Baygin, M., Karakose, M., Sarimaden, A., and Akin, E. (2018). An image processing based object counting approach for machine vision application. arXiv."},{"key":"ref_8","doi-asserted-by":"crossref","unstructured":"Wang, C., Zhang, H., Yang, L., Liu, S., and Cao, X. (2015, January 26\u201330). Deep people counting in extremely dense crowds. Proceedings of the 23rd ACM international conference on Multimedia, Brisbane, Australia.","DOI":"10.1145\/2733373.2806337"},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Xue, Y., Ray, N., Hugh, J., and Bigras, G. (2016). Cell counting by regression using convolutional neural network. Proceedings of the European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-319-46604-0_20"},{"key":"ref_10","first-page":"1324","article-title":"Learning to count objects in images","volume":"23","author":"Lempitsky","year":"2010","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Zhang, Y., Zhou, D., Chen, S., Gao, S., and Ma, Y. (2016, January 27\u201330). Single-image crowd counting via multi-column convolutional neural network. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.70"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Sindagi, V.A., and Patel, V.M. (September, January 29). 
Cnn-based cascaded multi-task learning of high-level prior and density estimation for crowd counting. Proceedings of the 2017 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS), Lecce, Italy.","DOI":"10.1109\/AVSS.2017.8078491"},{"key":"ref_13","doi-asserted-by":"crossref","first-page":"3642","DOI":"10.1109\/TGRS.2020.3020555","article-title":"Counting From Sky: A Large-Scale Data Set for Remote Sensing Object Counting and a Benchmark Method","volume":"59","author":"Gao","year":"2020","journal-title":"IEEE Trans. Geosci. Remote Sens."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Kilic, E., and Ozturk, S. (2021). An accurate car counting in aerial images based on convolutional neural networks. J. Ambient. Intell. Humaniz. Comput., 1\u201310.","DOI":"10.1007\/s12652-021-03377-5"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Hsieh, M.R., Lin, Y.L., and Hsu, W.H. (2017, January 22\u201329). Drone-based object counting by spatially regularized regional proposal network. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.446"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Goldman, E., Herzig, R., Eisenschtat, A., Goldberger, J., and Hassner, T. (2019, January 15\u201320). Precise detection in densely packed scenes. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA.","DOI":"10.1109\/CVPR.2019.00537"},{"key":"ref_17","unstructured":"Cai, Y., Du, D., Zhang, L., Wen, L., Wang, W., Wu, Y., and Lyu, S. (2019). Guided attention network for object detection and counting on drones. arXiv."},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"2876","DOI":"10.1109\/TIP.2021.3055632","article-title":"A self-training approach for point-supervised object detection and counting in crowds","volume":"30","author":"Wang","year":"2021","journal-title":"IEEE Trans. 
Image Process."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"9102","DOI":"10.1109\/ACCESS.2020.2964608","article-title":"Real-time apple detection system using embedded systems with hardware accelerators: An edge AI application","volume":"8","author":"Mazzia","year":"2020","journal-title":"IEEE Access"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Adarsh, P., Rathi, P., and Kumar, M. (2020, January 6\u20137). YOLO v3-Tiny: Object Detection and Recognition using one stage improved model. Proceedings of the 2020 6th International Conference on Advanced Computing and Communication Systems (ICACCS), Coimbatore, India.","DOI":"10.1109\/ICACCS48705.2020.9074315"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"2766","DOI":"10.1109\/JSEN.2019.2954287","article-title":"The smart image recognition mechanism for crop harvesting system in intelligent agriculture","volume":"20","author":"Horng","year":"2019","journal-title":"IEEE Sensors J."},{"key":"ref_22","doi-asserted-by":"crossref","unstructured":"Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.C. (2018, January 18\u201323). Mobilenetv2: Inverted residuals and linear bottlenecks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00474"},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., and Berg, A.C. (2016). Ssd: Single shot multibox detector. Proceedings of the European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"ref_24","first-page":"91","article-title":"Faster r-cnn: Towards real-time object detection with region proposal networks","volume":"28","author":"Ren","year":"2015","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). 
You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.91"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Redmon, J., and Farhadi, A. (2017, January 21\u201326). YOLO9000: Better, faster, stronger. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.690"},{"key":"ref_27","unstructured":"Redmon, J., and Farhadi, A. (2018). Yolov3: An incremental improvement. arXiv."},{"key":"ref_28","unstructured":"Bochkovskiy, A., Wang, C.Y., and Liao, H.Y.M. (2020). Yolov4: Optimal speed and accuracy of object detection. arXiv."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Goyal, P., Girshick, R., He, K., and Doll\u00e1r, P. (2017, January 22\u201329). Focal loss for dense object detection. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.324"},{"key":"ref_30","unstructured":"Zhou, X., Wang, D., and Kr\u00e4henb\u00fchl, P. (2019). Objects as points. arXiv."},{"key":"ref_31","doi-asserted-by":"crossref","first-page":"9651","DOI":"10.1109\/TIE.2019.2899548","article-title":"Simultaneously detecting and counting dense vehicles from drone images","volume":"66","author":"Li","year":"2019","journal-title":"IEEE Trans. Ind. Electron."},{"key":"ref_32","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., and Fei-Fei, L. (2009, January 20\u201325). Imagenet: A large-scale hierarchical image database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"ref_33","unstructured":"Simonyan, K., and Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. 
arXiv."},{"key":"ref_34","doi-asserted-by":"crossref","unstructured":"He, K., Zhang, X., Ren, S., and Sun, J. (2016, January 27\u201330). Deep residual learning for image recognition. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.90"},{"key":"ref_35","doi-asserted-by":"crossref","unstructured":"Huang, G., Liu, Z., Van Der Maaten, L., and Weinberger, K.Q. (2017, January 21\u201326). Densely connected convolutional networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.243"},{"key":"ref_36","doi-asserted-by":"crossref","unstructured":"Xie, S., Girshick, R., Doll\u00e1r, P., Tu, Z., and He, K. (2017, January 21\u201326). Aggregated residual transformations for deep neural networks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.","DOI":"10.1109\/CVPR.2017.634"},{"key":"ref_37","unstructured":"Tan, M., and Le, Q. (2019, January 9\u201315). Efficientnet: Rethinking model scaling for convolutional neural networks. Proceedings of the International Conference on Machine Learning, PMLR, Long Beach, CA, USA."},{"key":"ref_38","unstructured":"Howard, A.G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., and Adam, H. (2017). Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv."},{"key":"ref_39","unstructured":"Iandola, F.N., Han, S., Moskewicz, M.W., Ashraf, K., Dally, W.J., and Keutzer, K. (2016). SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5 MB model size. arXiv."},{"key":"ref_40","doi-asserted-by":"crossref","unstructured":"Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014, January 23\u201328). Rich feature hierarchies for accurate object detection and semantic segmentation. 
Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA.","DOI":"10.1109\/CVPR.2014.81"},{"key":"ref_41","doi-asserted-by":"crossref","unstructured":"Girshick, R. (2015, January 7\u201313). Fast r-cnn. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.","DOI":"10.1109\/ICCV.2015.169"},{"key":"ref_42","doi-asserted-by":"crossref","unstructured":"He, K., Gkioxari, G., Doll\u00e1r, P., and Girshick, R. (2017, January 22\u201329). Mask r-cnn. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.322"},{"key":"ref_43","unstructured":"Law, H., and Deng, J. (July, January 14). Cornernet: Detecting objects as paired keypoints. Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany."},{"key":"ref_44","unstructured":"Wang, C.Y., Yeh, I.H., and Liao, H.Y.M. (2021). You Only Learn One Representation: Unified Network for Multiple Tasks. arXiv."},{"key":"ref_45","unstructured":"Zhou, X., Koltun, V., and Kr\u00e4henb\u00fchl, P. (2021). Probabilistic two-stage detection. arXiv."},{"key":"ref_46","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014). Microsoft coco: Common objects in context. Proceedings of the European Conference on Computer Vision, Springer.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_47","doi-asserted-by":"crossref","unstructured":"Cai, Z., and Vasconcelos, N. (2018, January 18\u201323). Cascade r-cnn: Delving into high quality object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA.","DOI":"10.1109\/CVPR.2018.00644"},{"key":"ref_48","doi-asserted-by":"crossref","unstructured":"Liu, Z., Lin, Y., Cao, Y., Hu, H., Wei, Y., Zhang, Z., Lin, S., and Guo, B. (2021). 
Swin transformer: Hierarchical vision transformer using shifted windows. arXiv.","DOI":"10.1109\/ICCV48922.2021.00986"},{"key":"ref_49","doi-asserted-by":"crossref","first-page":"383","DOI":"10.1016\/S0031-3203(96)00094-5","article-title":"Perspective-transformation-invariant generalized Hough transform for perspective planar shape detection and matching","volume":"30","author":"Lo","year":"1997","journal-title":"Pattern Recognit."},{"key":"ref_50","unstructured":"Aich, S., and Stavness, I. (2018). Improving object counting with heatmap regulation. arXiv."},{"key":"ref_51","doi-asserted-by":"crossref","unstructured":"Bodla, N., Singh, B., Chellappa, R., and Davis, L.S. (2017, January 22\u201329). Soft-NMS\u2013improving object detection with one line of code. Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy.","DOI":"10.1109\/ICCV.2017.593"}],"container-title":["Remote Sensing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/15\/3761\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T00:04:46Z","timestamp":1760141086000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/2072-4292\/14\/15\/3761"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,8,5]]},"references-count":51,"journal-issue":{"issue":"15","published-online":{"date-parts":[[2022,8]]}},"alternative-id":["rs14153761"],"URL":"https:\/\/doi.org\/10.3390\/rs14153761","relation":{},"ISSN":["2072-4292"],"issn-type":[{"value":"2072-4292","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,8,5]]}}}