{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,25]],"date-time":"2026-02-25T18:06:46Z","timestamp":1772042806985,"version":"3.50.1"},"reference-count":29,"publisher":"MDPI AG","issue":"23","license":[{"start":{"date-parts":[[2022,11,22]],"date-time":"2022-11-22T00:00:00Z","timestamp":1669075200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"funder":[{"name":"the Polytechnic Institute of Coimbra within the scope of Regulamento de Apoio \u00e0 Publica\u00e7\u00e3o Cient\u00edfica dos Professores e Investigadores do IPC","award":["Despacho n.\u00ba 12598\/2020"],"award-info":[{"award-number":["Despacho n.\u00ba 12598\/2020"]}]}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Energies"],"abstract":"<jats:p>Counting objects in video images has been an active area of computer vision for decades. For precise counting, it is necessary to detect objects and follow them through consecutive frames. Deep neural networks have allowed great improvements in this area. Nonetheless, this task is still a challenge for edge computing, especially when low-power edge AI devices must be used. The present work describes an application where an edge device is used to run a YOLO network and V-IOU tracker to count people and bicycles in real time. A selective frame-downsampling algorithm is used to allow a larger frame rate when necessary while optimizing memory usage and energy consumption. In the experiments, the system was able to detect and count the objects with 18 counting errors in 525 objects and a mean inference time of 112.82 ms per frame. 
With the selective downsampling algorithm, it was also capable of recovering and reducing memory usage while maintaining its precision.<\/jats:p>","DOI":"10.3390\/en15238816","type":"journal-article","created":{"date-parts":[[2022,11,23]],"date-time":"2022-11-23T03:48:12Z","timestamp":1669175292000},"page":"8816","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":20,"title":["Counting People and Bicycles in Real Time Using YOLO on Jetson Nano"],"prefix":"10.3390","volume":"15","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-7086-4416","authenticated-orcid":false,"given":"Hugo","family":"Gomes","sequence":"first","affiliation":[{"name":"Polytechnic Institute of Coimbra, Coimbra Institute of Engineering, Rua Pedro Nunes\u2014Quinta da Nora, 3030-199 Coimbra, Portugal"},{"name":"Geologic Information Systems, Rua Pero Vaz de Caminha, 99, R\/C, 3030-200 Coimbra, Portugal"}]},{"given":"Nuno","family":"Redinha","sequence":"additional","affiliation":[{"name":"Geologic Information Systems, Rua Pero Vaz de Caminha, 99, R\/C, 3030-200 Coimbra, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-8237-3086","authenticated-orcid":false,"given":"Nuno","family":"Lavado","sequence":"additional","affiliation":[{"name":"Polytechnic Institute of Coimbra, Coimbra Institute of Engineering, Rua Pedro Nunes\u2014Quinta da Nora, 3030-199 Coimbra, Portugal"},{"name":"Research Group on Sustainability Cities and Urban Intelligence (SUScita), Polytechnic Institute of Coimbra, Rua Pedro Nunes\u2014Quinta da Nora, 3030-199 Coimbra, Portugal"}]},{"ORCID":"https:\/\/orcid.org\/0000-0003-4313-7966","authenticated-orcid":false,"given":"Mateus","family":"Mendes","sequence":"additional","affiliation":[{"name":"Polytechnic Institute of Coimbra, Coimbra Institute of Engineering, Rua Pedro Nunes\u2014Quinta da Nora, 3030-199 Coimbra, Portugal"},{"name":"Institute of Systems and Robotics, University of Coimbra, Rua Silvio Lima- Polo II,
3030-290 Coimbra, Portugal"}]}],"member":"1968","published-online":{"date-parts":[[2022,11,22]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"462","DOI":"10.29207\/resti.v4i3.1871","article-title":"A simple vehicle counting system using deep learning with YOLOv3 model","volume":"4","author":"Fachrie","year":"2020","journal-title":"J. Resti (Rekayasa Sist. Dan Teknol. Inf.)"},{"key":"ref_2","unstructured":"Redmon, J., and Farhadi, A. (2018). YOLOv3: An incremental improvement. arXiv."},{"key":"ref_3","doi-asserted-by":"crossref","unstructured":"Bharadhwaj, M., Ramadurai, G., and Ravindran, B. (2022, January 1\u201320). Detecting Vehicles on the Edge: Knowledge Distillation to Improve Performance in Heterogeneous Road Traffic. Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA.","DOI":"10.1109\/CVPRW56347.2022.00360"},{"key":"ref_4","unstructured":"Allen-Zhu, Z., and Li, Y. (2020). Towards understanding ensemble, knowledge distillation and self-distillation in deep learning. arXiv."},{"key":"ref_5","doi-asserted-by":"crossref","unstructured":"Bewley, A., Ge, Z., Ott, L., Ramos, F., and Upcroft, B. (2016, January 25\u201328). Simple online and realtime tracking. Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA.","DOI":"10.1109\/ICIP.2016.7533003"},{"key":"ref_6","unstructured":"OpenDataCam (2022, November 09). An Open Source Tool to Quantify the World (Version 3.0.2). Available online: https:\/\/github.com\/opendatacam\/opendatacam."},{"key":"ref_7","unstructured":"Bochkovskiy, A., Wang, C.Y., and Liao, H.Y.M. (2020). YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv."},{"key":"ref_8","unstructured":"Jiang, Z., Zhao, L., Li, S., and Jia, Y. (2020). Real-time object detection method based on improved YOLOv4-tiny. arXiv."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Bochinski, E., Eiselein, V., and Sikora, T.
(2017, January 19). High-Speed Tracking-by-Detection Without Using Image Information. Proceedings of the International Workshop on Traffic and Street Surveillance for Safety and Security at IEEE AVSS 2017, Lecce, Italy.","DOI":"10.1109\/AVSS.2017.8078516"},{"key":"ref_10","unstructured":"(2022, November 09). Ultralytics. YOLOv5. Available online: https:\/\/github.com\/ultralytics\/yolov5."},{"key":"ref_11","doi-asserted-by":"crossref","unstructured":"Bochinski, E., Senst, T., and Sikora, T. (2018, January 27\u201330). Extending IOU Based Multi-Object Tracking by Visual Information. Proceedings of the IEEE International Conference on Advanced Video and Signals-Based Surveillance, Auckland, New Zealand.","DOI":"10.1109\/AVSS.2018.8639144"},{"key":"ref_12","doi-asserted-by":"crossref","unstructured":"Wojke, N., Bewley, A., and Paulus, D. (2017, January 17\u201320). Simple online and realtime tracking with a deep association metric. Proceedings of the 2017 IEEE International Conference on Image Processing (ICIP), Beijing, China.","DOI":"10.1109\/ICIP.2017.8296962"},{"key":"ref_13","unstructured":"Nvidia (2022, November 09). Nvidia Jetson Nano. Available online: https:\/\/www.nvidia.com\/en-us\/autonomous-machines\/embedded-systems\/jetson-nano\/product-development."},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Lin, T.Y., Maire, M., Belongie, S., Hays, J., Perona, P., Ramanan, D., Doll\u00e1r, P., and Zitnick, C.L. (2014, January 6\u201312). Microsoft coco: Common objects in context. Proceedings of the European Conference on Computer Vision, Zurich, Switzerland.","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"ref_15","first-page":"1137","article-title":"Faster r-cnn: Towards real-time object detection with region proposal networks","volume":"28","author":"Ren","year":"2015","journal-title":"Adv. Neural Inf. Process. 
Syst."},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Liu, W., Anguelov, D., Erhan, D., Szegedy, C., Reed, S., Fu, C.Y., and Berg, A.C. (2016, January 21\u201326). Ssd: Single shot multibox detector. Proceedings of the European Conference on Computer Vision, Honolulu, HI, USA.","DOI":"10.1007\/978-3-319-46448-0_2"},{"key":"ref_17","unstructured":"Vanholder, H. (April, January 29). Efficient inference with tensorrt. Proceedings of the GPU Technology Conference, Edinburgh, UK."},{"key":"ref_18","unstructured":"Developer, N. (2022, November 09). TensorRT Open Source Software. Available online: https:\/\/github.com\/NVIDIA\/TensorRT."},{"key":"ref_19","doi-asserted-by":"crossref","unstructured":"Kumar, S., Sharma, P., and Pal, N. (2021, January 21\u201325). Object tracking and counting in a zone using YOLOv4, DeepSORT and TensorFlow. Proceedings of the 2021 International Conference on Artificial Intelligence and Smart Systems (ICAIS), Coimbatore, India.","DOI":"10.1109\/ICAIS50930.2021.9395971"},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Oltean, G., Florea, C., Orghidan, R., and Oltean, V. (2019, January 23\u201326). Towards real time vehicle counting using yolo-tiny and fast motion estimation. Proceedings of the 2019 IEEE 25th International Symposium for Design and Technology in Electronic Packaging (SIITME), Cluj-Napoca, Romania.","DOI":"10.1109\/SIITME47687.2019.8990708"},{"key":"ref_21","doi-asserted-by":"crossref","unstructured":"Girshick, R., Donahue, J., Darrell, T., and Malik, J. (2014, January 23\u201328). Rich Feature Hierarchies for Accurate Object Detection and Semantic Segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA.","DOI":"10.1109\/CVPR.2014.81"},{"key":"ref_22","unstructured":"Gandhi, R. (2022, November 09). R-CNN, Fast R-CNN, Faster R-CNN, YOLO\u2014Object Detection Algorithms. 
Available online: https:\/\/www.datasciencecentral.com\/r-cnn-fast-r-cnn-faster-r-cnn-yolo-object-detection-algorithms\/."},{"key":"ref_23","unstructured":"Yelisetty, A. (2022, November 09). Understanding Fast R-CNN and Faster R-CNN for Object Detection. Available online: https:\/\/towardsdatascience.com\/understanding-fast-r-cnn-and-faster-r-cnn-for-object-detection-adbb55653d97."},{"key":"ref_24","unstructured":"Targ, S., Almeida, D., and Lyman, K. (2016). Resnet in resnet: Generalizing residual architectures. arXiv."},{"key":"ref_25","doi-asserted-by":"crossref","unstructured":"Deng, J., Dong, W., Socher, R., Li, L.J., Li, K., and Fei-Fei, L. (2009, January 20\u201325). Imagenet: A large-scale hierarchical image database. Proceedings of the 2009 IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA.","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016, January 27\u201330). You only look once: Unified, real-time object detection. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA.","DOI":"10.1109\/CVPR.2016.91"},{"key":"ref_27","unstructured":"Bochinski, E., Eiselein, V., Sikora, T., and Senst, T. (2022, November 09). Python Implementation of the IOU\/V-IOU Tracker. Available online: https:\/\/github.com\/bochinski\/iou-tracker."},{"key":"ref_28","unstructured":"BlueMirrors (2022, November 09). CVU: Computer Vision Utils. Available online: https:\/\/github.com\/BlueMirrors\/cvu."},{"key":"ref_29","doi-asserted-by":"crossref","unstructured":"Wang, C.Y., Bochkovskiy, A., and Liao, H.Y.M. (2022). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. 
arXiv.","DOI":"10.1109\/CVPR52729.2023.00721"}],"container-title":["Energies"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1996-1073\/15\/23\/8816\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T01:24:35Z","timestamp":1760145875000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1996-1073\/15\/23\/8816"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,11,22]]},"references-count":29,"journal-issue":{"issue":"23","published-online":{"date-parts":[[2022,12]]}},"alternative-id":["en15238816"],"URL":"https:\/\/doi.org\/10.3390\/en15238816","relation":{},"ISSN":["1996-1073"],"issn-type":[{"value":"1996-1073","type":"electronic"}],"subject":[],"published":{"date-parts":[[2022,11,22]]}}}