{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,3,17]],"date-time":"2026-03-17T19:29:37Z","timestamp":1773775777342,"version":"3.50.1"},"reference-count":35,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2020,10,6]],"date-time":"2020-10-06T00:00:00Z","timestamp":1601942400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2020,10,6]],"date-time":"2020-10-06T00:00:00Z","timestamp":1601942400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"south ural state university"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Big Data"],"published-print":{"date-parts":[[2020,12]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>This study deals with the problem of obtaining quality real-time data on road traffic parameters based on static street video surveillance camera data. The existing road traffic monitoring solutions are based on traffic cameras located directly above the carriageways, which allows one to obtain only fragmentary data on the speed and movement patterns of vehicles. The purpose of the study is to develop a system for the high-quality and complete collection of real-time data, such as traffic flow intensity, driving directions, and average vehicle speed. At the same time, the data is collected within the entire functional area of intersections and adjacent road sections that fall within the field of view of the street video surveillance camera. Our solution is based on the YOLOv3 neural network architecture and the open-source SORT tracker. To train the neural network, we marked 6000 images and performed augmentation, which allowed us to form a dataset of 4.3 million vehicles. 
The basic performance of YOLO was improved using an additional mask branch and by optimizing the anchor shapes. To determine the vehicle speed, we used a perspective transformation of coordinates from the original image to geographical coordinates. Testing of the system at six intersections, both at night and in the daytime, showed an absolute vehicle counting accuracy of no less than 92%. The error in determining the vehicle speed by the projection method, taking into account the camera calibration, did not exceed 1.5\u00a0km\/h.<\/jats:p>","DOI":"10.1186\/s40537-020-00358-x","type":"journal-article","created":{"date-parts":[[2020,10,6]],"date-time":"2020-10-06T16:02:52Z","timestamp":1602000172000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":56,"title":["Real-time monitoring of traffic parameters"],"prefix":"10.1186","volume":"7","author":[{"given":"Kirill","family":"Khazukov","sequence":"first","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-1143-2031","authenticated-orcid":false,"given":"Vladimir","family":"Shepelev","sequence":"additional","affiliation":[]},{"given":"Tatiana","family":"Karpeta","sequence":"additional","affiliation":[]},{"given":"Salavat","family":"Shabiev","sequence":"additional","affiliation":[]},{"given":"Ivan","family":"Slobodin","sequence":"additional","affiliation":[]},{"given":"Irakli","family":"Charbadze","sequence":"additional","affiliation":[]},{"given":"Irina","family":"Alferova","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,10,6]]},"reference":[{"issue":"4","key":"358_CR1","doi-asserted-by":"publisher","first-page":"565","DOI":"10.5194\/isprs-archives-XLII-4-499-2018","volume":"42","author":"MV Peppa","year":"2018","unstructured":"Peppa MV, Bell D, Komar T, Xiao W. Urban traffic flow analysis based on deep learning car detection from CCTV image series. 
Int Arch Photogramm Remote Sens Spat Inf Sci. 2018;42(4):565\u201372. https:\/\/doi.org\/10.5194\/isprs-archives-XLII-4-499-2018.","journal-title":"Int Arch Photogramm Remote Sens Spat Inf Sci."},{"key":"358_CR2","doi-asserted-by":"publisher","DOI":"10.1186\/s40537-019-0234-z","author":"A Fedorov","year":"2019","unstructured":"Fedorov A, Nikolskaia K, Ivanov S, Shepelev V, Minbaleev A. Traffic flow estimation with data from a video surveillance camera. J Big Data. 2019. https:\/\/doi.org\/10.1186\/s40537-019-0234-z.","journal-title":"J Big Data."},{"key":"358_CR3","unstructured":"Li C, Dobler G, Feng X, Wang Y. TrackNet: simultaneous object detection and tracking and its application in traffic video analysis. 2019; pp. 1\u201310. arxiv.org\/pdf\/1902.01466.pdf."},{"issue":"3","key":"358_CR4","doi-asserted-by":"publisher","first-page":"594","DOI":"10.3390\/s19030594","volume":"19","author":"F Zhang","year":"2019","unstructured":"Zhang F, Li C, Yang F. Vehicle detection in urban traffic surveillance images based on convolutional neural networks with feature concatenation. Sensors. 2019;19(3):594. https:\/\/doi.org\/10.3390\/s19030594.","journal-title":"Sensors."},{"key":"358_CR5","doi-asserted-by":"publisher","unstructured":"Zhang S, Wu G, Costeira JP, Moura JM. FCN-rLSTM: Deep spatio-temporal neural networks for vehicle counting in city cameras. In: Proceedings of the IEEE international conference on computer vision. 2017. https:\/\/doi.org\/10.1109\/iccv.2017.396.","DOI":"10.1109\/iccv.2017.396"},{"issue":"5","key":"358_CR6","doi-asserted-by":"publisher","first-page":"1533","DOI":"10.1007\/s00500-017-2942-7","volume":"22","author":"MM Rathore","year":"2018","unstructured":"Rathore MM, Son H, Ahmad A, Paul A. Real-time video processing for traffic control in smart city using Hadoop ecosystem with GPUs. Soft Comput. 2018;22(5):1533\u201344. 
https:\/\/doi.org\/10.1007\/s00500-017-2942-7.","journal-title":"Soft Comput"},{"key":"358_CR7","doi-asserted-by":"publisher","unstructured":"Sun X, Ding J, Dalla Chiara G, Cheah L, Cheung NM. A generic framework for monitoring local freight traffic movements using computer vision-based techniques. In: 5th IEEE international conference on models and technologies for intelligent transportation systems (MT-ITS). 2017. p. 63\u20138. https:\/\/doi.org\/10.1109\/mtits.2017.8005592.","DOI":"10.1109\/mtits.2017.8005592"},{"issue":"6","key":"358_CR8","doi-asserted-by":"publisher","first-page":"1137","DOI":"10.1109\/TPAMI.2016.2577031","volume":"39","author":"S Ren","year":"2017","unstructured":"Ren S, He K, Girshick R, Sun J. Faster R-CNN: towards real-time object detection with region proposal networks. IEEE Trans Pattern Anal Mach Intell. 2017;39(6):1137\u201349. https:\/\/doi.org\/10.1109\/TPAMI.2016.2577031.","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"358_CR9","doi-asserted-by":"publisher","unstructured":"Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: unified, real-time object detection. In: 2016 IEEE conference on computer vision and pattern recognition (CVPR). 2016. https:\/\/doi.org\/10.1109\/cvpr.2016.91.","DOI":"10.1109\/cvpr.2016.91"},{"key":"358_CR10","doi-asserted-by":"publisher","first-page":"21","DOI":"10.1007\/978-3-319-46448-0_2","volume":"9905","author":"W Liu","year":"2016","unstructured":"Liu W, Anguelov D, Erhan D, Szegedy C, Reed S, Fu CY, Berg AC. SSD: single shot multibox detector. Lect Notes Comput Sci. 2016;9905:21\u201337. https:\/\/doi.org\/10.1007\/978-3-319-46448-0_2.","journal-title":"Lect Notes Comput Sci."},{"issue":"3","key":"358_CR11","doi-asserted-by":"publisher","first-page":"1010","DOI":"10.1109\/TITS.2018.2838132","volume":"20","author":"X Hu","year":"2019","unstructured":"Hu X, Xu X, Xiao Y, Chen H, He S, Qin J, Heng PA. 
SINet: a scale-insensitive convolutional neural network for fast vehicle detection. IEEE Trans Intell Transp Syst. 2019;20(3):1010. https:\/\/doi.org\/10.1109\/TITS.2018.2838132.","journal-title":"IEEE Trans Intell Transp Syst"},{"key":"358_CR12","doi-asserted-by":"publisher","unstructured":"Jung H, Choi MK, Jung J, Lee JH, Kwon S, Jung WY. ResNet-based vehicle classification and localization in traffic surveillance systems. In: 2017 IEEE conference on computer vision and pattern recognition workshops (CVPRW). 2017. 934\u201340. https:\/\/doi.org\/10.1109\/cvprw.2017.129.","DOI":"10.1109\/cvprw.2017.129"},{"key":"358_CR13","doi-asserted-by":"publisher","first-page":"564","DOI":"10.1016\/j.procs.2018.04.281","volume":"131","author":"S Li","year":"2018","unstructured":"Li S, Lin J, Li G, Bai T, Wang H, Pang Y. Vehicle type detection based on deep learning in traffic scene. Procedia Comput Sci. 2018;131:564\u201372. https:\/\/doi.org\/10.1016\/j.procs.2018.04.281.","journal-title":"Procedia Comput Sci."},{"key":"358_CR14","doi-asserted-by":"publisher","unstructured":"Sommer L, Acatay O, Schumann A, Beyerer J. Ensemble of two-stage regression based detectors for accurate vehicle detection in traffic surveillance data. 2019. p. 1\u20136. https:\/\/doi.org\/10.1109\/avss.2018.8639149.","DOI":"10.1109\/avss.2018.8639149"},{"key":"358_CR15","doi-asserted-by":"publisher","unstructured":"Wang L, Lu Y, Wang H, Zheng Y, Ye H, Xue X. Evolving boxes for fast vehicle detection. In: 2017 IEEE international conference on multimedia and Expo (IC-ME). 2017. p. 1135\u201340. https:\/\/doi.org\/10.1109\/icme.2017.8019461.","DOI":"10.1109\/icme.2017.8019461"},{"key":"358_CR16","doi-asserted-by":"publisher","unstructured":"Zhu F, Lu Y, Ying N, Giakos G. Fast vehicle detection based on evolving convolutional neural network. In: 2017 IEEE international conference on imaging systems and techniques (IST). 2017. p. 1\u20134. 
https:\/\/doi.org\/10.1109\/ist.2017.8261505.","DOI":"10.1109\/ist.2017.8261505"},{"key":"358_CR17","doi-asserted-by":"publisher","unstructured":"Anisimov D, Khanova T. Towards lightweight convolutional neural networks for object detection. In: 2017 14th IEEE international conference on advanced video and signal based surveillance (AVSS). 2017; 1\u20138. https:\/\/doi.org\/10.1109\/avss.2017.8078500.","DOI":"10.1109\/avss.2017.8078500"},{"key":"358_CR18","doi-asserted-by":"crossref","unstructured":"Li S. 3D-DETNet: a single stage video-based vehicle detector. 2018. arxiv.org\/ftp\/arxiv\/papers\/1801\/1801.01769.pdf.","DOI":"10.1117\/12.2502012"},{"key":"358_CR19","doi-asserted-by":"publisher","unstructured":"Luo W, Yang B, Urtasun R. Fast and furious: real time end-to-end 3D detection, tracking and motion forecasting with a single convolutional net. In: 2018 IEEE\/CVF conference on computer vision and pattern recognition. 2018; 3569\u201377. https:\/\/doi.org\/10.1109\/cvpr.2018.00376.","DOI":"10.1109\/cvpr.2018.00376"},{"issue":"3","key":"358_CR20","doi-asserted-by":"publisher","first-page":"319","DOI":"10.1007\/s12200-015-0453-7","volume":"8","author":"Y Wu","year":"2015","unstructured":"Wu Y, Jiang S, Xu Z, Zhu S, Cao D. Lens distortion correction based on one chessboard pattern image. Front Optoelectron. 2015;8(3):319\u201328. https:\/\/doi.org\/10.1007\/s12200-015-0453-7.","journal-title":"Front Optoelectron"},{"key":"358_CR21","unstructured":"Redmon, J., Farhadi, A. YOLOv3: An Incremental Improvement. 2018. arxiv.org\/pdf\/1804.02767.pdf"},{"key":"358_CR22","doi-asserted-by":"crossref","unstructured":"Lin T-Y, Goyal P, Girshick R, He K, Dollar P. Focal loss for dense object detection. 2017. arxiv.org\/pdf\/1708.02002.pdf","DOI":"10.1109\/ICCV.2017.324"},{"key":"358_CR23","doi-asserted-by":"publisher","unstructured":"He K, Gkioxari G, Dollar P, Girshick R. Mask R-CNN. In: 2017 IEEE international conference on computer vision (ICCV). vol. 
2017: 2017; 2980\u20138. https:\/\/doi.org\/10.1109\/iccv.2017.322.","DOI":"10.1109\/iccv.2017.322"},{"issue":"3","key":"358_CR24","doi-asserted-by":"publisher","first-page":"824","DOI":"10.35940\/ijrte.B1154.0782S319","volume":"8","author":"KG Shreyas Dixit","year":"2019","unstructured":"Shreyas Dixit KG, Chadaga MG, Savalgimath SS, Ragavendra Rakshith G, Naveen Kumar MR. Evaluation and evolution of object detection techniques YOLO and R-CNN. Int J Recent Technol Eng. 2019;8(3):824\u20139. https:\/\/doi.org\/10.35940\/ijrte.B1154.0782S319.","journal-title":"Int J Recent Technol Eng"},{"key":"358_CR25","doi-asserted-by":"crossref","unstructured":"He K, Zhang X, Ren S, Sun J. Deep residual learning for image recognition. In: Proceedings of the IEEE conference on computer vision and pattern recognition. 2016. 770\u20138. https:\/\/arxiv.org\/pdf\/1512.03385.pdf.","DOI":"10.1109\/CVPR.2016.90"},{"key":"358_CR26","doi-asserted-by":"publisher","first-page":"238","DOI":"10.1016\/j.compeleceng.2019.04.001","volume":"76","author":"S Javadi","year":"2019","unstructured":"Javadi S, Dahl M, Pettersson MI. Vehicle speed measurement model for video-based systems. Comput Electr Eng. 2019;76:238\u201348. https:\/\/doi.org\/10.1016\/j.compeleceng.2019.04.001.","journal-title":"Comput Electr Eng"},{"issue":"17","key":"358_CR27","first-page":"2555","volume":"5","author":"A Gholami","year":"2010","unstructured":"Gholami A, Dehghani A, Karim M. Vehicle speed detection in video image sequences using CVS method. Int J Phy Sci. 2010;5(17):2555\u201363.","journal-title":"Int J Phy Sci"},{"issue":"06","key":"358_CR28","doi-asserted-by":"publisher","first-page":"1000","DOI":"10.1109\/tla.2019.8896823","volume":"17","author":"VB de Barth O","year":"2019","unstructured":"de Barth O VB, Oliveira R, de Oliveira MA, Nascimento VE. Vehicle speed monitoring using convolutional neural networks. IEEE Latin Am Trans. 2019;17(06):1000\u20138. 
https:\/\/doi.org\/10.1109\/tla.2019.8896823.","journal-title":"IEEE Latin Am Trans."},{"issue":"1","key":"358_CR29","doi-asserted-by":"publisher","first-page":"289","DOI":"10.1016\/j.ijleo.2013.06.036","volume":"125","author":"J Lan","year":"2014","unstructured":"Lan J, Li J, Hu G, Ran B, Wang L. Vehicle speed measurement based on gray constraint optical flow algorithm. Optik Int J Light Elect Optics. 2014;125(1):289\u201395. https:\/\/doi.org\/10.1016\/j.ijleo.2013.06.036.","journal-title":"Optik Int J Light Elect Optics."},{"key":"358_CR30","doi-asserted-by":"publisher","unstructured":"Bewley A, Ge Z, Ott L, Ramos F, Upcroft B. Simple online and realtime tracking. In: 2016 IEEE international conference on image processing (ICIP). 2016: 3464\u20138. https:\/\/doi.org\/10.1109\/icip.2016.7533003.","DOI":"10.1109\/icip.2016.7533003"},{"key":"358_CR31","unstructured":"Video observation. https:\/\/cams.is74.ru\/live. Accessed 20 May 2020."},{"key":"358_CR32","doi-asserted-by":"publisher","first-page":"35","DOI":"10.1115\/1.3662552","volume":"82","author":"R Kalman","year":"1960","unstructured":"Kalman R. A new approach to linear filtering and prediction problems. J Basic Eng. 1960;82:35\u201345. https:\/\/doi.org\/10.1115\/1.3662552.","journal-title":"J Basic Eng"},{"key":"358_CR33","doi-asserted-by":"publisher","first-page":"83","DOI":"10.1002\/nav.3800020109","volume":"2","author":"HW Kuhn","year":"1955","unstructured":"Kuhn HW. The Hungarian method for the assignment problem. Naval Res Log Quart. 1955;2:83\u201397. https:\/\/doi.org\/10.1002\/nav.3800020109.","journal-title":"Naval Res Log Quart."},{"key":"358_CR34","doi-asserted-by":"publisher","unstructured":"Bewley A, Ge Z, Ott L, Ramos F, Upcroft B. Simple online and realtime tracking. In: 2016 IEEE International Conference on Image Processing (ICIP). 2016. 3464\u20138. 
https:\/\/doi.org\/10.1109\/icip.2016.7533003.","DOI":"10.1109\/icip.2016.7533003"},{"key":"358_CR35","doi-asserted-by":"publisher","unstructured":"Wu W, Wu L, Li J, Wang S, Zheng G, He X. RetinaNet-based visual inspection of flexible materials. In: 2019 IEEE International Conference on Smart Internet of Things (SmartIoT). 2019; 432\u20135. https:\/\/doi.org\/10.1109\/smartiot.2019.00077.","DOI":"10.1109\/smartiot.2019.00077"}],"container-title":["Journal of Big Data"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40537-020-00358-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s40537-020-00358-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s40537-020-00358-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2021,10,6]],"date-time":"2021-10-06T03:51:26Z","timestamp":1633492286000},"score":1,"resource":{"primary":{"URL":"https:\/\/journalofbigdata.springeropen.com\/articles\/10.1186\/s40537-020-00358-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,10,6]]},"references-count":35,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2020,12]]}},"alternative-id":["358"],"URL":"https:\/\/doi.org\/10.1186\/s40537-020-00358-x","relation":{"has-preprint":[{"id-type":"doi","id":"10.21203\/rs.3.rs-26976\/v1","asserted-by":"object"},{"id-type":"doi","id":"10.21203\/rs.3.rs-26976\/v2","asserted-by":"object"}]},"ISSN":["2196-1115"],"issn-type":[{"value":"2196-1115","type":"electronic"}],"subject":[],"published":{"date-parts":[[2020,10,6]]},"assertion":[{"value":"7 May 2020","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article 
History"}},{"value":"11 September 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 October 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare that they have no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"84"}}