{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,10]],"date-time":"2025-09-10T21:35:26Z","timestamp":1757540126014,"version":"3.41.0"},"reference-count":38,"publisher":"Association for Computing Machinery (ACM)","issue":"1","license":[{"start":{"date-parts":[[2020,1,17]],"date-time":"2020-01-17T00:00:00Z","timestamp":1579219200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"name":"Prime Minister's fellowship for doctoral research"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Sen. Netw."],"published-print":{"date-parts":[[2020,2,29]]},"abstract":"<jats:p>Current smartphone-based navigation applications fail to provide lane-level information due to poor GPS accuracy. Detecting and tracking a vehicle\u2019s lane position on the road assists in lane-level navigation. For instance, it would be important to know whether a vehicle is in the correct lane for safely making a turn, or whether the vehicle\u2019s speed is compliant with a lane-specific speed limit. Recent efforts have used road network information and inertial sensors to estimate lane position. While inertial sensors can detect lane shifts over short windows, they suffer from error accumulation over time. In this article, we present DeepLane, a system that leverages the back camera of a windshield-mounted smartphone to provide an accurate estimate of the vehicle\u2019s current lane. We employ a deep learning-based technique to classify the vehicle\u2019s lane position. DeepLane does not depend on any infrastructure support such as lane markings and works even when there are no lane markings, a characteristic of many roads in developing regions. We perform extensive evaluation of DeepLane on real-world datasets collected in developed and developing regions.
DeepLane can detect a vehicle\u2019s lane position with an accuracy of over 90%, and we have implemented DeepLane as an Android app.<\/jats:p>","DOI":"10.1145\/3358797","type":"journal-article","created":{"date-parts":[[2020,1,17]],"date-time":"2020-01-17T09:50:04Z","timestamp":1579254604000},"page":"1-22","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":16,"title":["Driving Lane Detection on Smartphones using Deep Neural Networks"],"prefix":"10.1145","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-8459-1295","authenticated-orcid":false,"given":"Ravi","family":"Bhandari","sequence":"first","affiliation":[{"name":"Indian Institute of Technology Bombay, Mumbai, Maharashtra, India"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-0921-4828","authenticated-orcid":false,"given":"Akshay Uttama","family":"Nambi","sequence":"additional","affiliation":[{"name":"Microsoft Research, Bengaluru, India"}]},{"given":"Venkata N.","family":"Padmanabhan","sequence":"additional","affiliation":[{"name":"Microsoft Research, Bengaluru, India"}]},{"given":"Bhaskaran","family":"Raman","sequence":"additional","affiliation":[{"name":"Indian Institute of Technology Bombay, Mumbai, Maharashtra, India"}]}],"member":"320","published-online":{"date-parts":[[2020,1,17]]},"reference":[
{"key":"e_1_2_1_1_1","unstructured":"BMW USA. 2018. Active Lane Keeping and Traffic Jam Assistant. Retrieved from https:\/\/www.youtube.com\/watch?v=w24HYJvaCl0."},
{"key":"e_1_2_1_2_1","unstructured":"DC Nation. 2018. Driver turns wrong and gets hit. Retrieved from https:\/\/tinyurl.com\/y9kkytq2."},
{"volume-title":"FARS Encyclopedia: People\u2014Drivers.","author":"National Highway Traffic Safety Administration (NHTSA). 2018.","key":"e_1_2_1_3_1","unstructured":"National Highway Traffic Safety Administration (NHTSA). 2018. FARS Encyclopedia: People\u2014Drivers. Retrieved from https:\/\/www-fars.nhtsa.dot.gov\/People\/PeopleDrivers.aspx."},
{"key":"e_1_2_1_4_1","unstructured":"Fox Van Allen. 2018. Google Maps 3.0 with Lane Assist. Retrieved from https:\/\/www.techlicious.com\/blog\/google-maps-3-0-lane-assist-uber-savable-maps\/."},
{"key":"e_1_2_1_5_1","unstructured":"Stanford Vision Lab. 2018. ImageNet. Retrieved from http:\/\/www.image-net.org\/."},
{"key":"e_1_2_1_6_1","unstructured":"Los Angeles Times. 2018. Lidar costs $75,000 per car. Retrieved from http:\/\/www.latimes.com\/business\/la-fi-hy-ouster-lidar-20171211-htmlstory.html."},
{"key":"e_1_2_1_7_1","unstructured":"Qualcomm Technologies Inc. 2018. Qualcomm Neural Processing SDK for AI. Retrieved from https:\/\/developer.qualcomm.com\/software\/qualcomm-neural-processing-sdk."},
{"key":"e_1_2_1_8_1","volume-title":"Government of India","author":"Road Transport Ministry","year":"2018","unstructured":"Ministry of Road Transport and Highways, Government of India. 2018. Road accidents in India, 2016. Retrieved from https:\/\/tinyurl.com\/y7j86kuh."},
{"key":"e_1_2_1_9_1","unstructured":"Shashank Agarwal. 2018. Tyre Killer. Retrieved from https:\/\/www.youtube.com\/watch?v=L9EZHDYE_e8."},
{"key":"e_1_2_1_10_1","unstructured":"University of Oxford. 2018. Visual Geometry Group Home Page. Retrieved from http:\/\/www.robots.ox.ac.uk\/~vgg\/research\/very_deep\/."},
{"key":"e_1_2_1_11_1","unstructured":"European Global Navigation Satellite Systems Agency. 2018. World\u2019s first dual-frequency GNSS smartphone hits the market. Retrieved from https:\/\/www.gsa.europa.eu\/newsroom\/news\/world-s-first-dual-frequency-gnss-smartphone-hits-market."},
{"key":"e_1_2_1_12_1","unstructured":"Ravi Bhandari. 2018. Wrong-side Driving. Retrieved from https:\/\/youtu.be\/TfwB6kP1ByM."},
{"key":"e_1_2_1_13_1","volume-title":"YOLO: Real-Time Object Detection.","author":"Redmon Joseph Chet","year":"2018","unstructured":"Joseph Chet Redmon. 2018. YOLO: Real-Time Object Detection. Retrieved from https:\/\/tinyurl.com\/m9ml6fx."},
{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2016.2644615"},{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1109\/COMSNETS.2012.6151382"},
{"key":"e_1_2_1_16_1","volume-title":"Proceedings of the ACML.","author":"Can\u00e9vet Olivier","year":"2014","unstructured":"Olivier Can\u00e9vet and Fran\u00e7ois Fleuret. 2014. Efficient sample mining for object detection. In Proceedings of the ACML."},
{"key":"e_1_2_1_17_1","volume-title":"Shin","author":"Chen Dongyao","year":"2015","unstructured":"Dongyao Chen, Kyong-Tak Cho, Sihui Han, Zhizhuo Jin, and Kang G. Shin. 2015. Invisible sensing of vehicle steering with smartphones. In Proceedings of the ACM MobiSys."},
{"key":"e_1_2_1_18_1","volume-title":"Venkat Padmanabhan, and C. V. Jawahar.","author":"Dua Isha","year":"2019","unstructured":"Isha Dua, Akshay Uttama Nambi, Venkat Padmanabhan, and C. V. Jawahar. 2019. AutoRate: How attentive is the driver? In Proceedings of the FG. IEEE."},
{"key":"e_1_2_1_19_1","unstructured":"Google. 2018. Snap to Roads API. Retrieved from https:\/\/developers.google.com\/maps\/documentation\/roads\/snap."},
{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2016.90"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10514-009-9113-3"},
{"key":"e_1_2_1_22_1","volume-title":"Efros","author":"Huh Minyoung","year":"2016","unstructured":"Minyoung Huh, Pulkit Agrawal, and Alexei A. Efros. 2016. What makes ImageNet good for transfer learning? arXiv preprint arXiv:1608.08614 (2016)."},
{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/IASP.2010.5476151"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2007.908582"},
{"key":"e_1_2_1_25_1","volume-title":"Hinton","author":"Krizhevsky Alex","year":"2012","unstructured":"Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Proceedings of the NIPS. 1097--1105."},
{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1093\/ietisy\/e89-d.7.2092"},
{"volume-title":"Proceedings of the ACM SenSys.","author":"Mohan Prashanth","key":"e_1_2_1_27_1","unstructured":"Prashanth Mohan, V. N. Padmanabhan, and R. Ramjee. 2008. Nericell: Rich monitoring of road and traffic conditions using smartphones. In Proceedings of the ACM SenSys."},
{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.5555\/1405647.1405651"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.222"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2014.2321108"},{"key":"e_1_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMI.2016.2528162"},{"key":"e_1_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298594"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/IROS.2004.1389993"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/TITS.2008.2011691"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.imavis.2003.10.003"},
{"key":"e_1_2_1_36_1","volume-title":"Proceedings of the NIPS. 3320--3328","author":"Yosinski Jason","year":"2014","unstructured":"Jason Yosinski, Jeff Clune, Yoshua Bengio, and Hod Lipson. 2014. How transferable are features in deep neural networks? In Proceedings of the NIPS. 3320--3328."},
{"key":"e_1_2_1_37_1","volume-title":"Lorenzo Torresani Mu Lin, and Andrew T. Campbell","author":"You Chuang-Wen","year":"2013","unstructured":"Chuang-Wen You, Nicholas D. Lane, Fanglin Chen, Rui Wang, Zhenyu Chen, Thomas J. Bao, Yuting Cheng, Lorenzo Torresani, Mu Lin, and Andrew T. Campbell. 2013. CarSafe app: Alerting drowsy and distracted drivers using dual cameras on smartphones. In Proceedings of the ACM MobiSys."},
{"key":"e_1_2_1_38_1","volume-title":"Jain","author":"Yu Bin","year":"1997","unstructured":"Bin Yu and Anil K. Jain. 1997. Lane boundary detection using a multiresolution Hough transform. In Proceedings of the ICIP, Vol. 2. IEEE, 748--751."}
],"container-title":["ACM Transactions on Sensor Networks"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3358797","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3358797","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T23:13:26Z","timestamp":1750202006000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3358797"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,1,17]]},"references-count":38,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2020,2,29]]}},"alternative-id":["10.1145\/3358797"],"URL":"https:\/\/doi.org\/10.1145\/3358797","relation":{},"ISSN":["1550-4859","1550-4867"],"issn-type":[{"type":"print","value":"1550-4859"},{"type":"electronic","value":"1550-4867"}],"subject":[],"published":{"date-parts":[[2020,1,17]]},"assertion":[{"value":"2019-03-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2019-08-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2020-01-17","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}