{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,27]],"date-time":"2025-10-27T10:42:31Z","timestamp":1761561751922,"version":"3.41.2"},"reference-count":25,"publisher":"Emerald","issue":"3","license":[{"start":{"date-parts":[[2013,4,26]],"date-time":"2013-04-26T00:00:00Z","timestamp":1366934400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.emerald.com\/insight\/site-policies"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"published-print":{"date-parts":[[2013,4,26]]},"abstract":"<jats:sec><jats:title content-type=\"abstract-heading\">Purpose<\/jats:title><jats:p>Target tracking systems are generally computationally intensive and require expensive and power\u2010hungry visual sensors. On the other hand, the existing target tracking control approaches fail to track the target swiftly and accurately when the mobile robot moves in the diversified manoeuvre modes. The purpose of this paper is to propose a novel target tracking control method with a low cost embedded vision system to achieve high accuracy and speediness of target tracking control, regardless of the type of manoeuvre modes.<\/jats:p><\/jats:sec><jats:sec><jats:title content-type=\"abstract-heading\">Design\/methodology\/approach<\/jats:title><jats:p>The pan\/tilt angle differences are transformed from the tracking error between the image centre and the coordinates of the target centroid returned by the CMUcam3; the corresponding pan\/tilt angle variation rates are calculated based on the manoeuvre control. All of them are fed to the controller. Then the controller generates appropriate control signals to fit the changing speed of target centroid and compensate for the tracking error. 
The experiments are designed in a way that the CMUcam3 keeps the target centre coincident with the image centre when the mobile robot moves in the diversified manoeuvre modes.<\/jats:p><\/jats:sec><jats:sec><jats:title content-type=\"abstract-heading\">Findings<\/jats:title><jats:p>Regardless of the type of manoeuvre modes, the controller responds to the tracking error instantly and actuates the pan\/tilt with suitable position and speed commands, and the target centroid remains in the bounding box during the entire movement.<\/jats:p><\/jats:sec><jats:sec><jats:title content-type=\"abstract-heading\">Originality\/value<\/jats:title><jats:p>The proposed target tracking control takes the correlation between the robot manoeuvre modes and the target tracking control into account, and is particularly suitable for target tracking tasks in planetary exploration, surveillance and military applications.<\/jats:p><\/jats:sec>","DOI":"10.1108\/01439911311309979","type":"journal-article","created":{"date-parts":[[2013,4,22]],"date-time":"2013-04-22T11:22:51Z","timestamp":1366629771000},"page":"275-287","source":"Crossref","is-referenced-by-count":10,"title":["Target tracking control of mobile robot in diversified manoeuvre modes with a low cost embedded vision system"],"prefix":"10.1108","volume":"40","author":[{"given":"He","family":"Xu","sequence":"first","affiliation":[]},{"given":"Yi\u2010ping","family":"Shen","sequence":"additional","affiliation":[]}],"member":"140","reference":[{"key":"key2022031020472831500_b1","doi-asserted-by":"crossref","unstructured":"Akbar, M.A. and Qadir, A. (2012), \u201cRobotic vision in game theme\u201d, Procedia Engineering, Vol. 41, pp. 932\u2010937.","DOI":"10.1016\/j.proeng.2012.07.265"},{"key":"key2022031020472831500_b2","doi-asserted-by":"crossref","unstructured":"Bikman, J.D., Meiswinkel, T.W. and Conrad, J.M. 
(2009), \u201cA vehicle implementation of a color following system using the CMUcam3\u201d, Proceedings of the IEEE International Conference on Southeastcon, Atlanta, GA, pp. 30\u201033.","DOI":"10.1109\/SECON.2009.5174044"},{"key":"key2022031020472831500_b3","unstructured":"Carnegie Mellon University (2003), CMUcam2 Vision Sensor User Guide, available at: www.cs.cmu.edu\/\u223ccmucam2 (accessed 16 November 2012)."},{"key":"key2022031020472831500_b4","doi-asserted-by":"crossref","unstructured":"Chen, S.Y., Li, Y.F. and Kwok, N.M. (2011), \u201cActive vision in robotic systems: a survey of recent developments\u201d, The International Journal of Robotics Research, Vol. 30 No. 11, pp. 1343\u20101377.","DOI":"10.1177\/0278364911410755"},{"key":"key2022031020472831500_b5","doi-asserted-by":"crossref","unstructured":"De Cubber, G., Berrabah, S.A. and Sahli, H. (2004), \u201cColor\u2010based visual servoing under varying illumination conditions\u201d, Robotics and Autonomous Systems, Vol. 47 No. 4, pp. 225\u2010249.","DOI":"10.1016\/j.robot.2004.03.015"},{"key":"key2022031020472831500_b6","doi-asserted-by":"crossref","unstructured":"Di Paola, D., Milella, A., Cicirelli, G. and Distante, A. (2010), \u201cAn autonomous mobile robotic system for surveillance of indoor environments\u201d, International Journal of Advanced Robotic Systems, Vol. 7 No. 1, pp. 19\u201026.","DOI":"10.5772\/7254"},{"key":"key2022031020472831500_b7","doi-asserted-by":"crossref","unstructured":"Frintrop, S. and Jensfelt, P. (2008), \u201cAttentional landmarks and active gaze control for visual SLAM\u201d, IEEE Transactions on Robotics, Vol. 24 No. 5, pp. 1054\u20101065.","DOI":"10.1109\/TRO.2008.2004977"},{"key":"key2022031020472831500_b8","doi-asserted-by":"crossref","unstructured":"Germa, T., Lerasle, F., Ouadah, N. and Cadenat, V. (2010), \u201cVision and RFID data fusion for tracking people in crowds by a mobile robot\u201d, Computer Vision and Image Understanding, Vol. 114 No. 6, pp. 
641\u2010651.","DOI":"10.1016\/j.cviu.2010.01.008"},{"key":"key2022031020472831500_b9","doi-asserted-by":"crossref","unstructured":"Howard, A., Parker, L.E. and Sukhatme, G.S. (2006), \u201cExperiments with a large heterogeneous mobile robot team: exploration, mapping, deployment and detection\u201d, The International Journal of Robotics Research, Vol. 25 Nos 5\/6, pp. 431\u2010447.","DOI":"10.1177\/0278364906065378"},{"key":"key2022031020472831500_b10","doi-asserted-by":"crossref","unstructured":"Ishigami, G., Miwa, A., Nagatani, K. and Yoshida, K. (2007), \u201cTerramechanics\u2010based model for steering maneuver of planetary exploration rovers on loose soil\u201d, Journal of Field Robotics, Vol. 24 No. 3, pp. 233\u2010250.","DOI":"10.1002\/rob.20187"},{"key":"key2022031020472831500_b11","doi-asserted-by":"crossref","unstructured":"Kramer, J. and Scheutz, M. (2007), \u201cDevelopment environments for autonomous mobile robots: a survey\u201d, Autonomous Robots, Vol. 22 No. 2, pp. 101\u2010132.","DOI":"10.1007\/s10514-006-9013-8"},{"key":"key2022031020472831500_b12","unstructured":"Leonreal1974 (2010), \u201cCMUcam3 robot vision (tracking color)\u201d, available at: http:\/\/www.youtube.com\/watch?v=glrd_uqBMlQ (accessed 16 November 2012).."},{"key":"key2022031020472831500_b13","doi-asserted-by":"crossref","unstructured":"L\u00f3p\u2010Nic, G., Guerrero, J.J. and Sag\u00fc\u00e9s, C. (2010), \u201cVisual control of vehicles using two\u2010view geometry\u201d, Mechatronics, Vol. 20 No. 2, pp. 315\u2010325.","DOI":"10.1016\/j.mechatronics.2010.01.005"},{"key":"key2022031020472831500_b14","doi-asserted-by":"crossref","unstructured":"Magrini, M., Moroni, D., Nastasi, C., Pagano, P., Petracca, M., Pieri, G., Salvadori, C. and Salvetti, O. (2011), \u201cVisual sensor networks for infomobility\u201d, Pattern Recognition and Image Analysis, Vol. 21 No. 1, pp. 
20\u201029.","DOI":"10.1134\/S1054661811010093"},{"key":"key2022031020472831500_b15","doi-asserted-by":"crossref","unstructured":"Mariottini, G.L., Oriolo, G. and Prattichizzo, D. (2007), \u201cImage\u2010based visual servoing for nonholonomic mobile robots using epipolar geometry\u201d, IEEE Transactions on Robotics, Vol. 23 No. 1, pp. 87\u2010100.","DOI":"10.1109\/TRO.2006.886842"},{"key":"key2022031020472831500_b16","doi-asserted-by":"crossref","unstructured":"Motai, Y., Jha, S.K. and Kruse, D. (2012), \u201cHuman tracking from a mobile agent: optical flow and Kalman filter arbitration\u201d, Signal Processing: Image Communication, Vol. 27 No. 1, pp. 83\u201095.","DOI":"10.1016\/j.image.2011.06.005"},{"key":"key2022031020472831500_b17","doi-asserted-by":"crossref","unstructured":"M\u00fcller, G. and Conradt, J. (2012), \u201cSelf\u2010calibrating marker tracking in 3D with event\u2010based vision sensors\u201d, Artificial Neural Networks and Machine Learning \u2013 ICANN 2012, pp. 313\u2010321.","DOI":"10.1007\/978-3-642-33269-2_40"},{"key":"key2022031020472831500_b18","unstructured":"Nesnas, I.A., Maimone, M.W. and Das, H. (2000), \u201cRover maneuvering for autonomous vision\u2010based dexterous manipulation\u201d, Proceedings of the IEEE International Conference on Robotics and Automation, San Francisco, CA, pp. 2296\u20102301."},{"key":"key2022031020472831500_b19","unstructured":"Punpaisarn, S. and Sarawut, S. (2008), \u201cSUT\u2010CARG car\u2010like robots: their electronics and control architecture\u201d, WSEAS Transactions on Circuits and Systems, Vol. 7 No. 6, pp. 579\u2010589."},{"key":"key2022031020472831500_b20","unstructured":"Rodi\u0107, A., Addi, K. and Jezdimirovi\u0107, M. (2010), \u201cSensor\u2010based intelligent navigation and control of autonomous mobile robots for advanced terrain missions\u201d, Scientific Technical Review, Vol. 60 No. 2, pp. 7\u201015."},{"key":"key2022031020472831500_b21","unstructured":"Rowe, A., Goode, A., Goel, D. 
and Nourbakhsh, I. (2007), \u201cCMUcam3: an open programmable embedded vision sensor\u201d, Technical Report RI\u2010TR\u201007\u201013, Robotics Institute, Carnegie Mellon University, Pittsburgh, PA."},{"key":"key2022031020472831500_b22","unstructured":"Siegwart, R., Nourbakhsh, I.R. and Scaramuzza, D. (2011), Introduction to Autonomous Mobile Robots, 2nd ed., The MIT Press, Cambridge, MA."},{"key":"key2022031020472831500_b23","doi-asserted-by":"crossref","unstructured":"Sj\u00f6, K., L\u00f3pez, D.G., Paul, C., Jensfelt, P. and Kragic, D. (2009), \u201cObject search and localization for an indoor mobile robot\u201d, Journal of Computing and Information Technology, Vol. 17 No. 1, pp. 67\u201080.","DOI":"10.2498\/cit.1001182"},{"key":"key2022031020472831500_b24","doi-asserted-by":"crossref","unstructured":"Tavli, B., Bicakci, K., Zilan, R. and Barcelo\u2010Ordinas, J.M. (2012), \u201cA survey of visual sensor network platforms\u201d, Multimedia Tools and Applications, Vol. 6 No. 3, pp. 1\u201038.","DOI":"10.1007\/s11042-011-0840-z"},{"key":"key2022031020472831500_b25","doi-asserted-by":"crossref","unstructured":"Xu, H., Zhang, Z.Y., Alipour, K., Xue, K. and Gao, X.Z. (2011), \u201cPrototypes selection by multi\u2010objective optimal design: application to a reconfigurable robot in sandy terrain\u201d, Industrial Robot: An International Journal, Vol. 38 No. 6, pp. 
599\u2010613.","DOI":"10.1108\/01439911111179110"}],"container-title":["Industrial Robot: An International Journal"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/www.emeraldinsight.com\/doi\/full-xml\/10.1108\/01439911311309979","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/01439911311309979\/full\/xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/01439911311309979\/full\/html","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,24]],"date-time":"2025-07-24T23:51:03Z","timestamp":1753401063000},"score":1,"resource":{"primary":{"URL":"http:\/\/www.emerald.com\/ir\/article\/40\/3\/275-287\/187284"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2013,4,26]]},"references-count":25,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2013,4,26]]}},"alternative-id":["10.1108\/01439911311309979"],"URL":"https:\/\/doi.org\/10.1108\/01439911311309979","relation":{},"ISSN":["0143-991X"],"issn-type":[{"type":"print","value":"0143-991X"}],"subject":[],"published":{"date-parts":[[2013,4,26]]}}}