{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,6]],"date-time":"2026-01-06T13:28:55Z","timestamp":1767706135610,"version":"build-2065373602"},"reference-count":26,"publisher":"MDPI AG","issue":"11","license":[{"start":{"date-parts":[[2021,5,31]],"date-time":"2021-05-31T00:00:00Z","timestamp":1622419200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Sensors"],"abstract":"<jats:p>The peg-in-hole task with uncertain object features is a typical case of robotic operation in a real-world unstructured environment. Under the visual occlusion and real-time constraints typical of such tasks, it is nontrivial to realize object perception and operational decision-making autonomously. In this paper, a Bayesian-network-based strategy is presented to seamlessly combine multiple heterogeneous sensing data, as humans do. In the proposed strategy, an interactive exploration method implemented by hybrid Monte Carlo sampling algorithms and particle filtering is designed to identify initial estimates of the features, and the memory adjustment method and the inertial thinking method are introduced to correct the target position and the shape features of the object, respectively. Based on Dempster\u2013Shafer evidence theory (D-S theory), a fusion decision strategy is designed using probabilistic models of forces and positions; it guides the robot motion after each acquisition of the estimated object features and enables the robot to judge whether the desired operation target has been achieved or the feature estimates need to be updated. Meanwhile, a pliability model is introduced into the repeated exploration, planning, and execution steps to reduce both the interaction forces and the number of explorations. The effectiveness of the strategy is validated in simulations and in a physical robot task.<\/jats:p>","DOI":"10.3390\/s21113818","type":"journal-article","created":{"date-parts":[[2021,5,31]],"date-time":"2021-05-31T21:42:06Z","timestamp":1622497326000},"page":"3818","update-policy":"https:\/\/doi.org\/10.3390\/mdpi_crossmark_policy","source":"Crossref","is-referenced-by-count":5,"title":["Multi-Sensor Perception Strategy to Enhance Autonomy of Robotic Operation for Uncertain Peg-in-Hole Task"],"prefix":"10.3390","volume":"21","author":[{"given":"Li","family":"Qin","sequence":"first","affiliation":[{"name":"School of Electrical Engineering, Yanshan University, Qinhuangdao 066012, China"}]},{"given":"Hongyu","family":"Wang","sequence":"additional","affiliation":[{"name":"School of Electrical Engineering, Yanshan University, Qinhuangdao 066012, China"}]},{"given":"Yazhou","family":"Yuan","sequence":"additional","affiliation":[{"name":"School of Electrical Engineering, Yanshan University, Qinhuangdao 066012, China"}]},{"given":"Shufan","family":"Qin","sequence":"additional","affiliation":[{"name":"School of Electrical Engineering, Yanshan University, Qinhuangdao 066012, China"}]}],"member":"1968","published-online":{"date-parts":[[2021,5,31]]},"reference":[{"key":"ref_1","doi-asserted-by":"crossref","first-page":"90","DOI":"10.1109\/MRA.2020.3044914","article-title":"A Tapered Soft Robotic Oropharyngeal Swab for Throat Testing: A New Way to Collect Sputa Samples","volume":"28","author":"Xie","year":"2021","journal-title":"IEEE Robot. Autom. Mag."},{"key":"ref_2","doi-asserted-by":"crossref","first-page":"103651","DOI":"10.1016\/j.robot.2020.103651","article-title":"Skill learning for robotic assembly based on visual perspectives and force sensing","volume":"135","author":"Song","year":"2021","journal-title":"Robot. Auton. Syst."},{"key":"ref_3","doi-asserted-by":"crossref","first-page":"4692","DOI":"10.1109\/TIE.2019.2927186","article-title":"Sensor-Based Control Using an Image Point and Distance Features for Rivet-in-Hole Insertion","volume":"67","author":"Zhu","year":"2020","journal-title":"IEEE Trans. Ind. Electron."},{"key":"ref_4","first-page":"1","article-title":"A Measurement Method for Robot Peg-in-Hole Prealignment Based on Combined Two-Level Visual Sensors","volume":"70","author":"Jiang","year":"2021","journal-title":"IEEE Trans. Instrum. Meas."},{"key":"ref_5","doi-asserted-by":"crossref","first-page":"474","DOI":"10.1016\/j.neucom.2020.10.076","article-title":"Predictive visual control framework of mobile robot for solving occlusion","volume":"423","author":"Zou","year":"2021","journal-title":"Neurocomputing"},{"key":"ref_6","doi-asserted-by":"crossref","unstructured":"Nagahama, K., and Yamazaki, K. (2019, January 3\u20138). Learning from Demonstration Based on a Mechanism to Utilize an Object\u2019s Invisibility. Proceedings of the 2019 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Macao, China.","DOI":"10.1109\/IROS40897.2019.8967917"},{"key":"ref_7","doi-asserted-by":"crossref","unstructured":"Kim, D., Lee, J., Chung, W.-Y., and Lee, J. (2020). Artificial Intelligence-Based Optimal Grasping Control. Sensors, 20.","DOI":"10.3390\/s20216390"},{"key":"ref_8","unstructured":"Okamura, A.M., Amato, N., Asfour, T., Choi, Y.J., Chong, N.Y., Ding, H., Lee, D.H., Lerma, C.C., Li, J.S., and Marchand, E. (2019, January 22\u201326). Determining Object Properties from Tactile Events During Grasp Failure. Proceedings of the IEEE 15th International Conference on Automation Science and Engineering, Vancouver, BC, Canada."},{"key":"ref_9","doi-asserted-by":"crossref","unstructured":"Tian, S., Ebert, F., Jayaraman, D., Mudigonda, M., Finn, C., Calandra, R., and Levine, S. (2019, January 20\u201324). Manipulation by Feel: Touch-Based Control with Deep Predictive Models. Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada.","DOI":"10.1109\/ICRA.2019.8794219"},{"key":"ref_10","doi-asserted-by":"crossref","first-page":"4177","DOI":"10.1109\/LRA.2021.3063925","article-title":"Generation of GelSight Tactile Images for Sim2Real Learning","volume":"6","author":"Gomes","year":"2021","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_11","doi-asserted-by":"crossref","first-page":"6467","DOI":"10.1109\/LRA.2020.3012951","article-title":"End-to-End Tactile Feedback Loop: From Soft Sensor Skin Over Deep GRU-Autoencoders to Tactile Stimulation","volume":"5","author":"Geier","year":"2020","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_12","doi-asserted-by":"crossref","first-page":"eaat8414","DOI":"10.1126\/science.aat8414","article-title":"Trends and challenges in robot manipulation","volume":"364","author":"Billard","year":"2019","journal-title":"Science"},{"key":"ref_13","doi-asserted-by":"crossref","unstructured":"Bekiroglu, Y., Detry, R., and Kragic, D. (2011, January 25\u201330). Learning tactile characterizations of object- and pose-specific grasps. Proceedings of the 2011 IEEE\/RSJ International Conference on Intelligent Robots and Systems, San Francisco, CA, USA.","DOI":"10.1109\/IROS.2011.6048518"},{"key":"ref_14","doi-asserted-by":"crossref","unstructured":"Calandra, R., Owens, A., Jayaraman, D., Lin, J., Yuan, W., Malik, J., Adelson, E., and Levine, S. (2018). More Than a Feeling: Learning to Grasp and Regrasp Using Vision and Touch. IEEE Robot. Autom. Lett.","DOI":"10.1109\/LRA.2018.2852779"},{"key":"ref_15","doi-asserted-by":"crossref","unstructured":"Watkins-Valls, D., Varley, J., and Allen, P. (2019, January 20\u201324). Multi-Modal Geometric Learning for Grasping and Manipulation. Proceedings of the 2019 International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada.","DOI":"10.1109\/ICRA.2019.8794233"},{"key":"ref_16","doi-asserted-by":"crossref","unstructured":"Lv, X., Chen, G., Hu, H., and Lou, Y. (2019, January 6\u20138). A Robotic Charging Scheme for Electric Vehicles Based on Monocular Vision and Force Perception. Proceedings of the 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), Dali, China.","DOI":"10.1109\/ROBIO49542.2019.8961689"},{"key":"ref_17","doi-asserted-by":"crossref","first-page":"14424","DOI":"10.1109\/ACCESS.2020.2966400","article-title":"A Systematic Review on Fusion Techniques and Approaches Used in Applications","volume":"8","author":"Jusoh","year":"2020","journal-title":"IEEE Access"},{"key":"ref_18","doi-asserted-by":"crossref","first-page":"582","DOI":"10.1109\/TRO.2019.2959445","article-title":"Making Sense of Vision and Touch: Learning Multimodal Representations for Contact-Rich Tasks","volume":"36","author":"Lee","year":"2020","journal-title":"IEEE Trans. Robot."},{"key":"ref_19","doi-asserted-by":"crossref","first-page":"231","DOI":"10.1109\/LRA.2020.3038377","article-title":"Bayesian and Neural Inference on LSTM-Based Object Recognition from Tactile and Kinesthetic Information","volume":"6","author":"Pastor","year":"2020","journal-title":"IEEE Robot. Autom. Lett."},{"key":"ref_20","doi-asserted-by":"crossref","unstructured":"Izatt, G., Mirano, G., Adelson, E., and Tedrake, R. (June, January 29). Tracking objects with point clouds from vision and touch. Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore.","DOI":"10.1109\/ICRA.2017.7989460"},{"key":"ref_21","doi-asserted-by":"crossref","first-page":"873","DOI":"10.1109\/TRO.2019.2904461","article-title":"Probabilistic Real-Time User Posture Tracking for Personalized Robot-Assisted Dressing","volume":"35","author":"Zhang","year":"2019","journal-title":"IEEE Trans. Robot."},{"key":"ref_22","doi-asserted-by":"crossref","first-page":"49","DOI":"10.1007\/s10846-020-01303-z","article-title":"Towards Autonomous Robotic Assembly: Using Combined Visual and Tactile Sensing for Adaptive Task Execution","volume":"101","author":"Nottensteiner","year":"2021","journal-title":"J. Intell. Robot. Syst."},{"key":"ref_23","doi-asserted-by":"crossref","unstructured":"Sachtler, A., Nottensteiner, K., Ka\u00dfecker, M., and Albu-Sch\u00e4ffer, A. (2019, January 2\u20136). Combined Visual and Touch-based Sensing for the Autonomous Registration of Objects with Circular Features. Proceedings of the 2019 19th International Conference on Advanced Robotics (ICAR), Belo Horizonte, Brazil.","DOI":"10.1109\/ICAR46387.2019.8981602"},{"key":"ref_24","doi-asserted-by":"crossref","unstructured":"Thomas, U., Molkenstruck, S., Iser, R., and Wahl, F.M. (2007, January 10\u201314). Multi Sensor Fusion in Robot Assembly Using Particle Filters. Proceedings of the 2007 IEEE International Conference on Robotics and Automation, Rome, Italy.","DOI":"10.1109\/ROBOT.2007.364067"},{"key":"ref_25","doi-asserted-by":"crossref","first-page":"716","DOI":"10.1016\/j.cja.2014.04.014","article-title":"Combined and interactive effects of interference fit and preloads on composite joints","volume":"27","author":"Liu","year":"2014","journal-title":"Chin. J. Aeronaut."},{"key":"ref_26","doi-asserted-by":"crossref","unstructured":"Liu, Y.-T., Pal, N.R., Marathe, A.R., Wang, Y.-K., and Lin, C.-T. (2017). Fuzzy Decision-Making Fuser (FDMF) for Integrating Human-Machine Autonomous (HMA) Systems with Adaptive Evidence Sources. Front. Neurosci., 11.","DOI":"10.3389\/fnins.2017.00332"}],"container-title":["Sensors"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/11\/3818\/pdf","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,10,11]],"date-time":"2025-10-11T06:09:36Z","timestamp":1760162976000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.mdpi.com\/1424-8220\/21\/11\/3818"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,5,31]]},"references-count":26,"journal-issue":{"issue":"11","published-online":{"date-parts":[[2021,6]]}},"alternative-id":["s21113818"],"URL":"https:\/\/doi.org\/10.3390\/s21113818","relation":{},"ISSN":["1424-8220"],"issn-type":[{"type":"electronic","value":"1424-8220"}],"subject":[],"published":{"date-parts":[[2021,5,31]]}}}