{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,19]],"date-time":"2026-02-19T15:51:03Z","timestamp":1771516263236,"version":"3.50.1"},"reference-count":39,"publisher":"Cambridge University Press (CUP)","issue":"7","license":[{"start":{"date-parts":[[2024,2,21]],"date-time":"2024-02-21T00:00:00Z","timestamp":1708473600000},"content-version":"unspecified","delay-in-days":0,"URL":"https:\/\/www.cambridge.org\/core\/terms"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["Robotica"],"published-print":{"date-parts":[[2024,7]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>The rise in the number of automated robotic kitchens has accelerated the need for advanced food handling systems, emphasizing food analysis, including ingredient classification, pose recognition, and assembly strategy. Selecting the optimal piece from a pile of similarly shaped food items is a challenge for automated meal assembly systems. To address this, we present a constructive assembling algorithm, introducing a unique approach for food pose detection\u2013Fast Image to Pose Detection (FI2PD), and a closed-loop packing strategy. Powered by a convolutional neural network (CNN) and a pose retrieval model, FI2PD constructs a 6D pose from RGB images alone. The method employs a coarse-to-fine approach, leveraging the CNN to pinpoint object orientation and position, alongside a pose retrieval process for target selection and 6D pose derivation. Our closed-loop packing strategy, aided by the Item Arrangement Verifier, ensures precise arrangement and system robustness. Additionally, we introduce our <jats:italic>FdIngred328<\/jats:italic> dataset of nine food categories, ranging from fake to real foods, together with automatically generated data based on synthetic techniques. 
The performance of our method for object recognition and pose detection has been demonstrated to achieve a success rate of 97.9%. Impressively, the integration of a closed-loop strategy into our meal-assembly process resulted in a notable success rate of 90%, outperforming the results of systems lacking the closed-loop mechanism.<\/jats:p>","DOI":"10.1017\/s0263574724000122","type":"journal-article","created":{"date-parts":[[2024,2,21]],"date-time":"2024-02-21T05:25:48Z","timestamp":1708493148000},"page":"2108-2124","source":"Crossref","is-referenced-by-count":5,"title":["Vision-based food handling system for high-resemblance random food items"],"prefix":"10.1017","volume":"42","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-0731-9340","authenticated-orcid":false,"given":"Yadan","family":"Zeng","sequence":"first","affiliation":[]},{"given":"Yee Seng","family":"Teoh","sequence":"additional","affiliation":[]},{"given":"Guoniu","family":"Zhu","sequence":"additional","affiliation":[]},{"given":"Elvin","family":"Toh","sequence":"additional","affiliation":[]},{"given":"I-Ming","family":"Chen","sequence":"additional","affiliation":[]}],"member":"56","published-online":{"date-parts":[[2024,2,21]]},"reference":[{"key":"S0263574724000122_ref24","doi-asserted-by":"crossref","first-page":"542","DOI":"10.1089\/soro.2019.0140","article-title":"Circular shell gripper for handling food products","volume":"8","author":"Wang","year":"2021","journal-title":"Soft Robot"},{"key":"S0263574724000122_ref25","doi-asserted-by":"crossref","unstructured":"[25] Pavlakos, G. , Zhou, X. , Chan, A. , Derpanis, K. G. and Daniilidis, K. , \u201c6-DoF Object Pose from Semantic Keypoints,\u201d In: 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore (IEEE, 2017) pp. 
2011\u20132018.","DOI":"10.1109\/ICRA.2017.7989233"},{"key":"S0263574724000122_ref39","doi-asserted-by":"crossref","first-page":"103198","DOI":"10.1016\/j.autcon.2020.103198","article-title":"Image augmentation to improve construction resource detection using generative adversarial networks, cut-and-paste, and image transformation techniques","volume":"115","author":"Bang","year":"2020","journal-title":"Automat Constr"},{"key":"S0263574724000122_ref7","doi-asserted-by":"crossref","unstructured":"[7] Periyasamy, A. S. , Schwarz, M. and Behnke, S. , \u201cRobust 6D Object Pose Estimation in Cluttered Scenes Using Semantic Segmentation and Pose Regression Networks,\u201d In: 2018 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain (IEEE, 2018) pp. 6660\u20136666.","DOI":"10.1109\/IROS.2018.8594406"},{"key":"S0263574724000122_ref16","doi-asserted-by":"crossref","first-page":"107430","DOI":"10.1016\/j.compag.2022.107430","article-title":"Fruit pose recognition and directional orderly grasping strategies for tomato harvesting robots","volume":"202","author":"Rong","year":"2022","journal-title":"Comput Electron Agr"},{"key":"S0263574724000122_ref17","doi-asserted-by":"crossref","unstructured":"[17] Costanzo, M. , De Simone, M. , Federico, S. , Natale, C. and Pirozzi, S. , \u201cEnhanced 6d pose estimation for robotic fruit picking,\u201d (2023). arXiv preprint arXiv: 2305.15856.","DOI":"10.1109\/CoDIT58514.2023.10284072"},{"key":"S0263574724000122_ref12","doi-asserted-by":"crossref","unstructured":"[12] He, Y. , Sun, W. , Huang, H. , Liu, J. , Fan, H. and Sun, J. , \u201cPVN3D: A Deep Point-Wise 3D Keypoints Voting Network for 6DoF Pose Estimation,\u201d In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA (IEEE, 2020) pp. 11632\u201311641.","DOI":"10.1109\/CVPR42600.2020.01165"},{"key":"S0263574724000122_ref11","doi-asserted-by":"crossref","unstructured":"[11] Wang, H. 
, Sridhar, S. , Huang, J. , Valentin, J. , Song, S. and Guibas, L. J. , \u201cNormalized Object Coordinate Space for Category-Level 6D Object Pose and Size Estimation,\u201d In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA (IEEE, 2019) pp. 2637\u20132646.","DOI":"10.1109\/CVPR.2019.00275"},{"key":"S0263574724000122_ref28","doi-asserted-by":"crossref","unstructured":"[28] Li, Y. , Wang, G. , Ji, X. , Xiang, Y. and Fox, D. , \u201cDeepIm: Deep Iterative Matching for 6d Pose Estimation,\u201d In: Proceedings of the European Conference on Computer Vision (ECCV), (Springer, 2018) pp. 683\u2013698.","DOI":"10.1007\/978-3-030-01231-1_42"},{"key":"S0263574724000122_ref19","unstructured":"[19] JLS Automation, Pick-and-Place Robots Designed For Agility (2002), https:\/\/www.jlsautomation.com\/talon-packaging-systems."},{"key":"S0263574724000122_ref33","doi-asserted-by":"crossref","unstructured":"[33] Bargoti, S. and Underwood, J. , \u201cDeep fruit detection in orchards,\u201d (2016). arXiv preprint arXiv: 1610.03677.","DOI":"10.1109\/ICRA.2017.7989417"},{"key":"S0263574724000122_ref9","doi-asserted-by":"crossref","unstructured":"[9] Zeng, A. , Yu, K.-T. , Song, S. , Suo, D. , Walker, E. , Rodriguez, A. and Xiao, J. , \u201cMulti-View Self-Supervised Deep Learning for 6D Pose Estimation in the Amazon Picking Challenge,\u201d In: 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore (IEEE, 2017) pp. 1386\u20131383.","DOI":"10.1109\/ICRA.2017.7989165"},{"key":"S0263574724000122_ref36","unstructured":"[36] Radford, A. , Metz, L. and Chintala, S. 
, \u201cUnsupervised representation learning with deep convolutional generative adversarial networks,\u201d (2015), arXiv preprint arXiv: 1511.06434."},{"key":"S0263574724000122_ref1","doi-asserted-by":"crossref","first-page":"1849","DOI":"10.1017\/S0263574721000023","article-title":"Comprehensive review on reaching and grasping of objects in robotics","volume":"39","author":"Marwan","year":"2021","journal-title":"Robotica"},{"key":"S0263574724000122_ref3","doi-asserted-by":"crossref","first-page":"470","DOI":"10.1017\/S0263574722000297","article-title":"Picking out the impurities: Attention-based push-grasping in dense clutter","volume":"41","author":"Lu","year":"2023","journal-title":"Robotica"},{"key":"S0263574724000122_ref38","doi-asserted-by":"crossref","first-page":"1651","DOI":"10.1017\/S0263574722001898","article-title":"Pythagorean-hodograph curves-based trajectory planning for pick-and-place operation of delta robot with prescribed pick and place heights","volume":"41","author":"Su","year":"2023","journal-title":"Robotica"},{"key":"S0263574724000122_ref29","doi-asserted-by":"crossref","unstructured":"[29] Lee, G. G. , Huang, C.-W. , Chen, J.-H. , Chen, S.-Y. and Chen, H.-L. , \u201cAIFood: A Large Scale Food Images Dataset for Ingredient Recognition,\u201d In: TENCON 2019-2019 IEEE Region 10 Conference (TENCON), Kochi, India (IEEE, 2019) pp. 
802\u2013805.","DOI":"10.1109\/TENCON.2019.8929715"},{"key":"S0263574724000122_ref18","doi-asserted-by":"crossref","first-page":"11","DOI":"10.1016\/j.ifset.2018.05.011","article-title":"Towards realizing robotic potential in future intelligent food manufacturing systems","volume":"48","author":"Khan","year":"2018","journal-title":"Innov Food Sci Emerg"},{"key":"S0263574724000122_ref31","doi-asserted-by":"crossref","first-page":"588","DOI":"10.1109\/JBHI.2016.2636441","article-title":"Food recognition: A new dataset, experiments, and results","volume":"21","author":"Ciocca","year":"2017","journal-title":"IEEE J Biomed Health Inform"},{"key":"S0263574724000122_ref37","unstructured":"[37] Bochkovskiy, A. , Wang, C.-Y. and Liao, H.-Y. M. , \u201cYolov4: Optimal speed and accuracy of object detection,\u201d (2020), arXiv preprint arXiv: 2004.10934."},{"key":"S0263574724000122_ref15","doi-asserted-by":"crossref","first-page":"626989","DOI":"10.3389\/frobt.2021.626989","article-title":"Fruit detection and pose estimation for grape cluster\u2013harvesting robot using binocular imagery based on deep neural networks","volume":"8","author":"Yin","year":"2021","journal-title":"Front Robot AI"},{"key":"S0263574724000122_ref27","doi-asserted-by":"crossref","unstructured":"[27] Park, K. , Patten, T. and Vincze, M. , \u201cPix2Pose: Pixel-Wise Coordinate Regression of Objects for 6d Pose Estimation,\u201d In:\u00a0Proceedings of the IEEE\/CVF International Conference on Computer Vision, Seoul, South Korea (IEEE, 2019) pp. 7668\u20137677.","DOI":"10.1109\/ICCV.2019.00776"},{"key":"S0263574724000122_ref2","doi-asserted-by":"crossref","first-page":"789107","DOI":"10.3389\/frobt.2021.789107","article-title":"Challenges and opportunities in robotic food handling: A review","volume":"8","author":"Wang","year":"2022","journal-title":"Front Robo AI"},{"key":"S0263574724000122_ref32","doi-asserted-by":"crossref","unstructured":"[32] G\u00fcng\u00f6r, C. , Baltac\u0131, F. , Erdem, A. 
and Erdem, E. , \u201cTurkish Cuisine: A Benchmark Dataset with Turkish Meals for Food Recognition,\u201d In: 2017 25th Signal Processing and Communications Applications Conference (SIU), Antalya, Turkey (IEEE, 2017) pp. 1\u20134.","DOI":"10.1109\/SIU.2017.7960494"},{"key":"S0263574724000122_ref13","doi-asserted-by":"crossref","first-page":"62151","DOI":"10.1109\/ACCESS.2020.2984556","article-title":"Visual perception and modeling for autonomous apple harvesting","volume":"8","author":"Kang","year":"2020","journal-title":"IEEE Access"},{"key":"S0263574724000122_ref30","doi-asserted-by":"crossref","first-page":"2836","DOI":"10.1109\/TMM.2018.2814339","article-title":"Personalized classifier for food image recognition","volume":"20","author":"Horiguchi","year":"2018","journal-title":"IEEE Trans Multi"},{"key":"S0263574724000122_ref34","doi-asserted-by":"crossref","first-page":"852","DOI":"10.1109\/LRA.2020.2965061","article-title":"Minneapple: A benchmark dataset for apple detection and segmentation","volume":"5","author":"H\u00e4ni","year":"2020","journal-title":"IEEE Robot Auto Lett"},{"key":"S0263574724000122_ref14","doi-asserted-by":"crossref","first-page":"428","DOI":"10.3390\/s19020428","article-title":"Guava detection and pose estimation using a low-cost rgb-d sensor in the field","volume":"19","author":"Lin","year":"2019","journal-title":"Sensors"},{"key":"S0263574724000122_ref20","doi-asserted-by":"crossref","unstructured":"[20] Paul, H. , Qiu, Z. , Wang, Z. , Hirai, S. and Kawamura, S. , \u201cA ROS 2 Based Robotic System to Pick-and-Place Granular Food Materials,\u201d In: 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO), Jinghong, China (IEEE, 2022) pp. 99\u2013104.","DOI":"10.1109\/ROBIO55434.2022.10011782"},{"key":"S0263574724000122_ref26","doi-asserted-by":"crossref","unstructured":"[26] Wu, J. , Zhou, B. , Russell, R. , Kee, V. , Wagner, S. , Hebert, M. , Torralba, A. and Johnson, D. M. 
, \u201cReal-Time Object Pose Estimation with Pose Interpreter Networks,\u201d In: 2018 IEEE\/RSJ International Conference on Intelligent Robots and Systems (IROS), Madrid, Spain (IEEE, 2018) pp. 6798\u20136805.","DOI":"10.1109\/IROS.2018.8593662"},{"key":"S0263574724000122_ref21","doi-asserted-by":"crossref","unstructured":"[21] Takahashi, K. , Ko, W. , Ummadisingu, A. and Maeda, S.-i. , \u201cUncertainty-Aware Self-Supervised Target-Mass Grasping of Granular Foods,\u201d In: 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi\u2018an, China (IEEE, 2021) pp. 2620\u20132626.","DOI":"10.1109\/ICRA48506.2021.9561728"},{"key":"S0263574724000122_ref5","doi-asserted-by":"crossref","first-page":"1778","DOI":"10.1109\/TMECH.2022.3227038","article-title":"A bin-picking benchmark for systematic evaluation of robotic-assisted food handling for line production","volume":"28","author":"Zhu","year":"2022","journal-title":"IEEE\/ASME Trans Mech"},{"key":"S0263574724000122_ref10","doi-asserted-by":"crossref","unstructured":"[10] Xiang, Y. , Schmidt, T. , Narayanan, V. and Fox, D. , \u201cPosecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes,\u201d (2017). arXiv preprint arXiv: 1711.00199.","DOI":"10.15607\/RSS.2018.XIV.019"},{"key":"S0263574724000122_ref23","doi-asserted-by":"crossref","first-page":"935","DOI":"10.20965\/jrm.2021.p0935","article-title":"A soft needle gripper capable of grasping and piercing for handling food materials","volume":"33","author":"Wang","year":"2021","journal-title":"J Robot Mech"},{"key":"S0263574724000122_ref35","doi-asserted-by":"crossref","unstructured":"[35] Ummadisingu, A. , Takahashi, K. and Fukaya, N. , \u201cCluttered Food Grasping with Adaptive Fingers and Synthetic-Data Trained Object Detection,\u201d In: 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA (IEEE, 2022) pp. 
8290\u20138297.","DOI":"10.1109\/ICRA46639.2022.9812448"},{"key":"S0263574724000122_ref8","doi-asserted-by":"crossref","first-page":"1284","DOI":"10.1177\/0278364911401765","article-title":"The moped framework: Object recognition and pose estimation for manipulation","volume":"30","author":"Collet","year":"2011","journal-title":"Int J Rob Res"},{"key":"S0263574724000122_ref22","doi-asserted-by":"crossref","first-page":"3232","DOI":"10.1109\/TMECH.2021.3110277","article-title":"Sensorized reconfigurable soft robotic gripper system for automated food handling","volume":"27","author":"Low","year":"2022","journal-title":"IEEE\/ASME Trans Mech"},{"key":"S0263574724000122_ref4","doi-asserted-by":"crossref","unstructured":"[4] Wang, H. , Sahoo, D. , Liu, C. , Lim, E.-p. and Hoi, S. C. , \u201cLearning Cross-Modal Embeddings with Adversarial Networks for Cooking Recipes and Food Images,\u201d In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA (IEEE, 2019) pp. 11564\u201311573.","DOI":"10.1109\/CVPR.2019.01184"},{"key":"S0263574724000122_ref6","doi-asserted-by":"crossref","unstructured":"[6] Hu, Y. , Fua, P. , Wang, W. and Salzmann, M. , \u201cSingle-Stage 6D Object Pose Estimation,\u201d In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA (IEEE, 2020) pp. 
2927\u20132936.","DOI":"10.1109\/CVPR42600.2020.00300"}],"container-title":["Robotica"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.cambridge.org\/core\/services\/aop-cambridge-core\/content\/view\/S0263574724000122","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,10,10]],"date-time":"2024-10-10T12:56:04Z","timestamp":1728564964000},"score":1,"resource":{"primary":{"URL":"https:\/\/www.cambridge.org\/core\/product\/identifier\/S0263574724000122\/type\/journal_article"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,2,21]]},"references-count":39,"journal-issue":{"issue":"7","published-print":{"date-parts":[[2024,7]]}},"alternative-id":["S0263574724000122"],"URL":"https:\/\/doi.org\/10.1017\/s0263574724000122","relation":{},"ISSN":["0263-5747","1469-8668"],"issn-type":[{"value":"0263-5747","type":"print"},{"value":"1469-8668","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,2,21]]}}}