{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,25]],"date-time":"2026-02-25T20:36:48Z","timestamp":1772051808328,"version":"3.50.1"},"reference-count":30,"publisher":"Emerald","issue":"2","license":[{"start":{"date-parts":[[2021,12,30]],"date-time":"2021-12-30T00:00:00Z","timestamp":1640822400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.emerald.com\/insight\/site-policies"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":["IR"],"published-print":{"date-parts":[[2022,2,11]]},"abstract":"<jats:sec>\n<jats:title content-type=\"abstract-subheading\">Purpose<\/jats:title>\n<jats:p>This paper aims to use fully convolutional network (FCN) to predict pixel-wise antipodal grasp affordances for unknown objects and improve the grasp detection performance through multi-scale feature fusion.<\/jats:p>\n<\/jats:sec>\n<jats:sec>\n<jats:title content-type=\"abstract-subheading\">Design\/methodology\/approach<\/jats:title>\n<jats:p>A modified FCN network is used as the backbone to extract pixel-wise features from the input image, which are further fused with multi-scale context information gathered by a three-level pyramid pooling module to make more robust predictions. Based on the proposed unify feature embedding framework, two head networks are designed to implement different grasp rotation prediction strategies (regression and classification), and their performances are evaluated and compared with a defined point metric. The regression network is further extended to predict the grasp rectangles for comparisons with previous methods and real-world robotic grasping of unknown objects.<\/jats:p>\n<\/jats:sec>\n<jats:sec>\n<jats:title content-type=\"abstract-subheading\">Findings<\/jats:title>\n<jats:p>The ablation study of the pyramid pooling module shows that the multi-scale information fusion significantly improves the model performance. The regression approach outperforms the classification approach based on same feature embedding framework on two data sets. The regression network achieves a state-of-the-art accuracy (up to 98.9%) and speed (4\u2009ms per image) and high success rate (97% for household objects, 94.4% for adversarial objects and 95.3% for objects in clutter) in the unknown object grasping experiment.<\/jats:p>\n<\/jats:sec>\n<jats:sec>\n<jats:title content-type=\"abstract-subheading\">Originality\/value<\/jats:title>\n<jats:p>A novel pixel-wise grasp affordance prediction network based on multi-scale feature fusion is proposed to improve the grasp detection performance. Two prediction approaches are formulated and compared based on the proposed framework. The proposed method achieves excellent performances on three benchmark data sets and real-world robotic grasping experiment.<\/jats:p>\n<\/jats:sec>","DOI":"10.1108\/ir-06-2021-0118","type":"journal-article","created":{"date-parts":[[2021,12,28]],"date-time":"2021-12-28T11:29:10Z","timestamp":1640690950000},"page":"368-381","source":"Crossref","is-referenced-by-count":6,"title":["Real-time pixel-wise grasp affordance prediction based on multi-scale context information fusion"],"prefix":"10.1108","volume":"49","author":[{"given":"Yongxiang","family":"Wu","sequence":"first","affiliation":[]},{"given":"Yili","family":"Fu","sequence":"additional","affiliation":[]},{"given":"Shuguo","family":"Wang","sequence":"additional","affiliation":[]}],"member":"140","published-online":{"date-parts":[[2021,12,30]]},"reference":[{"issue":"3","key":"key2022102108064395900_ref001","doi-asserted-by":"crossref","first-page":"547","DOI":"10.1109\/TRO.2016.2638453","article-title":"RGB-D object recognition and grasp detection using hierarchical cascaded forests","volume":"33","year":"2017","journal-title":"IEEE Transactions on Robotics"},{"key":"key2022102108064395900_ref002","first-page":"4875","article-title":"GraspNet: an efficient convolutional neural network for real-time grasp detection for low-powered devices","year":"2018"},{"key":"key2022102108064395900_ref003","first-page":"4960","article-title":"MetaGrasp: data efficient grasping by affordance interpreter network","year":"2019"},{"issue":"3","key":"key2022102108064395900_ref004","doi-asserted-by":"crossref","first-page":"57","DOI":"10.3390\/mti2030057","article-title":"Review of deep learning methods in robotic grasp detection","volume":"2","year":"2018","journal-title":"Multimodal Technologies and Interaction"},{"issue":"4","key":"key2022102108064395900_ref005","doi-asserted-by":"crossref","first-page":"3355","DOI":"10.1109\/LRA.2018.2852777","article-title":"Real-world multiobject, multigrasp detection","volume":"3","year":"2018","journal-title":"IEEE Robotics and Automation Letters"},{"key":"key2022102108064395900_ref006","first-page":"3511","article-title":"Jacquard: a large scale dataset for robotic grasp detection","year":"2018"},{"key":"key2022102108064395900_ref007","article-title":"Optimizing correlated graspability score and grasp regression for better grasp prediction","volume-title":"ArXiv Preprint ArXiv:2002.00872","year":"2020"},{"key":"key2022102108064395900_ref008","first-page":"1609","article-title":"A hybrid deep architecture for robotic grasp detection","year":"2017"},{"key":"key2022102108064395900_ref009","first-page":"770","article-title":"Deep residual learning for image recognition","year":"2016"},{"key":"key2022102108064395900_ref010","article-title":"SqueezeNet: alexNet-level accuracy with 50\u00d7 fewer parameters and <0.5MB model size","volume-title":"ArXiv Preprint ArXiv:1602.07360","year":"2016"},{"key":"key2022102108064395900_ref011","first-page":"3304","article-title":"Efficient grasping from RGBD images: learning using a new rectangle representation","year":"2011"},{"key":"key2022102108064395900_ref012","first-page":"4953","article-title":"Object detection approach for robot grasp detection","year":"2019"},{"key":"key2022102108064395900_ref013","article-title":"Adam: a method for stochastic optimization","year":"2015"},{"issue":"6","key":"key2022102108064395900_ref014","doi-asserted-by":"crossref","first-page":"84","DOI":"10.1145\/3065386","article-title":"ImageNet classification with deep convolutional neural networks","volume":"60","year":"2017","journal-title":"Communications of the ACM"},{"key":"key2022102108064395900_ref015","first-page":"769","article-title":"Robotic grasp detection using deep convolutional neural networks","year":"2017"},{"key":"key2022102108064395900_ref016","article-title":"Antipodal robotic grasping using generative residual convolutional neural network","year":"2020"},{"issue":"4\/5","key":"key2022102108064395900_ref017","first-page":"705","article-title":"Deep learning for detecting robotic grasps","volume":"34","year":"2015","journal-title":"The International Journal of Robotics Research"},{"key":"key2022102108064395900_ref018","article-title":"Dex-Net 2.0: deep learning to plan robust grasps with synthetic point clouds and analytic grasp metrics","volume":"13","year":"2017","journal-title":"Robotics: Science and Systems"},{"issue":"2\/3","key":"key2022102108064395900_ref019","first-page":"183","article-title":"Learning robust, real-time, reactive robotic grasping","volume":"39","year":"2020","journal-title":"The International Journal of Robotics Research"},{"key":"key2022102108064395900_ref020","article-title":"Closing the loop for robotic grasping: a real-time, generative grasp synthesis approach","volume":"14","year":"2018","journal-title":"Robotics: Science and Systems XIV"},{"key":"key2022102108064395900_ref021","first-page":"3406","article-title":"Supersizing self-supervision: learning to grasp from 50K tries and 700 robot hours","year":"2016"},{"key":"key2022102108064395900_ref022","first-page":"1316","article-title":"Real-time grasp detection using convolutional neural networks","year":"2015"},{"key":"key2022102108064395900_ref023","first-page":"6517","article-title":"YOLO9000: better, faster, stronger","year":"2017"},{"key":"key2022102108064395900_ref024","article-title":"Very deep convolutional networks for large-scale image recognition","year":"2015"},{"key":"key2022102108064395900_ref025","first-page":"1","article-title":"FCOS: a simple and strong anchor-free object detector","year":"2020","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"key2022102108064395900_ref026","first-page":"278364919868017","article-title":"Robotic pick-and-place of novel objects in clutter with multi-affordance grasping and cross-domain image matching","year":"2019","journal-title":"The International Journal of Robotics Research"},{"key":"key2022102108064395900_ref027","first-page":"3014","article-title":"A real-time robotic grasping approach with oriented anchor box","year":"2021"},{"key":"key2022102108064395900_ref028","first-page":"6230","article-title":"Pyramid scene parsing network","year":"2017"},{"key":"key2022102108064395900_ref029","article-title":"Object detectors emerge in deep scene CNNs","year":"2015"},{"key":"key2022102108064395900_ref030","first-page":"7223","article-title":"Fully convolutional grasp detection network with oriented anchor box","year":"2018"}],"container-title":["Industrial Robot: the international journal of robotics research and application"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/IR-06-2021-0118\/full\/xml","content-type":"application\/xml","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/www.emerald.com\/insight\/content\/doi\/10.1108\/IR-06-2021-0118\/full\/html","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,7,24]],"date-time":"2025-07-24T21:39:22Z","timestamp":1753393162000},"score":1,"resource":{"primary":{"URL":"http:\/\/www.emerald.com\/ir\/article\/49\/2\/368-381\/186233"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,12,30]]},"references-count":30,"journal-issue":{"issue":"2","published-online":{"date-parts":[[2021,12,30]]},"published-print":{"date-parts":[[2022,2,11]]}},"alternative-id":["10.1108\/IR-06-2021-0118"],"URL":"https:\/\/doi.org\/10.1108\/ir-06-2021-0118","relation":{},"ISSN":["0143-991X","0143-991X"],"issn-type":[{"value":"0143-991X","type":"print"},{"value":"0143-991X","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,12,30]]}}}