{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,9,29]],"date-time":"2025-09-29T08:14:51Z","timestamp":1759133691142,"version":"3.41.0"},"reference-count":37,"publisher":"Association for Computing Machinery (ACM)","issue":"4","license":[{"start":{"date-parts":[[2018,7,30]],"date-time":"2018-07-30T00:00:00Z","timestamp":1532908800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100012245","name":"Guangdong Science and Technology Program","doi-asserted-by":"crossref","award":["2015A030312015"],"award-info":[{"award-number":["2015A030312015"]}],"id":[{"id":"10.13039\/501100012245","id-type":"DOI","asserted-by":"crossref"}]},{"DOI":"10.13039\/501100012166","name":"973 Program","doi-asserted-by":"crossref","award":["2015CB352501"],"award-info":[{"award-number":["2015CB352501"]}],"id":[{"id":"10.13039\/501100012166","id-type":"DOI","asserted-by":"crossref"}]},{"name":"Shenzhen Science and Technology Program","award":["JCYJ20170302153208613, JCYJ20151015151249564"],"award-info":[{"award-number":["JCYJ20170302153208613, JCYJ20151015151249564"]}]},{"DOI":"10.13039\/501100001809","name":"NSFC","doi-asserted-by":"crossref","award":["61602311, 61528208"],"award-info":[{"award-number":["61602311, 61528208"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"crossref"}]},{"name":"NSERC Canada","award":["611370, 611649, 2015-05407"],"award-info":[{"award-number":["611370, 611649, 2015-05407"]}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. 
Graph."],"published-print":{"date-parts":[[2018,8,31]]},"abstract":"<jats:p>\n            Humans can predict the functionality of an object even without any surroundings, since their knowledge and experience would allow them to \"hallucinate\" the interaction or usage scenarios involving the object. We develop\n            <jats:italic>predictive<\/jats:italic>\n            and\n            <jats:italic>generative<\/jats:italic>\n            deep convolutional neural networks to replicate this feat. Specifically, our work focuses on functionalities of man-made 3D objects characterized by human-object or object-object interactions. Our networks are trained on a database of scene contexts, called\n            <jats:italic>interaction contexts<\/jats:italic>\n            , each consisting of a central object and one or more surrounding objects, which represent object functionalities. Given a 3D object\n            <jats:italic>in isolation<\/jats:italic>\n            , our\n            <jats:italic>functional similarity network<\/jats:italic>\n            (fSIM-NET), a variation of the triplet network, is trained to predict the functionality of the object by inferring functionality-revealing interaction contexts. fSIM-NET is complemented by a generative network (iGEN-NET) and a segmentation network (iSEG-NET). iGEN-NET takes a single voxelized 3D object with a functionality label and synthesizes a voxelized surround, i.e., the interaction context which visually demonstrates the corresponding functionality. 
iSEG-NET further separates the interacting objects into different groups according to their interaction types.\n          <\/jats:p>","DOI":"10.1145\/3197517.3201287","type":"journal-article","created":{"date-parts":[[2018,7,31]],"date-time":"2018-07-31T15:56:23Z","timestamp":1533052583000},"page":"1-13","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["Predictive and generative neural networks for object functionality"],"prefix":"10.1145","volume":"37","author":[{"given":"Ruizhen","family":"Hu","sequence":"first","affiliation":[{"name":"Shenzhen University"}]},{"given":"Zihao","family":"Yan","sequence":"additional","affiliation":[{"name":"Shenzhen University"}]},{"given":"Jingwen","family":"Zhang","sequence":"additional","affiliation":[{"name":"Shenzhen University"}]},{"given":"Oliver","family":"Van Kaick","sequence":"additional","affiliation":[{"name":"Carleton University"}]},{"given":"Ariel","family":"Shamir","sequence":"additional","affiliation":[{"name":"The Interdisciplinary Center"}]},{"given":"Hao","family":"Zhang","sequence":"additional","affiliation":[{"name":"Simon Fraser University"}]},{"given":"Hui","family":"Huang","sequence":"additional","affiliation":[{"name":"Shenzhen 
University"}]}],"member":"320","published-online":{"date-parts":[[2018,7,30]]},"reference":[{"key":"e_1_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2004.60"},{"key":"e_1_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2017.264"},{"key":"e_1_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/2366145.2366154"},{"key":"e_1_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.1145\/3130800.3130805"},{"key":"e_1_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-46466-4_29"},{"key":"e_1_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2011.5995327"},{"key":"e_1_2_2_7_1","doi-asserted-by":"publisher","DOI":"10.1037\/xge0000129"},{"key":"e_1_2_2_8_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925939"},{"volume-title":"Proc. Int. Conf. on Computer Vision. 85--93","author":"Han X.","key":"e_1_2_2_9_1"},{"key":"e_1_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1145\/3130800.3130811"},{"key":"e_1_2_2_11_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925870"},{"key":"e_1_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766914"},{"key":"e_1_2_2_13_1","unstructured":"Max Jaderberg Karen Simonyan Andrew Zisserman and koray kavukcuoglu. 2015. Spatial Transformer Networks. In NIPS. 2017--2025."},{"key":"e_1_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2013.385"},{"key":"e_1_2_2_15_1","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601117"},{"key":"e_1_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073637"},{"key":"e_1_2_2_17_1","doi-asserted-by":"publisher","DOI":"10.1145\/2766929"},{"key":"e_1_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/2980179.2980223"},{"key":"e_1_2_2_19_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW.2015.112"},{"key":"e_1_2_2_20_1","doi-asserted-by":"crossref","unstructured":"Niloy Mitra Michael Wand Hao Zhang Daniel Cohen-Or and Martin Bokeloh. 2013. Structure-aware shape processing. In Eurographics State-of-the-art Report (STAR). 175--197.","DOI":"10.1145\/2542266.2542267"},{"key":"e_1_2_2_21_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cviu.2007.06.002"},{"key":"e_1_2_2_22_1","doi-asserted-by":"publisher","DOI":"10.1145\/3083725"},{"volume-title":"Guibas","year":"2017","author":"Qi Charles R.","key":"e_1_2_2_23_1"},{"key":"e_1_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/2661229.2661230"},{"key":"e_1_2_2_25_1","doi-asserted-by":"publisher","DOI":"10.1145\/2897824.2925867"},{"key":"e_1_2_2_26_1","doi-asserted-by":"publisher","DOI":"10.1109\/34.99242"},{"key":"e_1_2_2_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3130800.3130821"},{"key":"e_1_2_2_28_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2017.319"},{"key":"e_1_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.180"},{"key":"e_1_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1111\/j.1467-8659.2011.01885.x"},{"volume-title":"Tenenbaum","year":"2016","author":"Wu Jiajun","key":"e_1_2_2_31_1"},{"volume-title":"Proc. IEEE Conf. on Computer Vision & Pattern Recognition. 
1912--1920","author":"Wu Z.","key":"e_1_2_2_32_1"},{"key":"e_1_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1145\/2980179.2982410"},{"key":"e_1_2_2_34_1","doi-asserted-by":"publisher","DOI":"10.1145\/2574860"},{"volume-title":"Mitra","year":"2013","author":"Zheng Youyi","key":"e_1_2_2_35_1"},{"key":"e_1_2_2_36_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-10605-2_27"},{"key":"e_1_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2015.7298903"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3197517.3201287","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3197517.3201287","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T01:39:22Z","timestamp":1750210762000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3197517.3201287"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2018,7,30]]},"references-count":37,"journal-issue":{"issue":"4","published-print":{"date-parts":[[2018,8,31]]}},"alternative-id":["10.1145\/3197517.3201287"],"URL":"https:\/\/doi.org\/10.1145\/3197517.3201287","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"type":"print","value":"0730-0301"},{"type":"electronic","value":"1557-7368"}],"subject":[],"published":{"date-parts":[[2018,7,30]]},"assertion":[{"value":"2018-07-30","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}