{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,14]],"date-time":"2026-02-14T05:23:40Z","timestamp":1771046620860,"version":"3.50.1"},"reference-count":35,"publisher":"Association for Computing Machinery (ACM)","issue":"6","license":[{"start":{"date-parts":[[2015,11,2]],"date-time":"2015-11-02T00:00:00Z","timestamp":1446422400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100002920","name":"Research Grants Council, University Grants Committee, Hong Kong","doi-asserted-by":"publisher","award":["412911"],"award-info":[{"award-number":["412911"]}],"id":[{"id":"10.13039\/501100002920","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Graph."],"published-print":{"date-parts":[[2015,11,4]]},"abstract":"<jats:p>Photos compress 3D visual data to 2D. However, it is still possible to infer depth information even without sophisticated object learning. We propose a solution based on small-scale defocus blur inherent in optical lenses and tackle the estimation problem with a non-parametric matching scheme for natural images. It incorporates a matching prior with our newly constructed edgelet dataset using a non-local scheme, and includes semantic depth order cues for physically based inference. Several applications are enabled on natural images, including geometry based rendering and editing.<\/jats:p>","DOI":"10.1145\/2816795.2818136","type":"journal-article","created":{"date-parts":[[2015,10,27]],"date-time":"2015-10-27T12:36:39Z","timestamp":1445949399000},"page":"1-11","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":41,"title":["Break Ames room illusion"],"prefix":"10.1145","volume":"34","author":[{"given":"Jianping","family":"Shi","sequence":"first","affiliation":[{"name":"The Chinese University of Hong Kong"}]},{"given":"Xin","family":"Tao","sequence":"additional","affiliation":[{"name":"The Chinese University of Hong Kong"}]},{"given":"Li","family":"Xu","sequence":"additional","affiliation":[{"name":"SenseTime Group Limited"}]},{"given":"Jiaya","family":"Jia","sequence":"additional","affiliation":[{"name":"The Chinese University of Hong Kong"}]}],"member":"320","published-online":{"date-parts":[[2015,11,2]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2010.2047910"},{"key":"e_1_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1111\/j.1467-8659.2007.01080.x"},{"key":"e_1_2_1_3_1","doi-asserted-by":"crossref","unstructured":"Chakrabarti, A., Zickler, T., and Freeman, W. T. 2010. Analyzing spatially-varying blur. In CVPR, 2512--2519.","DOI":"10.1109\/CVPR.2010.5539954"},{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICCV.2013.37"},{"key":"e_1_2_1_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/1778765.1778768"},{"key":"e_1_2_1_6_1","unstructured":"Eigen, D., Puhrsch, C., and Fergus, R. 2014. Depth map prediction from a single image using a multi-scale deep network. In NIPS, 2366--2374."},{"key":"e_1_2_1_7_1","doi-asserted-by":"publisher","DOI":"10.1109\/34.689301"},{"key":"e_1_2_1_8_1","doi-asserted-by":"crossref","unstructured":"Hoiem, D., Stein, A. N., Efros, A. A., and Hebert, M. 2007. Recovering occlusion boundaries from a single image. In ICCV, 1--8.","DOI":"10.1109\/ICCV.2007.4408985"},{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/1141911.1141934"},{"key":"e_1_2_1_10_1","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-642-33715-4_56"},{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.3390\/s120201437"},{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2014.19"},{"key":"e_1_2_1_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/1276377.1276464"},{"key":"e_1_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1145\/1360612.1360654"},{"key":"e_1_2_1_15_1","first-page":"39","article-title":"Performance evaluation of prewitt edge detector for noisy images","volume":"6","author":"Maini R.","year":"2006","unstructured":"Maini, R., and Sohal, J. 2006. Performance evaluation of prewitt edge detector for noisy images. GVIP Journal 6, 3, 39--46.","journal-title":"GVIP Journal"},{"key":"e_1_2_1_16_1","volume-title":"International Conference on Computer Vision Theory and Application, 331--340","author":"Muja M.","unstructured":"Muja, M., and Lowe, D. G. 2009. Fast approximate nearest neighbors with automatic algorithm configuration. In International Conference on Computer Vision Theory and Application, 331--340."},{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2011.5995372"},{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1145\/1015706.1015720"},{"key":"e_1_2_1_19_1","unstructured":"Saxena, A., Chung, S. H., and Ng, A. Y. 2005. Learning depth from single monocular images. In NIPS, 1--8."},{"key":"e_1_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2008.132"},{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1023\/A:1014573219977"},{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1023\/A:1008175127327"},{"key":"e_1_2_1_23_1","doi-asserted-by":"crossref","unstructured":"Shi, J., Xu, L., and Jia, J. 2015. Just noticeable defocus blur detection and estimation. In CVPR, 1--8.","DOI":"10.1109\/CVPR.2015.7298665"},{"key":"e_1_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/2601097.2601159"},{"key":"e_1_2_1_25_1","doi-asserted-by":"publisher","DOI":"10.1007\/BF02028349"},{"key":"e_1_2_1_26_1","doi-asserted-by":"crossref","unstructured":"Tai, Y.-W., and Brown, M. S. 2009. Single image defocus map estimation using local contrast prior. In ICIP, 1797--1800.","DOI":"10.1109\/ICIP.2009.5414620"},{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/1276377.1276463"},{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1023\/A:1007905828438"},{"key":"e_1_2_1_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/1409060.1409072"},{"key":"e_1_2_1_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/2508363.2508404"},{"key":"e_1_2_1_31_1","doi-asserted-by":"crossref","unstructured":"Zhou, C., and Nayar, S. 2009. What are good apertures for defocus deblurring? In ICCP, 1--8.","DOI":"10.1109\/ICCPHOT.2009.5559018"},{"key":"e_1_2_1_32_1","doi-asserted-by":"crossref","unstructured":"Zhou, C., Lin, S., and Nayar, S. 2009. Coded aperture pairs for depth from defocus. In ICCV, 325--332.","DOI":"10.1109\/ICCV.2009.5459268"},{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2013.2279316"},{"key":"e_1_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2011.03.009"},{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1006\/cviu.2000.0899"}],"container-title":["ACM Transactions on Graphics"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/2816795.2818136","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/2816795.2818136","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T05:48:19Z","timestamp":1750225699000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/2816795.2818136"}},"subtitle":["depth from general single images"],"short-title":[],"issued":{"date-parts":[[2015,11,2]]},"references-count":35,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2015,11,4]]}},"alternative-id":["10.1145\/2816795.2818136"],"URL":"https:\/\/doi.org\/10.1145\/2816795.2818136","relation":{},"ISSN":["0730-0301","1557-7368"],"issn-type":[{"value":"0730-0301","type":"print"},{"value":"1557-7368","type":"electronic"}],"subject":[],"published":{"date-parts":[[2015,11,2]]},"assertion":[{"value":"2015-11-02","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}