{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,16]],"date-time":"2025-06-16T10:33:16Z","timestamp":1750069996298,"version":"3.37.3"},"reference-count":45,"publisher":"Springer Science and Business Media LLC","issue":"12","license":[{"start":{"date-parts":[[2021,10,7]],"date-time":"2021-10-07T00:00:00Z","timestamp":1633564800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2021,10,7]],"date-time":"2021-10-07T00:00:00Z","timestamp":1633564800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61801391"],"award-info":[{"award-number":["61801391"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"name":"Open Project Program of the National Laboratory of Pattern Recognition","award":["202000025"],"award-info":[{"award-number":["202000025"]}]},{"DOI":"10.13039\/501100002858","name":"China Postdoctoral Science Foundation","doi-asserted-by":"crossref","award":["2018M631193"],"award-info":[{"award-number":["2018M631193"]}],"id":[{"id":"10.13039\/501100002858","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Vis Comput"],"published-print":{"date-parts":[[2022,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>In applications of augmented reality or mixed reality, rendering virtual objects in real scenes with consistent illumination is crucial for realistic visualization experiences. 
Prior learning-based methods reported in the literature usually attempt to reconstruct complicated high dynamic range environment maps from limited input, and rely on a separate rendering pipeline to light up the virtual object. In this paper, an object-based illumination transferring and rendering algorithm is proposed to tackle this problem within a unified framework. Given a single low dynamic range image, instead of recovering the lighting environment of the entire scene, the proposed algorithm directly infers the relit virtual object. This is achieved by transferring implicit illumination features extracted from nearby planar surfaces. A generative adversarial network is adopted in the proposed algorithm for implicit illumination feature extraction and transfer. Compared to previous works in the literature, the proposed algorithm is more robust, as it efficiently recovers spatially varying illumination in both indoor and outdoor scene environments. Experiments conducted in different environments yield notable quantitative and qualitative results and comparison outcomes for the proposed algorithm. 
It shows the effectiveness and robustness for realistic virtual object insertion and improved realism.<\/jats:p>","DOI":"10.1007\/s00371-021-02292-2","type":"journal-article","created":{"date-parts":[[2021,10,7]],"date-time":"2021-10-07T04:56:07Z","timestamp":1633582567000},"page":"4251-4265","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":7,"title":["Object-based illumination transferring and rendering for applications of mixed reality"],"prefix":"10.1007","volume":"38","author":[{"given":"Di","family":"Xu","sequence":"first","affiliation":[]},{"given":"Zhen","family":"Li","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0003-3243-5693","authenticated-orcid":false,"given":"Qi","family":"Cao","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2021,10,7]]},"reference":[{"key":"2292_CR1","unstructured":"Azure spatial anchors. https:\/\/azure.microsoft.com\/en-us\/services\/spatial-anchors\/"},{"key":"2292_CR2","doi-asserted-by":"crossref","unstructured":"Barron, J.T., Malik, J.: Intrinsic scene properties from a single RGB-D image. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 17\u201324 (2013)","DOI":"10.1109\/CVPR.2013.10"},{"key":"2292_CR3","doi-asserted-by":"publisher","first-page":"829","DOI":"10.1007\/s00371-018-1550-6","volume":"34","author":"G Bui","year":"2018","unstructured":"Bui, G., Le, T., Morago, B., Duan, Y.: Point-based rendering enhancement via deep learning. Vis. Comput. 34, 829\u2013841 (2018). https:\/\/doi.org\/10.1007\/s00371-018-1550-6","journal-title":"Vis. Comput."},{"key":"2292_CR4","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13341","author":"DA Calian","year":"2018","unstructured":"Calian, D.A., Lalonde, J.-F., Gotardo, P., Simon, T., Matthews, I., Mitchell, K.: From faces to outdoor light probes. Comput. Graph. Forum (2018). 
https:\/\/doi.org\/10.1111\/cgf.13341","journal-title":"Comput. Graph. Forum"},{"key":"2292_CR5","doi-asserted-by":"crossref","unstructured":"Chauve, A.-L., Labatut, P., Pons, J.-P.: Robust piecewise-planar 3D reconstruction and completion from large-scale unstructured point data. In: 2010 IEEE computer society conference on computer vision and pattern recognition, pp. 1261\u20131268. IEEE (2010)","DOI":"10.1109\/CVPR.2010.5539824"},{"key":"2292_CR6","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13561","author":"D Cheng","year":"2018","unstructured":"Cheng, D., Shi, J., Chen, Y., Deng, X., Zhang, X.: Learning scene illumination by pairwise photos from rear and front mobile cameras. Comput. Graph. Forum (2018). https:\/\/doi.org\/10.1111\/cgf.13561","journal-title":"Comput. Graph. Forum"},{"key":"2292_CR7","doi-asserted-by":"crossref","unstructured":"Debevec, P.: A median cut algorithm for light probe sampling. In: ACM SIGGRAPH 2008 Classes, pp. 1\u20133 (2008)","DOI":"10.1145\/1401132.1401176"},{"key":"2292_CR8","doi-asserted-by":"crossref","unstructured":"Debevec, P.: Rendering synthetic objects into real scenes: bridging traditional and image-based graphics with global illumination and high dynamic range photography. In: ACM SIGGRAPH 2008 Classes, p.\u00a032. ACM (2008)","DOI":"10.1145\/1401132.1401175"},{"key":"2292_CR9","doi-asserted-by":"publisher","unstructured":"Debevec, P., Graham, P., Busch, J., Bolas, M.: A single-shot light probe, pp. 10:1\u201310:1 (2012). https:\/\/doi.org\/10.1145\/2343045.2343058","DOI":"10.1145\/2343045.2343058"},{"issue":"2","key":"2292_CR10","doi-asserted-by":"publisher","first-page":"335","DOI":"10.1109\/TMM.2017.2740025","volume":"20","author":"Y Gao","year":"2017","unstructured":"Gao, Y., Hu, H.-M., Li, B., Guo, Q.: Naturalness preserved nonuniform illumination estimation for image enhancement based on retinex. IEEE Trans. Multimed. 20(2), 335\u2013344 (2017)","journal-title":"IEEE Trans. 
Multimed."},{"key":"2292_CR11","doi-asserted-by":"crossref","unstructured":"Gardner, M.-A., Hold-Geoffroy, Y., Sunkavalli, K., Gagn\u00e9, C., Lalonde, J.-F.: Deep parametric indoor lighting estimation. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 7175\u20137183 (2019)","DOI":"10.1109\/ICCV.2019.00727"},{"key":"2292_CR12","unstructured":"Gardner, M.-A., Sunkavalli, K., Yumer, E., Shen, X., Gambaretto, E., Gagn\u00e9, C., Lalonde, J.-F.: Learning to predict indoor illumination from a single image. ACM Trans Graph (SIGGRAPH Asia)"},{"key":"2292_CR13","doi-asserted-by":"crossref","unstructured":"Garon, M., Sunkavalli, K., Hadap, S., Carr, N., Lalonde, J.-F.: Fast spatially-varying indoor lighting estimation. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)","DOI":"10.1109\/CVPR.2019.00707"},{"key":"2292_CR14","doi-asserted-by":"crossref","unstructured":"Georgoulis, S., Rematas, K., Ritschel, T., Fritz, M., Tuytelaars, T., Gool, L.\u00a0Van.: What is around the camera? In: Proceedings of the IEEE International Conference on Computer Vision, pp. 5170\u20135178 (2017)","DOI":"10.1109\/ICCV.2017.553"},{"key":"2292_CR15","doi-asserted-by":"crossref","unstructured":"Gkitsas, V., Zioulis, N., Alvarez, F., Zarpalas, D., Daras, P.: Deep lighting environment map estimation from spherical panoramas. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition Workshops, pp. 640\u2013641 (2020)","DOI":"10.1109\/CVPRW50498.2020.00328"},{"issue":"6","key":"2292_CR16","doi-asserted-by":"publisher","first-page":"1619","DOI":"10.1109\/TMM.2019.2945197","volume":"22","author":"X Han","year":"2019","unstructured":"Han, X., Yang, H., Xing, G., Liu, Y.: Asymmetric joint GANs for normalizing face illumination from a single image. IEEE Trans. Multimed. 22(6), 1619\u20131633 (2019)","journal-title":"IEEE Trans. 
Multimed."},{"key":"2292_CR17","doi-asserted-by":"crossref","unstructured":"Hold-Geoffroy, Y., Athawale, A., Lalonde, J.-F.: Deep sky modeling for single image outdoor lighting estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 6927\u20136935 (2019)","DOI":"10.1109\/CVPR.2019.00709"},{"key":"2292_CR18","doi-asserted-by":"crossref","unstructured":"Hold-Geoffroy, Y., Sunkavalli, K., Hadap, S., Gambaretto, E., Lalonde, J.-F.: Deep outdoor illumination estimation. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7312\u20137321 (2017)","DOI":"10.1109\/CVPR.2017.255"},{"key":"2292_CR19","doi-asserted-by":"publisher","first-page":"171","DOI":"10.1007\/s00371-009-0360-2","volume":"26","author":"K Jacobs","year":"2010","unstructured":"Jacobs, K., Nielsen, A.H., Vesterbaek, J., Loscos, C.: Coherent radiance capture of scenes under changing illumination conditions for relighting applications. Vis. Comput. 26, 171\u2013185 (2010). https:\/\/doi.org\/10.1007\/s00371-009-0360-2","journal-title":"Vis. Comput."},{"key":"2292_CR20","doi-asserted-by":"crossref","unstructured":"Johnson, M.K., Adelson, E.H.: Shape estimation in natural illumination. In: CVPR 2011, pp. 2553\u20132560. IEEE (2011)","DOI":"10.1109\/CVPR.2011.5995510"},{"key":"2292_CR21","doi-asserted-by":"crossref","unstructured":"Karsch, K., Hedau, V., Forsyth, D., Forsyth, D.:Hoiem. Rendering synthetic objects into legacy photographs. In: ACM Transactions on Graphics (TOG), vol.\u00a030, p. 157. ACM (2011)","DOI":"10.1145\/2070781.2024191"},{"key":"2292_CR22","unstructured":"Kipf, T., Welling, M.: Semi-supervised classification with graph convolutional networks (2017)"},{"key":"2292_CR23","doi-asserted-by":"crossref","unstructured":"LeGendre, C., Ma, W.-C., Fyffe, G., Flynn, J., Charbonnel, L., Busch, J., Debevec, P.: Deeplight: learning illumination for unconstrained mobile mixed reality. 
In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 5918\u20135928 (2019)","DOI":"10.1109\/CVPR.2019.00607"},{"key":"2292_CR24","doi-asserted-by":"crossref","unstructured":"Liu, C., Kim, K., Gu, J., Furukawa, Y., Kautz, J.: Planercnn: 3D plane detection and reconstruction from a single image. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4450\u20134459 (2019)","DOI":"10.1109\/CVPR.2019.00458"},{"key":"2292_CR25","doi-asserted-by":"crossref","unstructured":"Maier, R., Kim, K., Cremers, D., Kautz, J., Nie\u00dfner, M.: Intrinsic3d: high-quality 3D reconstruction by joint appearance and geometry optimization with spatially-varying lighting. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 3114\u20133122 (2017)","DOI":"10.1109\/ICCV.2017.338"},{"key":"2292_CR26","doi-asserted-by":"crossref","unstructured":"Mao, X., Li, Q., Xie, H., Lau, R.Y.K., Wang, Z., Smolley,S.P.: Least squares generative adversarial networks. In: 2017 IEEE International Conference on Computer Vision (ICCV), pp. 2813\u20132821 (2017)","DOI":"10.1109\/ICCV.2017.304"},{"key":"2292_CR27","unstructured":"Metz, L., Poole, B., Pfau, D., Sohldickstein, J.: Unrolled generative adversarial networks. arXiv: Learning (2016)"},{"issue":"8","key":"2292_CR28","doi-asserted-by":"publisher","first-page":"1956","DOI":"10.1109\/TMM.2017.2688924","volume":"19","author":"S-C Pei","year":"2017","unstructured":"Pei, S.-C., Shen, C.-T.: Color enhancement with adaptive illumination estimation for low-backlighted displays. IEEE Trans. Multimed. 19(8), 1956\u20131961 (2017)","journal-title":"IEEE Trans. 
Multimed."},{"key":"2292_CR29","volume-title":"High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting","author":"E Reinhard","year":"2010","unstructured":"Reinhard, E., Heidrich, W., Debevec, P., Pattanaik, S., Ward, G., Myszkowski, K.: High Dynamic Range Imaging: Acquisition, Display, and Image-Based Lighting. Morgan Kaufmann, Burlington (2010)"},{"key":"2292_CR30","doi-asserted-by":"publisher","first-page":"927","DOI":"10.1007\/s00371-013-0853-x","volume":"29","author":"Z Ren","year":"2013","unstructured":"Ren, Z., Gai, W., Zhong, F., Pettr\u00e9, J., Peng, Q.: Inserting virtual pedestrians into pedestrian groups video with behavior consistency. Vis. Comput. 29, 927\u2013936 (2013). https:\/\/doi.org\/10.1007\/s00371-013-0853-x","journal-title":"Vis. Comput."},{"key":"2292_CR31","doi-asserted-by":"crossref","unstructured":"Song, S., Funkhouser, T.: Neural illumination: lighting prediction for indoor environments. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)","DOI":"10.1109\/CVPR.2019.00708"},{"key":"2292_CR32","doi-asserted-by":"crossref","unstructured":"Srinivasan, P.P., Mildenhall, B., Tancik, M., Barron, J.T., Tucker, R., Snavely, N.: Lighthouse: Predicting Lighting Volumes for Spatially-Coherent Illumination. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 8080\u20138089 (2020)","DOI":"10.1109\/CVPR42600.2020.00810"},{"key":"2292_CR33","doi-asserted-by":"crossref","unstructured":"Tarko, J., Tompkin, J., Richardt, C.: Omnimr: omnidirectional mixed reality with spatially-varying environment reflections from moving 360 video cameras. In: 2019 IEEE conference on virtual reality and 3D user interfaces (VR), pp. 1177\u20131178. IEEE (2019)","DOI":"10.1109\/VR.2019.8798067"},{"key":"2292_CR34","doi-asserted-by":"crossref","unstructured":"Tsai, G., Xu, C., Liu, J., Kuipers, B.: Real-time indoor scene understanding using bayesian filtering with motion cues. 
In: ICCV, pp. 121\u2013128 (2011)","DOI":"10.1109\/ICCV.2011.6126233"},{"key":"2292_CR35","doi-asserted-by":"crossref","unstructured":"Weber, H., Pr\u00e9vost, D., Lalonde, J.-F.: Learning to estimate indoor lighting from 3D objects. In: 2018 International Conference on 3D Vision (3DV), pp. 199\u2013207. IEEE (2018)","DOI":"10.1109\/3DV.2018.00032"},{"key":"2292_CR36","doi-asserted-by":"crossref","unstructured":"Wei, X., Chen, G., Dong, Y., Lin, S., Tong, X.: Object-based illumination estimation with rendering-aware neural networks. arXiv preprint arXiv:2008.02514 (2020)","DOI":"10.1007\/978-3-030-58555-6_23"},{"key":"2292_CR37","doi-asserted-by":"crossref","unstructured":"Wu, C., Wilburn, B., Matsushita, Y., Theobalt, C.: High-quality shape from multi-view stereo and shading under general illumination. In: CVPR 2011, pp. 969\u2013976. IEEE (2011)","DOI":"10.1109\/CVPR.2011.5995388"},{"issue":"6","key":"2292_CR38","doi-asserted-by":"publisher","first-page":"200","DOI":"10.1145\/2661229.2661232","volume":"33","author":"C Wu","year":"2014","unstructured":"Wu, C., Zollh\u00f6fer, M., Nie\u00dfner, M., Stamminger, M., Izadi, S., Theobalt, C.: Real-time shading-based refinement for consumer depth cameras. ACM Trans. Graph. (ToG) 33(6), 200 (2014)","journal-title":"ACM Trans. Graph. (ToG)"},{"key":"2292_CR39","doi-asserted-by":"crossref","unstructured":"Xu, D., Duan, Q., Zheng, J., Zhang, J., Cai, J., Cham, T.-J.: Recovering surface details under general unknown illumination using shading and coarse multi-view stereo. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1526\u20131533 (2014)","DOI":"10.1109\/CVPR.2014.198"},{"issue":"2","key":"2292_CR40","doi-asserted-by":"publisher","first-page":"423","DOI":"10.1109\/TPAMI.2017.2671458","volume":"40","author":"D Xu","year":"2018","unstructured":"Xu, D., Duan, Q., Zheng, J., Zhang, J., Cai, J., Cham, T.-J.: Shading-based surface detail recovery under general unknown illumination. 
IEEE Trans. Pattern Anal. Mach. Intell. 40(2), 423\u2013436 (2018)","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"2292_CR41","doi-asserted-by":"crossref","unstructured":"Xu, D., Li, Z., Zhang, Y.: Real-time illumination estimation for mixed reality on mobile devices. In: 2020 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), pp. 703\u2013704. IEEE (2020)","DOI":"10.1109\/VRW50115.2020.00202"},{"key":"2292_CR42","doi-asserted-by":"crossref","unstructured":"Yi, R., Zhu, C., Tan, P., Lin, S.: Faces as lighting probes via unsupervised deep highlight extraction. In: The European Conference on Computer Vision (ECCV) (2018)","DOI":"10.1007\/978-3-030-01240-3_20"},{"key":"2292_CR43","doi-asserted-by":"crossref","unstructured":"Zhang, J., Lalonde, J.-F.: Learning high dynamic range from outdoor panoramas. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 4519\u20134528 (2017)","DOI":"10.1109\/ICCV.2017.484"},{"key":"2292_CR44","doi-asserted-by":"crossref","unstructured":"Zhang, J., Sunkavalli, K., Hold-Geoffroy, Y., Hadap, S., Eisenman, J., Lalonde, J.-F.: All-weather deep outdoor lighting estimation. In: The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2019)","DOI":"10.1109\/CVPR.2019.01040"},{"key":"2292_CR45","doi-asserted-by":"publisher","first-page":"1385","DOI":"10.1007\/s00371-016-1286-0","volume":"33","author":"M Zhu","year":"2017","unstructured":"Zhu, M., Morin, G., Charvillat, V., Ooi, W.T.: Sprite tree: an efficient image-based representation for networked virtual environments. Vis. Comput. 33, 1385\u20131402 (2017). https:\/\/doi.org\/10.1007\/s00371-016-1286-0","journal-title":"Vis. 
Comput."}],"container-title":["The Visual Computer"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00371-021-02292-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00371-021-02292-2\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00371-021-02292-2.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,12,14]],"date-time":"2022-12-14T15:10:14Z","timestamp":1671030614000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00371-021-02292-2"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,10,7]]},"references-count":45,"journal-issue":{"issue":"12","published-print":{"date-parts":[[2022,12]]}},"alternative-id":["2292"],"URL":"https:\/\/doi.org\/10.1007\/s00371-021-02292-2","relation":{},"ISSN":["0178-2789","1432-2315"],"issn-type":[{"type":"print","value":"0178-2789"},{"type":"electronic","value":"1432-2315"}],"subject":[],"published":{"date-parts":[[2021,10,7]]},"assertion":[{"value":"19 August 2021","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"7 October 2021","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}