{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,10]],"date-time":"2025-12-10T09:03:56Z","timestamp":1765357436410,"version":"3.37.3"},"reference-count":17,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2023,7,28]],"date-time":"2023-07-28T00:00:00Z","timestamp":1690502400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2023,7,28]],"date-time":"2023-07-28T00:00:00Z","timestamp":1690502400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/100010665","name":"H2020 Marie Sklodowska-Curie Actions","doi-asserted-by":"publisher","award":["813789"],"award-info":[{"award-number":["813789"]}],"id":[{"id":"10.13039\/100010665","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Vis Comput"],"published-print":{"date-parts":[[2024,3]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>In this paper, we propose a methodology for the fusion of shape from focus and reflectance transformation imaging. This fusion of two seemingly disparate methods of computational imaging is proposed with the purpose of leveraging their strengths in understanding overall surface structure (low-frequency detail) and surface texture\/micro-geometry (high-frequency detail), respectively. This fusion is achieved by our new proposal of the integration of varying light images at different focus distances. We compare three methods of integration: the mean gradient response, the maximum gradient response, and the full vector gradient (FVG). 
The validation of the tested methods was conducted using different focus measure window sizes and multi-light integration methods to provide a clear demonstration of the effectiveness of the proposed method. The FVG is determined to provide a higher-quality shape recovery of a complex object with the trade-off of increasing the scope of the image acquisition.\n<\/jats:p>","DOI":"10.1007\/s00371-023-02902-1","type":"journal-article","created":{"date-parts":[[2023,7,28]],"date-time":"2023-07-28T13:01:47Z","timestamp":1690549307000},"page":"2067-2079","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["SFF-RTI: an active multi-light approach to shape from focus"],"prefix":"10.1007","volume":"40","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3277-6260","authenticated-orcid":false,"given":"David A.","family":"Lewis","sequence":"first","affiliation":[]},{"given":"Hermine","family":"Chatoux","sequence":"additional","affiliation":[]},{"given":"Alamin","family":"Mansouri","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2023,7,28]]},"reference":[{"issue":"8","key":"2902_CR1","doi-asserted-by":"publisher","first-page":"824","DOI":"10.1109\/34.308479","volume":"16","author":"SK Nayar","year":"1994","unstructured":"Nayar, S.K., Nakagawa, Y.: Shape from focus. IEEE Trans. Pattern Anal. Mach. Intell. 16(8), 824\u2013831 (1994). https:\/\/doi.org\/10.1109\/34.308479","journal-title":"IEEE Trans. Pattern Anal. Mach. Intell."},{"key":"2902_CR2","doi-asserted-by":"publisher","unstructured":"Malzbender, T., Gelb, D., Wolters H.: Polynomial texture maps. In: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques\u2014SIGGRAPH \u201901, pp. 519\u2013528. ACM Press (2001). 
https:\/\/doi.org\/10.1145\/383259.383320","DOI":"10.1145\/383259.383320"},{"key":"2902_CR3","first-page":"321","volume":"2004","author":"P Gautron","year":"2004","unstructured":"Gautron, P., Kriv\u00e1nek, J., Pattanaik, S.N., Bouatouch, K.: A novel hemispherical basis for accurate and efficient rendering. Render. Tech. 2004, 321\u2013330 (2004)","journal-title":"Render. Tech."},{"issue":"5","key":"2902_CR4","doi-asserted-by":"publisher","first-page":"607","DOI":"10.1007\/s00138-017-0856-0","volume":"28","author":"G Pitard","year":"2017","unstructured":"Pitard, G., Le Go\u00efc, G., Mansouri, A., Favreli\u00e8re, H., Desage, S.-F., Samper, S., Pillet, M.: Discrete modal decomposition: a new approach for the reflectance modeling and rendering of real surfaces. Mach. Vis. Appl. 28(5), 607\u2013621 (2017)","journal-title":"Mach. Vis. Appl."},{"issue":"6","key":"2902_CR5","doi-asserted-by":"publisher","DOI":"10.1117\/1.1925119","volume":"44","author":"MA Bueno-Ibarra","year":"2005","unstructured":"Bueno-Ibarra, M.A., Borrego, J.\u00c1., Acho, L., Ch\u00e1vez-S\u00e1nchez, M.C.: Fast autofocus algorithm for automated microscopes. Opt. Eng. 44(6), 063601 (2005). https:\/\/doi.org\/10.1117\/1.1925119","journal-title":"Opt. Eng."},{"issue":"14","key":"2902_CR6","doi-asserted-by":"publisher","first-page":"1785","DOI":"10.1016\/S0167-8655(02)00152-6","volume":"23","author":"J Kautsky","year":"2002","unstructured":"Kautsky, J., Flusser, J., Zitov\u00e1, B., \u0160imberov\u00e1, S.: A new wavelet-based measure of image focus. Pattern Recognit. Lett. 23(14), 1785\u20131794 (2002). https:\/\/doi.org\/10.1016\/S0167-8655(02)00152-6","journal-title":"Pattern Recognit. Lett."},{"issue":"9","key":"2902_CR7","doi-asserted-by":"publisher","first-page":"1295","DOI":"10.1016\/j.patrec.2008.02.002","volume":"29","author":"S Li","year":"2008","unstructured":"Li, S., Yang, B.: Multifocus image fusion by combining curvelet and wavelet transform. Pattern Recognit. Lett. 
29(9), 1295\u20131301 (2008). https:\/\/doi.org\/10.1016\/j.patrec.2008.02.002","journal-title":"Pattern Recognit. Lett."},{"issue":"11","key":"2902_CR8","doi-asserted-by":"publisher","first-page":"1670","DOI":"10.1109\/83.967395","volume":"10","author":"M Asif","year":"2001","unstructured":"Asif, M.: Shape from focus using multilayer feedforward neural networks. IEEE Trans. Image Process. 10(11), 1670\u20131675 (2001). https:\/\/doi.org\/10.1109\/83.967395","journal-title":"IEEE Trans. Image Process."},{"issue":"9","key":"2902_CR9","doi-asserted-by":"publisher","first-page":"1648","DOI":"10.3390\/app8091648","volume":"8","author":"H-J Kim","year":"2018","unstructured":"Kim, H.-J., Mahmood, M., Choi, T.-S.: An efficient neural network for shape from focus with weight passing method. Appl. Sci. 8(9), 1648 (2018). https:\/\/doi.org\/10.3390\/app8091648","journal-title":"Appl. Sci."},{"issue":"4","key":"2902_CR10","doi-asserted-by":"publisher","first-page":"656","DOI":"10.1002\/jemt.23623","volume":"84","author":"H Mutahira","year":"2021","unstructured":"Mutahira, H., Muhammad, M.S., Li, M., Shin, D.R.: A simplified approach using deep neural network for fast and accurate shape from focus. Microsc. Res. Tech. 84(4), 656\u2013667 (2021). https:\/\/doi.org\/10.1002\/jemt.23623","journal-title":"Microsc. Res. Tech."},{"key":"2902_CR11","doi-asserted-by":"publisher","DOI":"10.1117\/12.7972479","author":"RJ Woodham","year":"1980","unstructured":"Woodham, R.J.: Photometric method for determining surface orientation from multiple images. Opt. Eng. (1980). https:\/\/doi.org\/10.1117\/12.7972479","journal-title":"Opt. Eng."},{"key":"2902_CR12","doi-asserted-by":"publisher","unstructured":"Fattal, R., Agrawala, M., Rusinkiewicz, S.: Multiscale shape and detail enhancement from multi-light image collections. ACM Trans. Graph. 26, 3 (2007). 
https:\/\/doi.org\/10.1145\/1276377.1276441","DOI":"10.1145\/1276377.1276441"},{"key":"2902_CR13","doi-asserted-by":"publisher","unstructured":"Raskar, R., Tan, K.-H., Feris, R., Yu, J., Turk, M.: Non-photorealistic camera: depth edge detection and stylized rendering using multi-flash imaging. ACM Trans. Graph. 23(3), 679\u2013688 (2004). https:\/\/doi.org\/10.1145\/1015706.1015779","DOI":"10.1145\/1015706.1015779"},{"issue":"11","key":"2902_CR14","doi-asserted-by":"publisher","first-page":"C154","DOI":"10.1364\/JOSAA.36.00C154","volume":"36","author":"H Chatoux","year":"2019","unstructured":"Chatoux, H., Richard, N., Lecellier, F., Fernandez-Maloigne, C.: Gradient in spectral and color images: from the Di Zenzo initial construction to a generic proposition. JOSA A 36(11), C154\u2013C165 (2019)","journal-title":"JOSA A"},{"key":"2902_CR15","unstructured":"The Blender Foundation (2021) Blender. https:\/\/www.blender.org\/"},{"key":"2902_CR16","unstructured":"Arch\u00e9omatique: Statue du parc d\u2019austerlitz, ajaccio (2a) (2021). https:\/\/sketchfab.com\/3d-models\/statue-du-parc-dausterlitz-ajaccio-2a-49737f2f578a43c29aa47d268c027ec2"},{"key":"2902_CR17","unstructured":"Muzeum Pa\u0142acu Kr\u00f3la Jana III w Wilanowie (Museum of King Jan III\u2019s Palace at Wilanow): Wycisk gemmy (Wil.3083) 2 II 167 (2021). 
https:\/\/sketchfab.com\/3d-models\/wycisk-gemmy-wil3083-2-ii-167-a07fb23f2d91439ea0c658b1c1a44440"}],"container-title":["The Visual Computer"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00371-023-02902-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00371-023-02902-1\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00371-023-02902-1.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,2,18]],"date-time":"2024-02-18T23:17:01Z","timestamp":1708298221000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00371-023-02902-1"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,7,28]]},"references-count":17,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2024,3]]}},"alternative-id":["2902"],"URL":"https:\/\/doi.org\/10.1007\/s00371-023-02902-1","relation":{},"ISSN":["0178-2789","1432-2315"],"issn-type":[{"type":"print","value":"0178-2789"},{"type":"electronic","value":"1432-2315"}],"subject":[],"published":{"date-parts":[[2023,7,28]]},"assertion":[{"value":"25 April 2023","order":1,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"28 July 2023","order":2,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare that they have no conflict of interest.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}]}}