{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,20]],"date-time":"2025-11-20T06:26:40Z","timestamp":1763620000503,"version":"3.45.0"},"reference-count":49,"publisher":"Springer Science and Business Media LLC","issue":"6","license":[{"start":{"date-parts":[[2025,9,25]],"date-time":"2025-09-25T00:00:00Z","timestamp":1758758400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,9,25]],"date-time":"2025-09-25T00:00:00Z","timestamp":1758758400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100002911","name":"Universidad Complutense de Madrid","doi-asserted-by":"crossref","id":[{"id":"10.13039\/501100002911","id-type":"DOI","asserted-by":"crossref"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J Vis"],"published-print":{"date-parts":[[2025,12]]},"abstract":"<jats:title>Abstract<\/jats:title>\n                  <jats:p>The camera position both reveals and hides parts of a 3D object. Additionally, the shape and discernible information of the object vary significantly with the camera\u2019s point of view. While existing research has focused on identifying the best point of view for solid objects, the additional information provided by semi-transparent\/translucent objects remains underexplored. This paper introduces a new approach that, without prior knowledge of a polygonal 3D object (whether solid or translucent), can automatically determine its best viewpoint by analyzing what the user sees on the screen: the object projected in 2D. Although identifying the best view may initially seem subjective, this paper demonstrates how this decision can be effectively systematized by a computer. 
Our method leverages an unsupervised learning approach to discover optimal viewpoints without requiring labeled datasets, thereby automating the process and reducing the need for human intervention. The optimal viewpoints identified by our approach were experimentally compared with those selected by users. Our results indicate that, on average, our method selects viewpoints of higher perceived quality than the users\u2019 manual selections.<\/jats:p>\n                  <jats:p>\n                    <jats:bold>Graphic abstract<\/jats:bold>\n                  <\/jats:p>","DOI":"10.1007\/s12650-025-01079-0","type":"journal-article","created":{"date-parts":[[2025,9,25]],"date-time":"2025-09-25T05:58:35Z","timestamp":1758779915000},"page":"1143-1167","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":0,"title":["Automatic viewpoint selection for polygonal objects through projected mesh analysis"],"prefix":"10.1007","volume":"28","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-9328-115X","authenticated-orcid":false,"given":"Fernando Carlos","family":"L\u00f3pez Hern\u00e1ndez","sequence":"first","affiliation":[]},{"given":"Jenaro","family":"S\u00e1nchez Monz\u00f3n","sequence":"additional","affiliation":[]},{"given":"Javier","family":"Rainer Granados","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,9,25]]},"reference":[{"key":"1079_CR1","doi-asserted-by":"publisher","DOI":"10.1016\/B978-0-32-391755-1.00024-9","author":"E Alexiou","year":"2023","unstructured":"Alexiou E et al (2023) Subjective and objective quality assessment for volumetric video. Academic Press. https:\/\/doi.org\/10.1016\/B978-0-32-391755-1.00024-9","journal-title":"Academic Press"},{"key":"1079_CR2","doi-asserted-by":"crossref","unstructured":"Alexiou E, Ebrahimi T (2019) Exploiting user interactivity in quality assessment of point cloud imaging. 
In: 2019 Eleventh International conference on quality of multimedia experience (QoMEX), pp 1\u20136","DOI":"10.1109\/QoMEX.2019.8743277"},{"issue":"3","key":"1079_CR3","doi-asserted-by":"publisher","first-page":"499","DOI":"10.1111\/j.1467-8659.2004.00781.x","volume":"23","author":"C And\u00fajar","year":"2004","unstructured":"And\u00fajar C, V\u00e1zquez P, Fair\u00e9n M (2004) Way-finder: guided tours through complex walkthrough models. Comput Graph Forum 23(3):499\u2013508","journal-title":"Comput Graph Forum"},{"issue":"1","key":"1079_CR4","doi-asserted-by":"publisher","first-page":"195","DOI":"10.1007\/S10489-021-02415-1","volume":"52","author":"J Bi","year":"2021","unstructured":"Bi J, Zhou Y, Tang Z, Luo Q (2021) Artificial electric field algorithm with inertia and repulsion for spherical minimum spanning tree. Appl Intell 52(1):195\u2013214. https:\/\/doi.org\/10.1007\/S10489-021-02415-1","journal-title":"Appl Intell"},{"key":"1079_CR5","doi-asserted-by":"publisher","DOI":"10.1007\/S11042-023-16169-0\/FIGURES\/11","author":"S Biswas","year":"2023","unstructured":"Biswas S, Kruijff E, Veas E (2023) View recommendation for multi-camera demonstration-based training. Multimed Tools Appl. https:\/\/doi.org\/10.1007\/S11042-023-16169-0\/FIGURES\/11","journal-title":"Multimed Tools Appl"},{"issue":"5","key":"1079_CR6","doi-asserted-by":"publisher","first-page":"370","DOI":"10.3390\/e20050370","volume":"20","author":"X Bonaventura","year":"2018","unstructured":"Bonaventura X, Feixas M, Sbert M, Chuang L, Wallraven C (2018) A survey of viewpoint selection methods for polygonal models. Entropy 20(5):370","journal-title":"Entropy"},{"key":"1079_CR7","doi-asserted-by":"publisher","unstructured":"Bordoloi UD, Han-Wei Shen (2006) View Selection for Volume Rendering. In: IEEE conference on visualization, pp 487\u2013494. https:\/\/doi.org\/10.1109\/visual.2005.1532833","DOI":"10.1109\/visual.2005.1532833"},{"key":"1079_CR8","unstructured":"Chang AX et al. 
(2015) ShapeNet: An Information-Rich 3D Model Repository. Accessed 04 Jan 2024. [Online]. Available: https:\/\/arxiv.org\/abs\/1512.03012v1"},{"issue":"11","key":"1079_CR9","doi-asserted-by":"publisher","first-page":"3703","DOI":"10.1007\/S00371-021-02203-5\/FIGURES\/10","volume":"38","author":"H Chu","year":"2022","unstructured":"Chu H, Le C, Wang R, Li X, Ma H (2022) Learning representative viewpoints in 3D shape recognition. Vis Comput 38(11):3703\u20133718. https:\/\/doi.org\/10.1007\/S00371-021-02203-5\/FIGURES\/10","journal-title":"Vis Comput"},{"issue":"4","key":"1079_CR10","doi-asserted-by":"publisher","first-page":"324","DOI":"10.1109\/THMS.2021.3090765","volume":"51","author":"J Dufek","year":"2021","unstructured":"Dufek J, Xiao X, Murphy RR (2021) Best viewpoints for external robots or sensors assisting other robots. IEEE Trans Human-Mach Syst 51(4):324\u2013334. https:\/\/doi.org\/10.1109\/THMS.2021.3090765","journal-title":"IEEE Trans Human-Mach Syst"},{"key":"1079_CR11","doi-asserted-by":"crossref","unstructured":"Dutagaci H, Cheung CP, Godil A (2010) A benchmark for best view selection of 3D objects. In: Proceedings of the ACM workshop on 3D object retrieval, pp 45\u201350","DOI":"10.1145\/1877808.1877819"},{"key":"1079_CR12","doi-asserted-by":"crossref","unstructured":"Freitag S, Weyers B, Kuhlen TW (2017) Efficient approximate computation of scene visibility based on navigation meshes and applications for navigation and scene analysis. In: 2017 IEEE symposium on 3D User Interfaces (3DUI), 2017, pp 134\u2013143","DOI":"10.1109\/3DUI.2017.7893330"},{"key":"1079_CR13","doi-asserted-by":"crossref","unstructured":"Freitag S, Weyers B, Kuhlen TW (2018) Interactive exploration assistance for immersive virtual environments based on object visibility and viewpoint quality. 
In: 2018 IEEE conference on virtual reality and 3D user interfaces (VR), pp 355\u2013362","DOI":"10.1109\/VR.2018.8447553"},{"issue":"1","key":"1079_CR14","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/S10846-023-02024-9","volume":"110","author":"G Fu","year":"2023","unstructured":"Fu G, Wang Y, Yang J, Wang S, Yang G (2023) Monocular visual navigation algorithm for nursing robots via deep learning oriented to dynamic object goal. J Intell Robot Syst 110(1):1\u201318. https:\/\/doi.org\/10.1007\/S10846-023-02024-9","journal-title":"J Intell Robot Syst"},{"key":"1079_CR15","unstructured":"Gallagher G (2023) Automating user-preferred camera placement for volume rendered scientific visualization. In: University of Oregon"},{"key":"1079_CR16","unstructured":"Genova K, Savva M, Chang AX, Funkhouser T (2017) Learning Where to Look: Data-Driven Viewpoint Set Selection for 3D Scenes. arXiv preprint arXiv:1704.02393"},{"key":"1079_CR17","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.14613","author":"S Hartwig","year":"2022","unstructured":"Hartwig S, Schelling M, Onzenoodt C, V\u00e1zquez P, Hermosilla P, Ropinski T (2022) Learning human viewpoint preferences from sparsely annotated models. Comput Graph Forum. https:\/\/doi.org\/10.1111\/cgf.14613","journal-title":"Comput Graph Forum"},{"key":"1079_CR18","doi-asserted-by":"crossref","unstructured":"He J, Wang L, Zhou W, Zhang H, Cui X, Guo Y (2018) Viewpoint Assessment and Recommendation for Photographing Architectures. In: IEEE Trans. Vis. Comput. Graph.","DOI":"10.1109\/TVCG.2018.2853751"},{"key":"1079_CR19","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2022.3153871","author":"P Hu","year":"2022","unstructured":"Hu P, Boorboor S, Marino J, Kaufman AE (2022) Geometry-aware planar embedding of treelike structures. IEEE Trans vis Comput Graph. 
https:\/\/doi.org\/10.1109\/TVCG.2022.3153871","journal-title":"IEEE Trans vis Comput Graph"},{"key":"1079_CR20","doi-asserted-by":"publisher","unstructured":"Jayasuriya M, Ranasinghe R, Dissanayake G (2020) Active perception for outdoor localisation with an omnidirectional camera. In: IEEE international conference on intelligent robots and systems, pp 4567\u20134574. https:\/\/doi.org\/10.1109\/IROS45743.2020.9340974.","DOI":"10.1109\/IROS45743.2020.9340974"},{"key":"1079_CR21","doi-asserted-by":"publisher","unstructured":"Kara PA, Tamboli R, Cserkaszky A, Barsi A, Martini M, Jana S (2018) Canonical 3D object orientation for interactive light-field visualization. In: Applications of Digital Image Processing XLI, p 10. https:\/\/doi.org\/10.1117\/12.2320556","DOI":"10.1117\/12.2320556"},{"key":"1079_CR22","doi-asserted-by":"publisher","unstructured":"Keinert J et al. (2023) Light field processing for media applications. In: Immersive Video Technologies, Academic Press, pp 227\u2013264. https:\/\/doi.org\/10.1016\/B978-0-32-391755-1.00015-8","DOI":"10.1016\/B978-0-32-391755-1.00015-8"},{"issue":"2","key":"1079_CR23","doi-asserted-by":"publisher","first-page":"190","DOI":"10.1080\/15502724.2022.2077753","volume":"19","author":"MG Kent","year":"2023","unstructured":"Kent MG, Schiavon S (2023) Predicting window view preferences using the environmental information criteria. LEUKOS 19(2):190\u2013209. https:\/\/doi.org\/10.1080\/15502724.2022.2077753","journal-title":"LEUKOS"},{"issue":"2","key":"1079_CR24","doi-asserted-by":"publisher","first-page":"361","DOI":"10.1111\/cgf.12317","volume":"33","author":"S Lienhard","year":"2014","unstructured":"Lienhard S, Specht M, Neubert B, Pauly M, M\u00fcller P (2014) Thumbnail galleries for procedural models. 
Comput Graph Forum 33(2):361\u2013370","journal-title":"Comput Graph Forum"},{"key":"1079_CR25","doi-asserted-by":"crossref","unstructured":"Liu D, Puri R, Kamath N, Bhattacharya S (2019) Modeling Image Composition for Visual Aesthetic Assessment. In: Proceedings of the IEEE conference on computer vision and pattern recognition workshops","DOI":"10.1109\/CVPRW.2019.00043"},{"key":"1079_CR26","doi-asserted-by":"publisher","first-page":"1804","DOI":"10.1109\/LSP.2022.3198601","volume":"29","author":"Z Lu","year":"2022","unstructured":"Lu Z, Huang H, Zeng H, Hou J, Ma KK (2022) Point cloud quality assessment via 3D edge similarity measurement. IEEE Signal Process Lett 29:1804\u20131808. https:\/\/doi.org\/10.1109\/LSP.2022.3198601","journal-title":"IEEE Signal Process Lett"},{"key":"1079_CR27","unstructured":"Majumder S et al. (2024) Which Viewpoint Shows it Best? Language for Weakly Supervising View Selection in Multi-view Videos. https:\/\/arxiv.org\/abs\/2411.08753v2"},{"key":"1079_CR28","unstructured":"Marsaglia N, Mathai M, Fields S, Childs H (2022) Automatic In Situ Camera Placement for Isosurfaces of Large-Scale Scientific Simulations. In: Eurographics Symposium on Parallel Graphics and Visualization"},{"key":"1079_CR29","doi-asserted-by":"publisher","unstructured":"Marsaglia N, Kawakami Y, Schwartz SD, Fields S, Childs H (2021) An Entropy-Based Approach for Identifying User-Preferred Camera Positions, Proc. - 2021 IEEE 11th Symp. Large Data Anal. Vis. LDAV pp 73\u201383, 2021, https:\/\/doi.org\/10.1109\/LDAV53230.2021.00015","DOI":"10.1109\/LDAV53230.2021.00015"},{"issue":"6","key":"1079_CR30","doi-asserted-by":"publisher","first-page":"464","DOI":"10.3390\/E26060464","volume":"26","author":"MY Martin","year":"2024","unstructured":"Martin MY, Sbert M, Chover M (2024) Viewpoint Selection for 3D-Games with f-Divergences. Entropy 26(6):464. 
https:\/\/doi.org\/10.3390\/E26060464","journal-title":"Entropy"},{"issue":"4","key":"1079_CR31","doi-asserted-by":"publisher","first-page":"591","DOI":"10.1162\/EVCO_a_00099","volume":"21","author":"A Moraglio","year":"2013","unstructured":"Moraglio A, Togelius J, Silva S (2013) Geometric differential evolution for combinatorial and programs spaces. Evol Comput 21(4):591\u2013624","journal-title":"Evol Comput"},{"key":"1079_CR32","doi-asserted-by":"crossref","unstructured":"Morrison D, Corke P, Leitner J (2019) Multi-view picking: next-best-view reaching for improved grasping in clutter. In: IEEE international conference on robotics and automation (ICRA)","DOI":"10.1109\/ICRA.2019.8793805"},{"issue":"no. 3","key":"1079_CR33","doi-asserted-by":"publisher","first-page":"280","DOI":"10.1016\/j.cag.2009.03.003","volume":"33","author":"M Mortara","year":"2009","unstructured":"Mortara M, Spagnuolo M (2009) Semantics-driven best view of 3D shapes. Comput Graph 33(3):280\u2013290","journal-title":"Comput Graph"},{"key":"1079_CR34","unstructured":"Murphy R (2019) Introduction to AI robotics. MIT Press"},{"key":"1079_CR35","unstructured":"Neumann L et al. (2005) Viewpoint Quality: Measures and Applications. In: Computational aesthetics\u201905: proceedings of the first eurographics conference on computational aesthetics in graphics, visualization and imaging, pp 185\u2013192"},{"key":"1079_CR36","first-page":"1","volume":"96","author":"D Plemenos","year":"1996","unstructured":"Plemenos D, Benayada M (1996) Intelligent display in scene modeling. new techniques to automatically compute good views. International Conference GraphiCon 96:1\u20135","journal-title":"International Conference GraphiCon"},{"issue":"8\u201310","key":"1079_CR37","doi-asserted-by":"publisher","first-page":"840","DOI":"10.1007\/s00371-005-0326-y","volume":"21","author":"O Polonsky","year":"2005","unstructured":"Polonsky O, Patan\u00e9 G, Biasotti S, Gotsman C, Spagnuolo M (2005a) What\u2019s in an image? 
Vis Comput 21(8\u201310):840\u2013847","journal-title":"Vis Comput"},{"issue":"8","key":"1079_CR38","doi-asserted-by":"publisher","first-page":"840","DOI":"10.1007\/S00371-005-0326-Y","volume":"21","author":"O Polonsky","year":"2005","unstructured":"Polonsky O, Patan\u00e9 G, Biasotti S, Gotsman C, Spagnuolo M (2005b) What\u2019s in an image? Vis Comput 21(8):840\u2013847. https:\/\/doi.org\/10.1007\/S00371-005-0326-Y","journal-title":"Vis Comput"},{"issue":"5","key":"1079_CR39","doi-asserted-by":"publisher","first-page":"109","DOI":"10.1145\/2019627.2019628","volume":"30","author":"A Secord","year":"2011","unstructured":"Secord A, Lu J, Finkelstein A, Singh M, Nealen A (2011) Perceptual models of viewpoint preference. ACM Trans Graph 30(5):109","journal-title":"ACM Trans Graph"},{"issue":"3","key":"1079_CR40","doi-asserted-by":"publisher","first-page":"173","DOI":"10.1007\/s00371-007-0182-z","volume":"24","author":"D Sokolov","year":"2008","unstructured":"Sokolov D, Plemenos D (2008) Virtual world explorations by using topological and semantic knowledge. Vis Comput 24(3):173\u2013185. https:\/\/doi.org\/10.1007\/s00371-007-0182-z","journal-title":"Vis Comput"},{"issue":"6","key":"1079_CR41","doi-asserted-by":"publisher","DOI":"10.2307\/2323479","volume":"93","author":"MG Stone","year":"1986","unstructured":"Stone MG (1986) A mnemonic for areas of polygons. Am Math Mon 93(6):479. https:\/\/doi.org\/10.2307\/2323479","journal-title":"Am Math Mon"},{"key":"1079_CR42","doi-asserted-by":"publisher","DOI":"10.1142\/s2424905x21400031","author":"Y-H Su","year":"2021","unstructured":"Su Y-H, Huang K, Hannaford B (2021) Multicamera 3D viewpoint adjustment for robotic surgery via deep reinforcement learning. J Med Robot Res. 
https:\/\/doi.org\/10.1142\/s2424905x21400031","journal-title":"J Med Robot Res"},{"key":"1079_CR43","doi-asserted-by":"publisher","unstructured":"Tamboli RR, Appina B, Kara PA, Martini MG, Channappayya SS, Jana S (2018) Effect of Primitive Features of Content on Perceived Quality of Light Field Visualization. In: 2018 10th International Conference on Quality of Multimedia Experience, QoMEX 2018. https:\/\/doi.org\/10.1109\/QoMEX.2018.8463421.","DOI":"10.1109\/QoMEX.2018.8463421"},{"key":"1079_CR44","unstructured":"V\u00e1zquez Alcocer PP, Feixas M, Sbert M, Heidrich W (2001) Viewpoint Selection using Viewpoint Entropy. In: Proc. Vision, Modeling and Visualization Conference, 2001, pp 273\u2013280"},{"issue":"2","key":"1079_CR45","doi-asserted-by":"publisher","first-page":"717","DOI":"10.1111\/j.1467-8659.2009.01412.x","volume":"28","author":"T Vieira","year":"2009","unstructured":"Vieira T et al (2009) Learning good views through intelligent galleries. Comput Graph Forum 28(2):717\u2013726","journal-title":"Comput Graph Forum"},{"issue":"no. 7","key":"1079_CR46","doi-asserted-by":"publisher","first-page":"1531","DOI":"10.1109\/TPAMI.2018.2840724","volume":"41","author":"W Wang","year":"2018","unstructured":"Wang W, Shen J, Ling H (2018) A deep network solution for attention and aesthetics aware photo cropping. IEEE Trans Pattern Anal Mach Intell 41(7):1531\u20131544","journal-title":"IEEE Trans Pattern Anal Mach Intell"},{"key":"1079_CR47","doi-asserted-by":"crossref","unstructured":"Wang W, Gao T (2016) Constructing canonical regions for fast and effective view selection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp 4114\u20134122","DOI":"10.1109\/CVPR.2016.446"},{"key":"1079_CR48","doi-asserted-by":"publisher","unstructured":"Xu Q, Fang F, Gauthier N, Li L, Lim JH (2020) Active Image Sampling on Canonical Views for Novel Object Detection. In: Proceedings - International Conference on Image Processing, ICIP, vol. 
2020-Octob, pp 2241\u20132245. https:\/\/doi.org\/10.1109\/ICIP40778.2020.9190661","DOI":"10.1109\/ICIP40778.2020.9190661"},{"key":"1079_CR49","doi-asserted-by":"publisher","first-page":"108602","DOI":"10.1109\/ACCESS.2020.3001230","volume":"8","author":"Y Zhang","year":"2020","unstructured":"Zhang Y, Fei G, Yang G (2020) 3D viewpoint estimation based on aesthetics. IEEE Access 8:108602\u2013108621. https:\/\/doi.org\/10.1109\/ACCESS.2020.3001230","journal-title":"IEEE Access"}],"container-title":["Journal of Visualization"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s12650-025-01079-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s12650-025-01079-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s12650-025-01079-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,11,20]],"date-time":"2025-11-20T05:50:20Z","timestamp":1763617820000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s12650-025-01079-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,9,25]]},"references-count":49,"journal-issue":{"issue":"6","published-print":{"date-parts":[[2025,12]]}},"alternative-id":["1079"],"URL":"https:\/\/doi.org\/10.1007\/s12650-025-01079-0","relation":{},"ISSN":["1343-8875","1875-8975"],"issn-type":[{"type":"print","value":"1343-8875"},{"type":"electronic","value":"1875-8975"}],"subject":[],"published":{"date-parts":[[2025,9,25]]},"assertion":[{"value":"8 August 2024","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"4 April 
2025","order":2,"name":"revised","label":"Revised","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"1 July 2025","order":3,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"25 September 2025","order":4,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no financial interests or personal relationships that could have influenced the work reported in this paper.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflicts of interest"}}]}}