{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,13]],"date-time":"2026-04-13T16:55:52Z","timestamp":1776099352753,"version":"3.50.1"},"publisher-location":"New York, NY, USA","reference-count":69,"publisher":"ACM","license":[{"start":{"date-parts":[[2022,10,28]],"date-time":"2022-10-28T00:00:00Z","timestamp":1666915200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2022,10,29]]},"DOI":"10.1145\/3526113.3545633","type":"proceedings-article","created":{"date-parts":[[2022,10,28]],"date-time":"2022-10-28T16:37:41Z","timestamp":1666975061000},"page":"1-15","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":31,"title":["Look over there! Investigating Saliency Modulation for Visual Guidance with Augmented Reality Glasses"],"prefix":"10.1145","author":[{"given":"Jonathan","family":"Sutton","sequence":"first","affiliation":[{"name":"University of Otago, New Zealand"}]},{"given":"Tobias","family":"Langlotz","sequence":"additional","affiliation":[{"name":"University of Otago, New Zealand"}]},{"given":"Alexander","family":"Plopski","sequence":"additional","affiliation":[{"name":"Graz University of Technology, Austria and Otago Business School, University of Otago, New Zealand"}]},{"given":"Stefanie","family":"Zollmann","sequence":"additional","affiliation":[{"name":"Computer Science, University of Otago, New Zealand"}]},{"given":"Yuta","family":"Itoh","sequence":"additional","affiliation":[{"name":"The University of Tokyo, Japan and Tokyo Institute of Technology, 
Japan"}]},{"given":"Holger","family":"Regenbrecht","sequence":"additional","affiliation":[{"name":"Department of Information Science, University of Otago, New Zealand"}]}],"member":"320","published-online":{"date-parts":[[2022,10,28]]},"reference":[{"key":"e_1_3_2_2_1_1","doi-asserted-by":"publisher","DOI":"10.1007\/s10055-017-0319-y"},{"key":"e_1_3_2_2_2_1","doi-asserted-by":"publisher","DOI":"10.1145\/3206505.3206517"},{"key":"e_1_3_2_2_3_1","doi-asserted-by":"publisher","DOI":"10.1145\/1559755.1559757"},{"key":"e_1_3_2_2_4_1","doi-asserted-by":"publisher","DOI":"10.2753\/mis0742-1222230408"},{"key":"e_1_3_2_2_5_1","doi-asserted-by":"publisher","DOI":"10.1145\/1124772.1124939"},{"key":"e_1_3_2_2_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/2492494.2492508"},{"key":"e_1_3_2_2_7_1","first-page":"0 (2015), 4\u00a0pag","volume-title":"CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research. CVPR 2015 workshop on \u201dFuture of Datasets\u201d 0","author":"Borji Ali","year":"2015","unstructured":"Ali Borji and Laurent Itti. 2015. CAT2000: A Large Scale Fixation Dataset for Boosting Saliency Research. CVPR 2015 workshop on \u201dFuture of Datasets\u201d 0, 0 (2015), 4\u00a0pages. arXiv preprint arXiv:1505.03581."},{"key":"e_1_3_2_2_8_1","first-page":"2906","article-title":"FocusAR: Auto-focus Augmented Reality Eyeglasses for both Real World and Virtual Imagery","volume":"24","author":"Chakravarthula P.","year":"2018","unstructured":"P. Chakravarthula, D. Dunn, K. Ak\u015fit, and H. Fuchs. 2018. FocusAR: Auto-focus Augmented Reality Eyeglasses for both Real World and Virtual Imagery. IEEE TVCG 24, 11 (2018), 2906\u20132916. 
https:\/\/doi.org\/10.1109\/TVCG.2018.2868532","journal-title":"IEEE TVCG"},{"key":"e_1_3_2_2_9_1","doi-asserted-by":"publisher","DOI":"10.1145\/3414685.3417846"},{"key":"e_1_3_2_2_10_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2018.2851672"},{"key":"e_1_3_2_2_11_1","volume-title":"More than meets the eye: An engineering study to empirically examine the blending of real and virtual color spaces","author":"Gabbard L.","year":"2010","unstructured":"J.\u00a0L. Gabbard, J.\u00a0E. Swan, J. Zedlitz, and W.\u00a0W. Winchester. 2010. More than meets the eye: An engineering study to empirically examine the blending of real and virtual color spaces. In IEEE VR. IEEE, Boston, MA, USA, 79\u201386. https:\/\/doi.org\/10.1109\/VR.2010.5444808"},{"key":"e_1_3_2_2_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISMAR.2012.6402555"},{"key":"e_1_3_2_2_13_1","doi-asserted-by":"publisher","DOI":"10.1145\/3119881.3119890"},{"key":"e_1_3_2_2_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2016.2543720"},{"key":"e_1_3_2_2_15_1","volume-title":"Saliency-Based Image Processing for Guiding Visual Attention","author":"Hagiwara Akira","unstructured":"Akira Hagiwara, Akihiro Sugimoto, and Kazuhiko Kawamoto. 2011. Saliency-Based Image Processing for Guiding Visual Attention. In PETMEI. Association for Computing Machinery, New York, NY, USA, 1\u20138. 
"},{"key":"e_1_3_2_2_16_1","doi-asserted-by":"publisher","DOI":"10.1145\/2909132.2909254"},{"key":"e_1_3_2_2_17_1","volume-title":"Saliency Detection: A Spectral Residual Approach. In 2007 IEEE Conference on Computer Vision and Pattern Recognition. IEEE","author":"Hou X.","unstructured":"X. Hou and L. Zhang. 2007. Saliency Detection: A Spectral Residual Approach. In 2007 IEEE Conference on Computer Vision and Pattern Recognition. IEEE, New York, NY, USA, 1\u20138."},{"key":"e_1_3_2_2_18_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2015.2459892"},{"key":"e_1_3_2_2_19_1","first-page":"2463","article-title":"Occlusion Leak Compensation for Optical See-Through Displays Using a Single-Layer Transmissive Spatial Light Modulator","volume":"23","author":"Itoh Y.","year":"2017","unstructured":"Y. Itoh, T. Hamasaki, and M. Sugimoto. 2017. Occlusion Leak Compensation for Optical See-Through Displays Using a Single-Layer Transmissive Spatial Light Modulator. IEEE TVCG 23, 11 (2017), 2463\u20132473. 
https:\/\/doi.org\/10.1109\/TVCG.2017.2734427","journal-title":"IEEE TVCG"},{"key":"e_1_3_2_2_20_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2019.2899229"},{"key":"e_1_3_2_2_21_1","first-page":"5","article-title":"Light Attenuation Display: Subtractive See-Through Near-Eye Display via Spatial Color Filtering","volume":"25","author":"Itoh Y.","year":"2019","unstructured":"Y. Itoh, T. Langlotz, D. Iwai, K. Kiyokawa, and T. Amano. 2019. Light Attenuation Display: Subtractive See-Through Near-Eye Display via Spatial Color Filtering. IEEE TVCG 25, 5 (May 2019), 1951\u20131960. https:\/\/doi.org\/10.1109\/TVCG.2019.2899229","journal-title":"IEEE TVCG"},{"key":"e_1_3_2_2_22_1","volume-title":"Article 120 (jul","author":"Itoh Yuta","year":"2021","unstructured":"Yuta Itoh, Tobias Langlotz, Jonathan Sutton, and Alexander Plopski. 2021. Towards Indistinguishable Augmented Reality: A Survey on Optical See-through Head-Mounted Displays. ACM Comput. Surv. 54, 6, Article 120 (jul 2021), 36\u00a0pages. 
https:\/\/doi.org\/10.1145\/3453157"},{"key":"e_1_3_2_2_23_1","doi-asserted-by":"publisher","DOI":"10.1109\/34.730558"},{"key":"e_1_3_2_2_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/3355089.3356557"},{"key":"e_1_3_2_2_25_1","doi-asserted-by":"crossref","first-page":"4","DOI":"10.1109\/TVCG.2007.70624","article-title":"Persuading Visual Attention Through Geometry","volume":"14","author":"Kim Youngmin","year":"2008","unstructured":"Youngmin Kim and Amitabh Varshney. 2008. Persuading Visual Attention Through Geometry. IEEE Transactions on Visualization and Computer Graphics 14, 4 (July 2008), 772\u2013782. https:\/\/doi.org\/10.1109\/TVCG.2007.70624","journal-title":"IEEE Transactions on Visualization and Computer Graphics"},{"key":"e_1_3_2_2_26_1","first-page":"219","article-title":"Shifts in Selective Visual Attention: Towards the Underlying Neural Circuitry","volume":"4","author":"Koch C.","year":"1987","unstructured":"C. Koch and S. Ullman. 1987. Shifts in Selective Visual Attention: Towards the Underlying Neural Circuitry. Matters of Intelligence 4, 4 (1987), 219\u2013227.","journal-title":"Matters of Intelligence"},{"key":"e_1_3_2_2_27_1","volume-title":"Color image modification based on visual saliency for guiding visual attention. In 2013 IEEE RO-MAN","author":"Kokui Tatsuhiko","year":"2013","unstructured":"Tatsuhiko Kokui, Hironori Takimoto, Yasue Mitsukura, Mitsuyoshi Kishihara, and Kensuke Okubo. 2013. Color image modification based on visual saliency for guiding visual attention. In 2013 IEEE RO-MAN. 
IEEE, New York, NY, USA, 467\u2013472. https:\/\/doi.org\/10.1109\/ROMAN.2013.6628548"},{"key":"e_1_3_2_2_28_1","unstructured":"Alexander Kroner, Mario Senden, Kurt Driessens, and Rainer Goebel. 2019. Contextual Encoder-Decoder Network for Visual Saliency Prediction. CoRR abs\/1902.06634 (2019), 261\u2013270. arxiv:1902.06634"},{"key":"e_1_3_2_2_29_1","doi-asserted-by":"publisher","DOI":"10.1145\/3173574.3173964"},{"key":"e_1_3_2_2_30_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025757"},{"key":"e_1_3_2_2_31_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISMAR.2012.6402553"},{"key":"e_1_3_2_2_32_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2013.241"},{"key":"e_1_3_2_2_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISMAR.2013.6671800"},{"key":"e_1_3_2_2_34_1","first-page":"307","article-title":"Wearable camera system with viewfinder means","volume":"6","author":"Mann G","year":"2001","unstructured":"W\u00a0Steve\u00a0G Mann. 2001. Wearable camera system with viewfinder means. US Patent 6,307,526.","journal-title":"US Patent"},{"key":"e_1_3_2_2_35_1","doi-asserted-by":"publisher","DOI":"10.1145\/2662996.2663009"},{"key":"e_1_3_2_2_36_1","volume-title":"Proceedings of the 1st International Workshop on Perception Inspired Video Processing","author":"A.","unstructured":"Victor\u00a0A. Mateescu and Ivan\u00a0V. Baji\u0107. 2014. 
Can Subliminal Flicker Guide Attention in Natural Images?. In Proceedings of the 1st International Workshop on Perception Inspired Video Processing (Orlando, Florida, USA) (PIVP \u201914). Association for Computing Machinery, New York, NY, USA, 33\u201334. https:\/\/doi.org\/10.1145\/2662996.2663012"},{"key":"e_1_3_2_2_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/1394281.1394289"},{"key":"#cr-split#-e_1_3_2_2_38_1.1","doi-asserted-by":"crossref","unstructured":"Erick Mendez, Steven Feiner, and Dieter Schmalstieg. 2010. Focus and context in mixed reality by modulating first order salient features. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 6133 LNCS (2010), 232-243. https:\/\/doi.org\/10.1007\/978-3-642-13544-6_22","DOI":"10.1007\/978-3-642-13544-6_22"},{"key":"#cr-split#-e_1_3_2_2_38_1.2","doi-asserted-by":"crossref","unstructured":"Erick Mendez, Steven Feiner, and Dieter Schmalstieg. 2010. Focus and context in mixed reality by modulating first order salient features. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 6133 LNCS (2010), 232-243. 
https:\/\/doi.org\/10.1007\/978-3-642-13544-6_22","DOI":"10.1007\/978-3-642-13544-6_22"},{"key":"e_1_3_2_2_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3203199"},{"key":"e_1_3_2_2_40_1","doi-asserted-by":"publisher","DOI":"10.1145\/3281505.3281537"},{"key":"e_1_3_2_2_41_1","doi-asserted-by":"publisher","DOI":"10.1109\/TMM.2013.2272919"},{"key":"e_1_3_2_2_42_1","first-page":"1","article-title":"The Tobii I-VT fixation filter","volume":"0","author":"Olsen Anneli","year":"2012","unstructured":"Anneli Olsen. 2012. The Tobii I-VT fixation filter. Tobii Technology 0, 0 (2012), 1\u201321.","journal-title":"Tobii Technology"},{"key":"e_1_3_2_2_43_1","volume-title":"Proceedings - 2017 14th Conference on Computer and Robot Vision, CRV 2017 2018-January(2018)","author":"Pal Rajarshi","year":"2018","unstructured":"Rajarshi Pal and Dipanjan Roy. 2018. Enhancing Saliency of an Object Using Genetic Algorithm. Proceedings - 2017 14th Conference on Computer and Robot Vision, CRV 2017 2018-January (2018), 337\u2013344. https:\/\/doi.org\/10.1109\/CRV.2017.33"},{"key":"e_1_3_2_2_44_1","doi-asserted-by":"publisher","DOI":"10.1145\/2980179.2982422"},{"key":"e_1_3_2_2_45_1","doi-asserted-by":"publisher","DOI":"10.1016\/S0042-6989(01)00250-4"},{"key":"e_1_3_2_2_46_1","volume-title":"Applications of Augmented Vision Head-Mounted Systems in Vision Rehabilitation.Journal of the Society for Information Display 15, 12","author":"Peli Eli","year":"2007","unstructured":"Eli Peli, Gang Luo, Alex Bowers, and Noa Rensing. 2007. 
Applications of Augmented Vision Head-Mounted Systems in Vision Rehabilitation. Journal of the Society for Information Display 15, 12 (2007), 1037\u20131045. https:\/\/doi.org\/10.1889\/1.2825088 arxiv:NIHMS150003"},{"key":"e_1_3_2_2_47_1","doi-asserted-by":"crossref","unstructured":"E. Ragan, C. Wilkes, D.\u00a0A. Bowman, and T. Hollerer. 2009. Simulation of Augmented Reality Systems in Purely Virtual Environments. In 2009 IEEE VR. IEEE, New York, NY, USA, 287\u2013288.","DOI":"10.1109\/VR.2009.4811058"},{"key":"e_1_3_2_2_48_1","volume-title":"5555. Pervasive Augmented Reality - Technology and Ethics","author":"Regenbrecht H.","year":"2022","unstructured":"H. Regenbrecht, S. Zwanenburg, and T. Langlotz. 5555. Pervasive Augmented Reality - Technology and Ethics. IEEE Pervasive Computing 1, 1 (mar 5555), 1\u20138. https:\/\/doi.org\/10.1109\/MPRV.2022.3152993"},{"key":"e_1_3_2_2_49_1","volume-title":"MUM\u201918","author":"Rothe Sylvia","unstructured":"Sylvia Rothe, Felix Althammer, and Mohamed Khamis. 2018. GazeRecall: Using Gaze Direction to Increase Recall of Details in Cinematic Virtual Reality. In MUM\u201918. Association for Computing Machinery, New York, NY, USA, 115\u2013119. 
https:\/\/doi.org\/10.1145\/3282894.3282903"},{"key":"e_1_3_2_2_50_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISMAR.2008.4637331"},{"key":"e_1_3_2_2_51_1","volume-title":"Video Saliency Modulation in the HSI Color Space for Drawing Gaze. PSIVT","author":"Shi Tao","year":"2015","unstructured":"Tao Shi and Akihiro Sugimoto. 2015. Video Saliency Modulation in the HSI Color Space for Drawing Gaze. PSIVT 8333, July 2015 (2015), 206\u2013219. https:\/\/doi.org\/10.1007\/978-3-642-53842-1"},{"key":"e_1_3_2_2_52_1","doi-asserted-by":"publisher","DOI":"10.1145\/2168556.2168568"},{"key":"e_1_3_2_2_53_1","doi-asserted-by":"publisher","DOI":"10.1145\/1080402.1080445"},{"key":"e_1_3_2_2_54_1","volume-title":"Computational Glasses: Vision augmentations using computational near-eye optics and displays. In 2019 IEEE ISMAR-Adjunct","author":"Sutton Jonathan","year":"2019","unstructured":"Jonathan Sutton, Tobias Langlotz, and Yuta Itoh. 2019. Computational Glasses: Vision augmentations using computational near-eye optics and displays. In 2019 IEEE ISMAR-Adjunct. IEEE, New York, U.S., 438\u2013442."},{"key":"e_1_3_2_2_55_1","volume-title":"2017 IEEE Symposium Series on Computational Intelligence, SSCI 2017 - Proceedings 2018-January(2018)","author":"Suzuki Natsumi","year":"2018","unstructured":"Natsumi Suzuki and Yohei Nakada. 2018. Effects selection technique for improving visual attraction via visual saliency map. 2017 IEEE Symposium Series on Computational Intelligence, SSCI 2017 - Proceedings 2018-January (2018), 1\u20138. https:\/\/doi.org\/10.1109\/SSCI.2017.8280808"},{"key":"#cr-split#-e_1_3_2_2_56_1.1","doi-asserted-by":"crossref","unstructured":"Hironori Takimoto, Syuhei Hitomi, Hitoshi Yamauchi, Mitsuyoshi Kishihara, and Kensuke Okubo. 2017. Image modification based on spatial frequency components for visual attention retargeting. IEICE Transactions on Information and Systems E100D, 6 (2017), 1339-1349. https:\/\/doi.org\/10.1587\/transinf.2016EDP7413","DOI":"10.1587\/transinf.2016EDP7413"},{"key":"#cr-split#-e_1_3_2_2_56_1.2","doi-asserted-by":"crossref","unstructured":"Hironori Takimoto, Syuhei Hitomi, Hitoshi Yamauchi, Mitsuyoshi Kishihara, and Kensuke Okubo. 2017. Image modification based on spatial frequency components for visual attention retargeting. IEICE Transactions on Information and Systems E100D, 6 (2017), 1339-1349. https:\/\/doi.org\/10.1587\/transinf.2016EDP7413","DOI":"10.1587\/transinf.2016EDP7413"},{"key":"e_1_3_2_2_57_1","volume-title":"Image modification based on a visual saliency map for guiding visual attention. 
IEICE Transactions on Information and Systems E98D, 11(2015)","author":"Takimoto Hironori","year":"2015","unstructured":"Hironori Takimoto, Tatsuhiko Kokui, Hitoshi Yamauchi, Mitsuyoshi Kishihara, and Kensuke Okubo. 2015. Image modification based on a visual saliency map for guiding visual attention. IEICE Transactions on Information and Systems E98D, 11 (2015), 1967\u20131975. https:\/\/doi.org\/10.1587\/transinf.2015EDP7087"},{"key":"e_1_3_2_2_58_1","doi-asserted-by":"publisher","DOI":"10.1002\/tee.22874"},{"key":"e_1_3_2_2_59_1","doi-asserted-by":"publisher","DOI":"10.1037\/0033-295X.113.4.766"},{"key":"e_1_3_2_2_60_1","doi-asserted-by":"publisher","DOI":"10.1016\/0010-0285(80)90005-5"},{"key":"e_1_3_2_2_61_1","doi-asserted-by":"publisher","DOI":"10.1162\/105474602317473213"},{"key":"e_1_3_2_2_62_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2020.2973496"},{"key":"e_1_3_2_2_63_1","doi-asserted-by":"publisher","DOI":"10.1145\/1978942.1979158"},{"key":"e_1_3_2_2_64_1","doi-asserted-by":"publisher","DOI":"10.1111\/cgf.13141"},{"key":"e_1_3_2_2_65_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2014.2346352"},{"key":"e_1_3_2_2_66_1","doi-asserted-by":"publisher","DOI":"10.1109\/TIP.2017.2754941"},{"key":"e_1_3_2_2_67_1","volume-title":"What attributes guide the deployment of visual attention and how do they do it?Nature Reviews Neuroscience 5, 6 (June","author":"Wolfe M.","year":"2004","unstructured":"Jeremy\u00a0M. Wolfe and Todd\u00a0S. Horowitz. 2004. What attributes guide the deployment of visual attention and how do they do it? Nature Reviews Neuroscience 5, 6 (June 2004), 495\u2013501. 
https:\/\/doi.org\/10.1038\/nrn1411"}],"event":{"name":"UIST '22: The 35th Annual ACM Symposium on User Interface Software and Technology","location":"Bend OR USA","acronym":"UIST '22","sponsor":["SIGGRAPH ACM Special Interest Group on Computer Graphics and Interactive Techniques","SIGCHI ACM Special Interest Group on Computer-Human Interaction"]},"container-title":["Proceedings of the 35th Annual ACM Symposium on User Interface Software and Technology"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3526113.3545633","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3526113.3545633","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T19:00:23Z","timestamp":1750186823000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3526113.3545633"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2022,10,28]]},"references-count":69,"alternative-id":["10.1145\/3526113.3545633","10.1145\/3526113"],"URL":"https:\/\/doi.org\/10.1145\/3526113.3545633","relation":{},"subject":[],"published":{"date-parts":[[2022,10,28]]},"assertion":[{"value":"2022-10-28","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}