{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,11,8]],"date-time":"2025-11-08T13:09:44Z","timestamp":1762607384055,"version":"3.41.0"},"publisher-location":"New York, NY, USA","reference-count":45,"publisher":"ACM","license":[{"start":{"date-parts":[[2018,12,13]],"date-time":"2018-12-13T00:00:00Z","timestamp":1544659200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"funder":[{"DOI":"10.13039\/501100001602","name":"Science Foundation Ireland","doi-asserted-by":"publisher","award":["15\/RP\/2776"],"award-info":[{"award-number":["15\/RP\/2776"]}],"id":[{"id":"10.13039\/501100001602","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":[],"published-print":{"date-parts":[[2018,12,13]]},"DOI":"10.1145\/3278471.3278472","type":"proceedings-article","created":{"date-parts":[[2018,11,27]],"date-time":"2018-11-27T13:19:22Z","timestamp":1543324762000},"page":"1-10","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":31,"title":["Director's cut"],"prefix":"10.1145","author":[{"given":"Sebastian","family":"Knorr","sequence":"first","affiliation":[{"name":"The University of Dublin, Ireland"}]},{"given":"Cagri","family":"Ozcinar","sequence":"additional","affiliation":[{"name":"The University of Dublin, Ireland"}]},{"given":"Colm O","family":"Fearghail","sequence":"additional","affiliation":[{"name":"The University of Dublin, Ireland"}]},{"given":"Aljosa","family":"Smolic","sequence":"additional","affiliation":[{"name":"The University of Dublin, Ireland"}]}],"member":"320","published-online":{"date-parts":[[2018,12,13]]},"reference":[{"volume-title":"WebVR: Bringing Virtual Reality to the Web. https:\/\/webvr.info\/. (Feb","year":"2017","key":"e_1_3_2_1_1_1","unstructured":"2017. 
WebVR: Bringing Virtual Reality to the Web. https:\/\/webvr.info\/. (Feb 2017)."},{"key":"e_1_3_2_1_2_1","doi-asserted-by":"publisher","DOI":"10.1109\/TVCG.2012.276"},{"key":"e_1_3_2_1_3_1","doi-asserted-by":"crossref","unstructured":"Paulo Bala, Mara Dionisio, Valentina Nisi, and Nuno Nunes. 2016. IVRUX: A tool for analyzing immersive narratives in virtual reality. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Frank Nack and Andrew S. Gordon (Eds.). Springer International Publishing, Cham, 3--11. arXiv:9780201398298","DOI":"10.1007\/978-3-319-48279-8_1"},{"key":"e_1_3_2_1_4_1","volume-title":"Headset Attentional Synchrony: Tracking the Gaze of Viewers Watching Narrative Virtual Reality. Media Practice and Education (May","author":"Bender Stuart Marshall","year":"2018","unstructured":"Stuart Marshall Bender. 2018. Headset Attentional Synchrony: Tracking the Gaze of Viewers Watching Narrative Virtual Reality. Media Practice and Education (May 2018), 1--20."},{"key":"e_1_3_2_1_5_1","volume-title":"State-of-the-Art of Visualization for Eye Tracking Data. 
In Eurographics Conference on Visualization (EuroVis).","author":"Blascheck Tanja","year":"2014","unstructured":"Tanja Blascheck, Kuno Kurzhals, Michael Raschke, Michael Burch, Daniel Weiskopf, and Thomas Ertl. 2014. State-of-the-Art of Visualization for Eye Tracking Data. In Eurographics Conference on Visualization (EuroVis)."},{"key":"e_1_3_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cviu.2009.09.003"},{"volume-title":"Computer Vision --- ACCV'98, Roland Chin and Ting-Chuen Pong (Eds.)","author":"Bolle Ruud","key":"e_1_3_2_1_7_1","unstructured":"Ruud Bolle, Yiannis Aloimonos, and Cornelia Ferm\u00fcller. 1997. Toward motion picture grammars. In Computer Vision --- ACCV'98, Roland Chin and Ting-Chuen Pong (Eds.). Springer Berlin Heidelberg, Berlin, Heidelberg, 283--290."},{"key":"e_1_3_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/TPAMI.2012.89"},{"key":"e_1_3_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1016\/0022-1236(92)90072-Q"},{"key":"e_1_3_2_1_10_1","unstructured":"Ricardo Cabello et al. 2017. JavaScript 3D library. https:\/\/threejs.org\/. https:\/\/github.com\/mrdoob\/three.js\/. (Feb 2017)."},{"key":"e_1_3_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.visres.2011.04.012"},{"key":"e_1_3_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1145\/3083187.3083215"},{"key":"e_1_3_2_1_13_1","volume-title":"Gilchrist","author":"Cristino Filipe","year":"2010","unstructured":"Filipe Cristino, Sebastiaan Math\u00f4t, Jan Theeuwes, and Iain D. Gilchrist. 2010. Scan-Match: A novel method for comparing fixation sequences. Behavior Research Methods 42, 3 (01 Aug 2010), 692--700."},{"key":"e_1_3_2_1_14_1","doi-asserted-by":"publisher","DOI":"10.1109\/IC3D.2017.8251907"},{"volume-title":"Attention guidance for immersive video content in head-mounted displays. In 2017 IEEE Virtual Reality (VR)","author":"Danieau Fabien","key":"e_1_3_2_1_15_1","unstructured":"Fabien Danieau, Antoine Guillo, and Renaud Dore. 2017. Attention guidance for immersive video content in head-mounted displays. In 2017 IEEE Virtual Reality (VR). IEEE, Los Angeles, CA, USA, 205--206."},{"key":"e_1_3_2_1_16_1","volume-title":"Proceedings of the 9th International Conference on Quality of Multimedia Experience (QoMEX). IEEE","author":"Abreu Ana De","year":"2017","unstructured":"Ana De Abreu, Cagri Ozcinar, and Aljosa Smolic. 2017. Look around you: Saliency maps for omnidirectional images in VR applications. In Proceedings of the 9th International Conference on Quality of Multimedia Experience (QoMEX). IEEE, Erfurt, Germany, 1--6."},{"key":"e_1_3_2_1_17_1","volume-title":"It depends on how you look at it: Scanpath comparison in multiple dimensions with MultiMatch, a vector-based approach. Behavior Research Methods 44, 4 (01","author":"Dewhurst Richard","year":"2012","unstructured":"Richard Dewhurst, Marcus Nystr\u00f6m, Halszka Jarodzka, Tom Foulsham, Roger Johansson, and Kenneth Holmqvist. 2012. It depends on how you look at it: Scanpath comparison in multiple dimensions with MultiMatch, a vector-based approach. Behavior Research Methods 44, 4 (01 Dec 2012), 1079--1100."},{"key":"e_1_3_2_1_18_1","unstructured":"Justin Lin (Director). 2015. Help (2015). http:\/\/www.imdb.com\/title\/tt4794550\/. (2015)."},{"key":"e_1_3_2_1_19_1","article-title":"Functionally sequenced scanpath similarity method (FuncSim): Comparing and evaluating scanpath similarity based on a task's inherent sequence of functional (action) units","volume":"6","author":"Foerster Rebecca M","year":"2013","unstructured":"Rebecca M Foerster and Werner X Schneider. 2013. Functionally sequenced scanpath similarity method (FuncSim): Comparing and evaluating scanpath similarity based on a task's inherent sequence of functional (action) units. Journal of Eye Movement Research 6, 5 (2013).","journal-title":"Journal of Eye Movement Research"},{"key":"e_1_3_2_1_20_1","doi-asserted-by":"publisher","DOI":"10.1145\/2984511.2984539"},{"key":"e_1_3_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1109\/MCG.2008.113"},{"key":"e_1_3_2_1_22_1","volume-title":"Computational models: Bottom-up and top-down aspects. The Oxford Handbook of Attention","author":"Itti Laurent","year":"2014","unstructured":"Laurent Itti and Ali Borji. 2014. Computational models: Bottom-up and top-down aspects. The Oxford Handbook of Attention (2014), 1--20."},{"key":"e_1_3_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1145\/3139131.3139166"},{"key":"e_1_3_2_1_24_1","doi-asserted-by":"publisher","DOI":"10.1145\/2578153.2578206"},{"key":"e_1_3_2_1_25_1","volume-title":"Segmentation in the perception and memory of events. Trends in cognitive sciences 12, 2","author":"Kurby Christopher A","year":"2008","unstructured":"Christopher A Kurby and Jeffrey M Zacks. 2008. Segmentation in the perception and memory of events. Trends in cognitive sciences 12, 2 (2008), 72--79."},{"key":"e_1_3_2_1_26_1","first-page":"8","article-title":"Binary codes capable of correcting deletions, insertions, and reversals","volume":"10","author":"Levenshtein Vladimir I","year":"1966","unstructured":"Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics Doklady 10, 8 (Feb 1966), 707--710.","journal-title":"Soviet Physics Doklady"},{"key":"e_1_3_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/3025453.3025757"},{"key":"e_1_3_2_1_28_1","volume-title":"Proceedings of the Workshop on Eye Tracking and Visualization (ETVIS)","volume":"1","author":"L\u00f6we Thomas","year":"2015","unstructured":"Thomas L\u00f6we, Michael Stengel, Emmy-Charlotte F\u00f6rster, Steve Grogorick, and Marcus Magnor. 2015. Visualization and analysis of head movement and gaze data for immersive video in head-mounted displays. In Proceedings of the Workshop on Eye Tracking and Visualization (ETVIS), vol. 1."},{"key":"e_1_3_2_1_29_1","article-title":"Directing for Cinematic Virtual Reality: how traditional film director's craft applies to immersive environments and notions of presence","volume":"18","author":"Mateer John William","year":"2017","unstructured":"John William Mateer. 2017. Directing for Cinematic Virtual Reality: how traditional film director's craft applies to immersive environments and notions of presence. Journal of Media Practice (author-produced version) 18, 1 (5 2017), 14--25.","journal-title":"Journal of Media Practice (author-produced version)"},{"key":"e_1_3_2_1_30_1","volume-title":"SalNet360: Saliency maps for omni-directional images with CNN","author":"Monroy Rafael","year":"2018","unstructured":"Rafael Monroy, Sebastian Lutz, Tejo Chalasani, and Aljosa Smolic. 2018. SalNet360: Saliency maps for omni-directional images with CNN. Signal Processing: Image Communication (2018)."},{"key":"e_1_3_2_1_31_1","doi-asserted-by":"publisher","DOI":"10.1145\/3123266.3123414"},{"key":"e_1_3_2_1_32_1","doi-asserted-by":"publisher","DOI":"10.1145\/2993369.2993405"},{"key":"e_1_3_2_1_33_1","unstructured":"University of Nantes and Technicolor. 2017. Salient360!: Visual attention modeling for 360\u00b0 images grand challenge. (2017). http:\/\/www.icme2017.org\/grand-challenges\/"},{"key":"e_1_3_2_1_34_1","doi-asserted-by":"publisher","DOI":"10.1109\/ISM.2017.17"},{"key":"e_1_3_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICIP.2017.8296667"},{"key":"e_1_3_2_1_36_1","volume-title":"Visual Attention in Omnidirectional Video for Virtual Reality Applications. In 10th International Conference on Quality of Multimedia Experience (QoMEX) (2018-05-29)","author":"Ozcinar Cagri","year":"2018","unstructured":"Cagri Ozcinar and Aljosa Smolic. 2018. Visual Attention in Omnidirectional Video for Virtual Reality Applications. In 10th International Conference on Quality of Multimedia Experience (QoMEX) (2018-05-29)."},{"key":"e_1_3_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1145\/3126594.3126636"},{"key":"e_1_3_2_1_38_1","doi-asserted-by":"publisher","DOI":"10.1145\/3083187.3083218"},{"key":"e_1_3_2_1_39_1","doi-asserted-by":"publisher","DOI":"10.1145\/3072959.3073668"},{"key":"e_1_3_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1049\/ibc.2016.0029"},{"key":"e_1_3_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cub.2010.02.032"},{"key":"e_1_3_2_1_42_1","first-page":"1","article-title":"Drowning in immersion","volume":"98","author":"Smith Shamus","year":"1998","unstructured":"Shamus Smith, Tim Marsh, David Duke, and Peter Wright. 1998. Drowning in immersion. Proceedings of UK-VRSIG 98 (1998), 1--9.","journal-title":"Proceedings of UK-VRSIG"},{"key":"e_1_3_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1109\/ICMEW.2017.8026231"},{"key":"e_1_3_2_1_45_1","doi-asserted-by":"publisher","DOI":"10.1109\/PCS.2016.7906378"},{"key":"e_1_3_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1145\/3077548.3077559"}],"event":{"name":"CVMP '18: European Conference on Visual Media Production","sponsor":["SIGGRAPH ACM Special Interest Group on Computer Graphics and Interactive Techniques"],"location":"London United Kingdom","acronym":"CVMP '18"},"container-title":["Proceedings of the 15th ACM SIGGRAPH European Conference on Visual Media 
Production"],"original-title":[],"link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3278471.3278472","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3278471.3278472","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,18]],"date-time":"2025-06-18T02:07:51Z","timestamp":1750212471000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3278471.3278472"}},"subtitle":["a combined dataset for visual attention analysis in cinematic VR content"],"short-title":[],"issued":{"date-parts":[[2018,12,13]]},"references-count":45,"alternative-id":["10.1145\/3278471.3278472","10.1145\/3278471"],"URL":"https:\/\/doi.org\/10.1145\/3278471.3278472","relation":{},"subject":[],"published":{"date-parts":[[2018,12,13]]},"assertion":[{"value":"2018-12-13","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}