{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,1,10]],"date-time":"2026-01-10T02:24:30Z","timestamp":1768011870061,"version":"3.49.0"},"reference-count":39,"publisher":"Association for Computing Machinery (ACM)","issue":"2","license":[{"start":{"date-parts":[[2023,6,24]],"date-time":"2023-06-24T00:00:00Z","timestamp":1687564800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Access. Comput."],"published-print":{"date-parts":[[2023,6,30]]},"abstract":"<jats:p>\n            Automating the generation of audio descriptions (AD) for blind and visually impaired (BVI) people is a difficult task, since it has several challenges involved, such as: identifying gaps in dialogues; describing the essential elements; summarizing and fitting the descriptions into the dialogue gaps; generating an AD narration track, and synchronizing it with the main soundtrack. In our previous work (Campos et\u00a0al.\u00a0[\n            <jats:xref ref-type=\"bibr\">6<\/jats:xref>\n            ]), we propose a solution for automatic AD script generation, named CineAD, which uses the movie\u2019s script as a basis for the AD generation. This article proposes extending this solution to complement the information extracted from the script and reduce its dependency based on the classification of visual information from the video. To assess the viability of the proposed solution, we implemented a proof of concept of the solution and evaluated it with 11 blind users. The results showed that the solution could generate a more succinct and objective AD but with a similar users\u2019 level of understanding compared to our previous work. Thus, the solution can provide relevant information to blind users using less video time for descriptions.\n          <\/jats:p>","DOI":"10.1145\/3590955","type":"journal-article","created":{"date-parts":[[2023,4,14]],"date-time":"2023-04-14T13:59:10Z","timestamp":1681480750000},"page":"1-28","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":9,"title":["Machine Generation of Audio Description for Blind and Visually Impaired People"],"prefix":"10.1145","volume":"16","author":[{"ORCID":"https:\/\/orcid.org\/0000-0002-3874-1221","authenticated-orcid":false,"given":"Virg\u00ednia P.","family":"Campos","sequence":"first","affiliation":[{"name":"Federal University of Rio Grande do Norte, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7735-5630","authenticated-orcid":false,"given":"Luiz M. G.","family":"Gon\u00e7alves","sequence":"additional","affiliation":[{"name":"Federal University of Rio Grande do Norte, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-3457-3886","authenticated-orcid":false,"given":"Wesnydy L.","family":"Ribeiro","sequence":"additional","affiliation":[{"name":"Federal University of Paraiba, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-5953-5435","authenticated-orcid":false,"given":"Tiago M. 
U.","family":"Ara\u00fajo","sequence":"additional","affiliation":[{"name":"Federal University of Paraiba, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-6608-4900","authenticated-orcid":false,"given":"Tha\u00eds G.","family":"Do Rego","sequence":"additional","affiliation":[{"name":"Federal University of Paraiba, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3807-1512","authenticated-orcid":false,"given":"Pedro H. V.","family":"Figueiredo","sequence":"additional","affiliation":[{"name":"Federal University of Paraiba, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0009-0000-9287-7114","authenticated-orcid":false,"given":"Suanny F. S.","family":"Vieira","sequence":"additional","affiliation":[{"name":"Federal University of Paraiba, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0009-0003-6939-444X","authenticated-orcid":false,"given":"Thiago F. S.","family":"Costa","sequence":"additional","affiliation":[{"name":"Federal University of Paraiba, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0009-0004-1706-8690","authenticated-orcid":false,"given":"Caio C.","family":"Moraes","sequence":"additional","affiliation":[{"name":"Federal University of Paraiba, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0009-0001-0839-5591","authenticated-orcid":false,"given":"Alexandre C. S.","family":"Cruz","sequence":"additional","affiliation":[{"name":"Federal University of Paraiba, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0000-0002-7158-0589","authenticated-orcid":false,"given":"Felipe A.","family":"Ara\u00fajo","sequence":"additional","affiliation":[{"name":"Federal University of Paraiba, Brazil"}]},{"ORCID":"https:\/\/orcid.org\/0000-0001-5834-5237","authenticated-orcid":false,"given":"Guido L.","family":"Souza Filho","sequence":"additional","affiliation":[{"name":"Federal University of Paraiba, Brazil"}]}],"member":"320","published-online":{"date-parts":[[2023,6,24]]},"reference":[{"key":"e_1_3_3_2_2","volume-title":"The Audio Description Project","author":"Blind ACB\u2014American Council of the","year":"2019","unstructured":"ACB\u2014American Council of the Blind. 2019. The Audio Description Project. Retrieved from https:\/\/www.acb.org\/adp\/ad.html."},{"key":"e_1_3_3_3_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.jvcir.2017.11.022"},{"key":"e_1_3_3_4_2","article-title":"Global prevalence of blindness and distance and near vision impairment in 2020: Progress towards the vision 2020 targets and what the future holds","volume":"61","author":"Adelson S. Flaxman, P. Briant, M. Bottone, T. Vos, K. Naidoo, T. Braithwaite, M. Cicinelli, J. Jonas, R. R. Bourne, and J.","year":"2020","unstructured":"S. Flaxman, P. Briant, M. Bottone, T. Vos, K. Naidoo, T. Braithwaite, M. Cicinelli, J. Jonas, R. R. Bourne, and J. Adelson. 2020. Global prevalence of blindness and distance and near vision impairment in 2020: Progress towards the vision 2020 targets and what the future holds. Investig. Ophthalm. Vis. Sci. 61 (2020).","journal-title":"Investig. Ophthalm. Vis. 
Sci."},{"key":"e_1_3_3_5_2","doi-asserted-by":"publisher","DOI":"10.4324\/9781003052968-9"},{"key":"e_1_3_3_6_2","doi-asserted-by":"publisher","DOI":"10.1109\/EATIS.2016.7520099"},{"key":"e_1_3_3_7_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10209-018-0634-4"},{"key":"e_1_3_3_8_2","doi-asserted-by":"publisher","DOI":"10.1145\/1639642.1639685"},{"key":"e_1_3_3_9_2","doi-asserted-by":"crossref","first-page":"269","DOI":"10.1007\/978-3-319-54407-6_18","volume-title":"Computer Vision\u2014ACCV 2016 Workshops","author":"Chen Tseng-Hung","year":"2017","unstructured":"Tseng-Hung Chen, Kuo-Hao Zeng, Wan-Ting Hsu, and Min Sun. 2017. Video captioning via sentence augmentation and spatio-temporal attention. In Computer Vision\u2014ACCV 2016 Workshops, Chu-Song Chen, Jiwen Lu, and Kai-Kuang Ma, (Eds.). Springer International Publishing, Cham, 269\u2013286."},{"key":"e_1_3_3_10_2","doi-asserted-by":"publisher","DOI":"10.1109\/CVPR.2009.5206848"},{"key":"e_1_3_3_11_2","doi-asserted-by":"publisher","DOI":"10.1145\/2976796.2976867"},{"key":"e_1_3_3_12_2","article-title":"Automated audio captioning with recurrent neural networks","volume":"1706","author":"Drossos Konstantinos","year":"2017","unstructured":"Konstantinos Drossos, Sharath Adavanne, and Tuomas Virtanen. 2017. Automated audio captioning with recurrent neural networks. CoRR abs\/1706.10006 (2017).","journal-title":"CoRR"},{"key":"e_1_3_3_13_2","doi-asserted-by":"publisher","DOI":"10.1145\/2461121.2461130"},{"key":"e_1_3_3_14_2","volume-title":"Audio description and technologies: Study on the semi-automatisation of the translation and voicing of audio descriptions","author":"Fern\u00e1ndez-Torn\u00e9 Anna","year":"2016","unstructured":"Anna Fern\u00e1ndez-Torn\u00e9. 2016. Audio description and technologies: Study on the semi-automatisation of the translation and voicing of audio descriptions. Ph.D. Dissertation. Universitat Aut\u00f2noma de Barcelona, Spain."},{"key":"e_1_3_3_15_2","doi-asserted-by":"publisher","DOI":"10.1109\/ICCISci.2012.6297184"},{"key":"e_1_3_3_16_2","doi-asserted-by":"publisher","DOI":"10.1007\/s10209-008-0141-0"},{"key":"e_1_3_3_17_2","volume-title":"Un Corpus de Cine. Fundamentos Teoricos de la Audiodescripcion (A Corpus of Cinema. Theoretical Foundations of Audio Description)","author":"Hurtado C. J.","year":"2010","unstructured":"C. J. Hurtado, A. Rodr\u00edguez, and C. Seibel. 2010. Un Corpus de Cine. Fundamentos Teoricos de la Audiodescripcion (A Corpus of Cinema. Theoretical Foundations of Audio Description). Universidad de Granada, Proyecto Tracce. 13\u201356."},{"key":"e_1_3_3_18_2","doi-asserted-by":"crossref","first-page":"220","DOI":"10.1007\/978-3-319-94277-3_36","volume-title":"Computers Helping People with Special Needs","author":"Ichiki Manon","year":"2018","unstructured":"Manon Ichiki, Toshihiro Shimizu, Atsushi Imai, Tohru Takagi, Mamoru Iwabuchi, Kiyoshi Kurihara, Taro Miyazaki, Tadashi Kumano, Hiroyuki Kaneko, Shoei Sato, Nobumasa Seiyama, Yuko Yamanouchi, and Hideki Sumiyoshi. 2018. Study on automated audio descriptions overlapping live television commentary. In Computers Helping People with Special Needs, Klaus Miesenberger and Georgios Kouroupetroglou (Eds.). 
Springer International Publishing, Cham, 220\u2013224."},{"key":"e_1_3_3_19_2","doi-asserted-by":"publisher","DOI":"10.1109\/COMAPP.2018.8460239"},{"key":"e_1_3_3_20_2","doi-asserted-by":"publisher","DOI":"10.1145\/1805986.1806025"},{"key":"e_1_3_3_21_2","doi-asserted-by":"publisher","DOI":"10.1145\/1878803.1878833"},{"key":"e_1_3_3_22_2","volume-title":"The Semi-automatic Generation of Audio Description from Screenplays, Technical Report CS-06-05","author":"Lakritz J.","year":"2002","unstructured":"J. Lakritz and A. Salway. 2002. The Semi-automatic Generation of Audio Description from Screenplays, Technical Report CS-06-05. Dept. of Computing, University of Surrey."},{"key":"e_1_3_3_23_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-319-10602-1_48"},{"key":"e_1_3_3_24_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.cviu.2017.04.013"},{"key":"e_1_3_3_25_2","doi-asserted-by":"publisher","DOI":"10.1177\/0264619618794750"},{"key":"e_1_3_3_26_2","volume-title":"REST API Design Rulebook","author":"Masse Mark","year":"2011","unstructured":"Mark Masse. 2011. REST API Design Rulebook. O\u2019Reilly Media, Sebastopol."},{"key":"e_1_3_3_27_2","doi-asserted-by":"publisher","DOI":"10.1007\/978-3-030-42105-2_12"},{"key":"e_1_3_3_28_2","doi-asserted-by":"publisher","DOI":"10.47476\/jat.v3i2.2020.139"},{"key":"e_1_3_3_29_2","unstructured":"Khoa Nguyen Konstantinos Drossos and Tuomas Virtanen. 2020. Temporal sub-sampling of audio feature sequences for automated audio captioning. arXiv preprint arXiv:2007.02676."},{"key":"e_1_3_3_30_2","first-page":"191","volume-title":"Audiodescricao como Tecnologia Assistiva para o Acesso ao Conhecimento por Pessoas Cegas. (Audio Description as Assistive Technology for Access to Knowledge for the Blind)","author":"Nunes E. V.","year":"2011","unstructured":"E. V. Nunes, F. O. Machado, and T. Vanzin. 2011. Audiodescricao como Tecnologia Assistiva para o Acesso ao Conhecimento por Pessoas Cegas. (Audio Description as Assistive Technology for Access to Knowledge for the Blind). Pandion, Florianopolis, 191\u2013232."},{"key":"e_1_3_3_31_2","doi-asserted-by":"publisher","DOI":"10.1145\/3019943.3019965"},{"key":"e_1_3_3_32_2","doi-asserted-by":"publisher","DOI":"10.1109\/NITC.2017.8285657"},{"key":"e_1_3_3_33_2","article-title":"YOLO9000: Better, faster, stronger","volume":"1612","author":"Redmon Joseph","year":"2016","unstructured":"Joseph Redmon and Ali Farhadi. 2016. YOLO9000: Better, faster, stronger. CoRR abs\/1612.08242 (2016).","journal-title":"CoRR"},{"key":"e_1_3_3_34_2","doi-asserted-by":"crossref","first-page":"505","DOI":"10.1007\/978-3-319-40238-3_48","volume-title":"Universal Access in Human-Computer Interaction. Users and Context Diversity","author":"Fa\u00e7anha Agebson Rocha","year":"2016","unstructured":"Agebson Rocha Fa\u00e7anha, Adonias Caetano de Oliveira, Marcos Vinicius de Andrade Lima, Windson Viana, and Jaime S\u00e1nchez. 2016. Audio description of videos for people with visual disabilities. In Universal Access in Human-Computer Interaction. Users and Context Diversity, Margherita Antona and Constantine Stephanidis (Eds.). Springer International Publishing, Cham, 505\u2013515."},{"key":"e_1_3_3_35_2","first-page":"142","article-title":"Text-to-speech audio description: Towards wider availability of AD","volume":"15","author":"Szarkowska A.","year":"2011","unstructured":"A. Szarkowska. 2011. Text-to-speech audio description: Towards wider availability of AD. J. Spec. Transl. 15 (2011), 142\u2013162.","journal-title":"J. Spec. 
Transl."},{"key":"e_1_3_3_36_2","article-title":"Going deeper with convolutions","volume":"1409","author":"Szegedy Christian","year":"2014","unstructured":"Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott E. Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. 2014. Going deeper with convolutions. CoRR abs\/1409.4842 (2014).","journal-title":"CoRR"},{"key":"e_1_3_3_37_2","unstructured":"Asociaci\u00f3n Espa\u00f1ola de Normalizaci\u00f3n. UNE-153020. 2005. Audiodescripci\u00f3n para Personas con Discapacidad Visual. Requisitos para la audiodescripci\u00f3n y elaboraci\u00f3n de audiogu\u00edas (Audio description for visually impaired people. Guidelines for audio description procedures and for the preparation of audio guides). Technical Report. AENOR. Available in: www.une.org\/encuentra-tu-norma\/busca-tu-norma\/norma?c=N0032787."},{"key":"e_1_3_3_38_2","volume-title":"Blindness and Vision Impairment","author":"Organization WHO - World Health","year":"2019","unstructured":"WHO - World Health Organization. 2019. Blindness and Vision Impairment. Retrieved from http:\/\/www.who.int\/news-room\/fact-sheets\/detailblindness-and-visual-impairment."},{"key":"e_1_3_3_39_2","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2019.05.027"},{"key":"e_1_3_3_40_2","doi-asserted-by":"publisher","DOI":"10.1016\/S1005-8885(16)60037-7"}],"container-title":["ACM Transactions on Accessible Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3590955","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3590955","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T17:51:22Z","timestamp":1750182682000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3590955"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2023,6,24]]},"references-count":39,"journal-issue":{"issue":"2","published-print":{"date-parts":[[2023,6,30]]}},"alternative-id":["10.1145\/3590955"],"URL":"https:\/\/doi.org\/10.1145\/3590955","relation":{},"ISSN":["1936-7228","1936-7236"],"issn-type":[{"value":"1936-7228","type":"print"},{"value":"1936-7236","type":"electronic"}],"subject":[],"published":{"date-parts":[[2023,6,24]]},"assertion":[{"value":"2021-07-19","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-03-21","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2023-06-24","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}