{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"institution":[{"id":[{"id":"https:\/\/ror.org\/03mb6wj31","id-type":"ROR","asserted-by":"publisher"},{"id":"https:\/\/www.isni.org\/000000041937028X","id-type":"ISNI","asserted-by":"publisher"},{"id":"https:\/\/www.wikidata.org\/entity\/Q1640731","id-type":"wikidata","asserted-by":"publisher"}],"name":"Universitat Polit\u00e8cnica de Catalunya","acronym":["UPC"]}],"indexed":{"date-parts":[[2026,1,19]],"date-time":"2026-01-19T20:29:23Z","timestamp":1768854563801,"version":"3.49.0"},"reference-count":0,"publisher":"Universitat Polit\u00e8cnica de Catalunya","license":[{"content-version":"vor","delay-in-days":0,"URL":"http:\/\/creativecommons.org\/licenses\/by-nc-sa\/4.0\/"}],"content-domain":{"domain":[],"crossmark-restriction":false},"short-container-title":[],"abstract":"<jats:p>Signed languages are complete and natural languages used as the first or preferred mode of communication by millions of people worldwide. However, they, unfortunately, continue to be marginalized languages. Designing, building, and evaluating models that work on sign languages presents compelling research challenges and requires interdisciplinary and collaborative efforts. The recent advances in Machine Learning (ML) and Artificial Intelligence (AI) has the power to enable better accessibility to sign language users and narrow down the existing communication barrier between the Deaf community and non-sign language users. However, recent AI-powered technologies still do not account for sign language in their pipelines. This is mainly because sign languages are visual languages, that use manual and non-manual features to convey information, and do not have a standard written form. 
Thus, the goal of this thesis is to contribute to the development of new technologies that account for sign language by creating large-scale multimodal resources suitable for training modern data-hungry machine learning models, and by developing automatic systems for computer vision tasks that aim at a better visual understanding of sign languages.\r\nIn Part I, we introduce the How2Sign dataset, a large-scale collection of multimodal and multiview sign language videos in American Sign Language. In Part II, we contribute to the development of technologies that account for sign languages: in Chapter 4, we present Spot-Align, a framework based on sign spotting methods that automatically annotates sign instances in continuous sign language. We further present the benefits of this framework and establish a baseline for the sign language recognition task on the How2Sign dataset. In Chapter 5, we leverage the different annotations and modalities of How2Sign to explore sign language video retrieval by learning cross-modal embeddings. Finally, in Chapter 6, we explore sign language video generation by applying Generative Adversarial Networks to the sign language domain and assess if and how well sign language users can understand automatically generated sign language videos, proposing an evaluation protocol based on How2Sign topics and English translations.<\/jats:p>","DOI":"10.5821\/dissertation-2117-370231","type":"dissertation","created":{"date-parts":[[2023,10,11]],"date-time":"2023-10-11T01:53:23Z","timestamp":1696989203000},"approved":{"date-parts":[[2022,6,27]]},"source":"Crossref","is-referenced-by-count":0,"title":["Data and methods for a visual understanding of sign languages"],"prefix":"10.5821","author":[{"given":"Amanda","family":"Cardoso Duarte","sequence":"first","affiliation":[]}],"member":"3865","container-title":[],"original-title":[],"deposited":{"date-parts":[[2026,1,19]],"date-time":"2026-01-19T06:28:24Z","timestamp":1768804104000},"score":1,"resource":{"primary":{"URL":"https:\/\/hdl.handle.net\/2117\/370231"}},"subtitle":[],"editor":[{"given":"Xavier","family":"Gir\u00f3 Nieto","sequence":"first","affiliation":[]},{"given":"Jordi","family":"Torres Vi\u00f1als","sequence":"additional","affiliation":[]}],"short-title":[],"issued":{"date-parts":[[null]]},"references-count":0,"URL":"https:\/\/doi.org\/10.5821\/dissertation-2117-370231","relation":{},"subject":[]}}