{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,26]],"date-time":"2026-04-26T05:00:49Z","timestamp":1777179649446,"version":"3.51.4"},"reference-count":168,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2021,4,15]],"date-time":"2021-04-15T00:00:00Z","timestamp":1618444800000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2021,4,15]],"date-time":"2021-04-15T00:00:00Z","timestamp":1618444800000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Int. J. Autom. Comput."],"published-print":{"date-parts":[[2021,6]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Audio-visual learning, aimed at exploiting the relationship between audio and visual modalities, has drawn considerable attention since deep learning started to be used successfully. Researchers tend to leverage these two modalities to improve the performance of previously considered single-modality tasks or address new challenging problems. In this paper, we provide a comprehensive survey of recent audio-visual learning development. We divide the current audio-visual learning tasks into four different subfields: audio-visual separation and localization, audio-visual correspondence learning, audio-visual generation, and audio-visual representation learning. State-of-the-art methods, as well as the remaining challenges of each subfield, are further discussed. Finally, we summarize the commonly used datasets and challenges.<\/jats:p>","DOI":"10.1007\/s11633-021-1293-0","type":"journal-article","created":{"date-parts":[[2021,4,15]],"date-time":"2021-04-15T06:31:42Z","timestamp":1618468302000},"page":"351-376","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":137,"title":["Deep Audio-visual Learning: A Survey"],"prefix":"10.1007","volume":"18","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-2155-1488","authenticated-orcid":false,"given":"Hao","family":"Zhu","sequence":"first","affiliation":[]},{"given":"Man-Di","family":"Luo","sequence":"additional","affiliation":[]},{"given":"Rui","family":"Wang","sequence":"additional","affiliation":[]},{"given":"Ai-Hua","family":"Zheng","sequence":"additional","affiliation":[]},{"ORCID":"https:\/\/orcid.org\/0000-0002-3807-991X","authenticated-orcid":false,"given":"Ran","family":"He","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2021,4,15]]},"reference":[{"issue":"5234","key":"1293_CR1","doi-asserted-by":"publisher","first-page":"303","DOI":"10.1126\/science.270.5234.303","volume":"270","author":"R V Shannon","year":"1995","unstructured":"R. V. Shannon, F. G. Zeng, V. Kamath, J. Wygonski, M. Ekelid. Speech recognition with primarily temporal cues. Science, vol. 270, no. 5234, pp. 303\u2013304, 1995. DOI: https:\/\/doi.org\/10.1126\/science.270.5234.303.","journal-title":"Science"},{"key":"1293_CR2","doi-asserted-by":"publisher","first-page":"1090","DOI":"10.1109\/ICASSP.2019.8683453","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"G Krishna","year":"2019","unstructured":"G. Krishna, C. Tran, J. G. Yu, A. H. Tewfik. Speech recognition with no speech or with noisy speech. 
In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Brighton, UK, pp. 1090\u20131094, 2019. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2019.8683453."},{"issue":"8","key":"1293_CR3","doi-asserted-by":"publisher","first-page":"1561","DOI":"10.1109\/TPAMI.2010.220","volume":"33","author":"R He","year":"2011","unstructured":"R. He, W. S. Zheng, B. G. Hu. Maximum correntropy criterion for robust face recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1561\u20131576, 2011. DOI: https:\/\/doi.org\/10.1109\/TPAMI.2010.220.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"1293_CR4","unstructured":"C. Y. Fu, X. Wu, Y. B. Hu, H. B. Huang, R. He. Dual variational generation for low shot heterogeneous face recognition. In Proceedings of Advances in Neural Information Processing Systems, Vancouver, Canada, pp. 2670\u20132679, 2019."},{"issue":"5","key":"1293_CR5","doi-asserted-by":"publisher","first-page":"671","DOI":"10.1007\/s11633-018-1153-8","volume":"16","author":"S G Tong","year":"2019","unstructured":"S. G. Tong, Y. Y. Huang, Z. M. Tong. A robust face recognition method combining lbp with multi-mirror symmetry for images with various face interferences. International Journal of Automation and Computing, vol. 16, no. 5, pp. 671\u2013682, 2019. DOI: https:\/\/doi.org\/10.1007\/s11633-018-1153-8.","journal-title":"International Journal of Automation and Computing"},{"issue":"5","key":"1293_CR6","doi-asserted-by":"publisher","first-page":"563","DOI":"10.1007\/s11633-019-1177-8","volume":"16","author":"A X Li","year":"2019","unstructured":"A. X. Li, K. X. Zhang, L. W. Wang. Zero-shot fine-grained classification by deep feature learning with semantics. International Journal of Automation and Computing, vol. 16, no. 5, pp. 563\u2013574, 2019. DOI: https:\/\/doi.org\/10.1007\/s11633-019-1177-8.","journal-title":"International Journal of Automation and Computing"},{"key":"1293_CR7","doi-asserted-by":"publisher","first-page":"2826","DOI":"10.1109\/TIP.2021.3055617","volume":"30","author":"Y F Ding","year":"2021","unstructured":"Y. F. Ding, Z. Y. Ma, S. G. Wen, J. Y. Xie, D. L. Chang, Z. W. Si, M. Wu, H. B. Ling. AP-CNN: Weakly supervised attention pyramid convolutional neural network or fine-grained visual classification. IEEE Transactions on Image Processing, vol. 30, pp. 2826\u20132836, 2021. DOI: https:\/\/doi.org\/10.1109\/TIP.2021.3055617.","journal-title":"IEEE Transactions on Image Processing"},{"key":"1293_CR8","doi-asserted-by":"publisher","first-page":"4683","DOI":"10.1109\/TIP.2020.2973812","volume":"29","author":"D L Chang","year":"2020","unstructured":"D. L. Chang, Y. F. Ding, J. Y. Xie, A. K. Bhunia, X. X. Li, Z. Y. Ma, M. Wu, J. Guo, Y. Z. Song. The devil is in the channels: Mutual-channel loss or fine-grained image classification. IEEE Transactions on Image Processing, vol. 29, pp. 4683\u20134695, 2020. DOI: https:\/\/doi.org\/10.1109\/TIP.2020.2973812.","journal-title":"IEEE Transactions on Image Processing"},{"key":"1293_CR9","doi-asserted-by":"publisher","first-page":"3051","DOI":"10.1109\/ICASSP.2018.8462527","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"A Gabbay","year":"2018","unstructured":"A. Gabbay, A. Ephrat, T. Halperin, S. Peleg. Seeing through noise: Visually driven speaker separation and enhancement. 
In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Calgary, Canada, pp. 3051\u20133055, 2018. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2018.8462527."},{"key":"1293_CR10","doi-asserted-by":"publisher","unstructured":"T. Afouras, J. S. Chung, A. Zisserman. The conversation: Deep audio-visual speech enhancement. In Proceedings of the 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, pp. 3244\u20133248, 2018. DOI: https:\/\/doi.org\/10.21437\/Interspeech.20181400.","DOI":"10.21437\/Interspeech.20181400"},{"key":"1293_CR11","doi-asserted-by":"publisher","unstructured":"A. Ephrat, I. Mosseri, O. Lang, T. Dekel, K. Wilson, A. Hassidim, W. T. Freeman, M. Rubinstein. Looking to listen at the cocktail party: A speaker-independent audio-visual model for speech separation ACM Transactions on Graphics, vol. 37, no. 4, Article number 112, 2018. DOI: https:\/\/doi.org\/10.1145\/3197517.3201357.","DOI":"10.1145\/3197517.3201357"},{"key":"1293_CR12","doi-asserted-by":"publisher","unstructured":"P. Morgado, N. Vasconcelos, T. Langlois, O. Wang. Self-supervised generation of spatial audio for 360\u00b0 video. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montreal, Canada, pp. 360\u2013370, 2018. DOI: https:\/\/doi.org\/10.5555\/3326943.3326977.","DOI":"10.5555\/3326943.3326977"},{"key":"1293_CR13","doi-asserted-by":"publisher","unstructured":"I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin, A. Courville Improved training of Wasserstein GANs In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, USA, pp.5769\u20135779, 2017. DOI: https:\/\/doi.org\/10.5555\/3295222.3295327.","DOI":"10.5555\/3295222.3295327"},{"key":"1293_CR14","doi-asserted-by":"publisher","first-page":"4396","DOI":"10.1109\/CVPR.2019.00453","volume-title":"Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"T Karras","year":"2019","unstructured":"T. Karras, S. Laine, T. Aila. A style-based generator architecture for generative adversarial networks. In Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Long Beach, USA, pp. 4396\u20134405, 2019. DOI: https:\/\/doi.org\/10.1109\/CVPR.2019.00453."},{"issue":"8","key":"1293_CR15","doi-asserted-by":"publisher","first-page":"1798","DOI":"10.1109\/TPAMI.2013.50","volume":"35","author":"Y Y Bengio","year":"2013","unstructured":"Y. Y. Bengio, A. Courville, P. Vincent. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 8, pp. 1798\u20131828, 2013. DOI: https:\/\/doi.org\/10.1109\/TPAMI.2013.50.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"1293_CR16","doi-asserted-by":"publisher","first-page":"609","DOI":"10.1109\/ICCV.2017.73","volume-title":"Proceedings of IEEE International Conference on Computer Vision","author":"R Arandjelovic","year":"2017","unstructured":"R. Arandjelovic, A. Zisserman. Look, listen and learn. In Proceedings of IEEE International Conference on Computer Vision, IEEE, Venice, Italy, pp. 609\u2013617, 2017. DOI: https:\/\/doi.org\/10.1109\/ICCV.2017.73."},{"key":"1293_CR17","doi-asserted-by":"publisher","unstructured":"B. Korbar, D. Tran, L. Torresani. Cooperative learning of audio and video models from self-supervised synchronization. 
In Proceedings of the 32nd International Conference on Neural Information Processing Systems, Montreal, Canada pp. 7774\u20137785, 2018. DOI: https:\/\/doi.org\/10.5555\/3327757.3327874.","DOI":"10.5555\/3327757.3327874"},{"key":"1293_CR18","doi-asserted-by":"publisher","first-page":"545","DOI":"10.21437\/Interspeech.2016-1176","volume-title":"Proceedings of Interspeech 2016","author":"Y Z Isik","year":"2016","unstructured":"Y. Z. Isik, J. Le Roux, Z. Chen, S. Watanabe, J. R. Hershey. Single-channel multi-speaker separation using deep clustering. In Proceedings of Interspeech 2016, ISCA, San Francisco, USA, pp. 545\u2013549, 2016. DOI: https:\/\/doi.org\/10.21437\/Interspeech.2016-1176."},{"issue":"4","key":"1293_CR19","doi-asserted-by":"publisher","first-page":"787","DOI":"10.1109\/TASLP.2018.2795749","volume":"26","author":"Y Luo","year":"2018","unstructured":"Y. Luo, Z. Chen, N. Mesgarani. Speaker-independent speech separation with deep attractor network. IEEE\/ACM Transactions on Audio, Speech, and Language Processing, vol. 26, no. 4, pp. 787\u2013796, 2018. DOI: https:\/\/doi.org\/10.1109\/TASLP.2018.2795749.","journal-title":"IEEE\/ACM Transactions on Audio, Speech, and Language Processing"},{"key":"1293_CR20","doi-asserted-by":"publisher","first-page":"32","DOI":"10.1007\/3-540-40063-X_5","volume-title":"Proceedings of the 3rd International Conference on Multimodal Interfaces","author":"T Darrell","year":"2000","unstructured":"T. Darrell, J. W. Fisher III, P. Viola. Audio-visual segmentation and \u201cthe cocktail party effect\u201d. In Proceedings of the 3rd International Conference on Multimodal Interfaces, Springer, Beijing, China, pp. 32\u201340, 2000. DOI: https:\/\/doi.org\/10.1007\/3-540-40063-X_5."},{"key":"1293_CR21","doi-asserted-by":"publisher","unstructured":"J. W. Fisher III, T. Darrell, W. T. Freeman, P. Viola. Learning joint statistical models for audio-visual fusion and segregation. In Proceedings of the 13th International Conference on Neural Information Processing Systems, Denver, USA, pp. 742\u2013748, 2000. DOI: https:\/\/doi.org\/10.5555\/3008751.3008859.","DOI":"10.5555\/3008751.3008859"},{"key":"1293_CR22","doi-asserted-by":"publisher","first-page":"2906","DOI":"10.1109\/ICASSP.2017.7952688","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"B C Li","year":"2017","unstructured":"B. C. Li, K. Dinesh, Z. Y. Duan, G. Sharma. See and listen: Score-informed association of sound tracks to players in chamber music performance videos. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, New Orleans, USA, pp. 2906\u20132910, 2017. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2017.7952688."},{"key":"1293_CR23","doi-asserted-by":"publisher","first-page":"2901","DOI":"10.1109\/ICASSP.2017.7952687","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"J Pu","year":"2017","unstructured":"J. Pu, Y. Panagakis, S. Petridis, M. Pantic. Audio-visual object localization and separation using low-rank and sparsity. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, New Orleans, USA, pp. 2901\u20132905, 2017. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2017.7952687."},{"issue":"8","key":"1293_CR24","doi-asserted-by":"publisher","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","volume":"9","author":"S Hochreiter","year":"1997","unstructured":"S. Hochreiter, J. Schmidhuber. 
Long short-term memory. Neural Computation, vol. 9, no. 8, pp. 1735\u20131780, 1997. DOI: https:\/\/doi.org\/10.1162\/neco.1997.9.8.1735.","journal-title":"Neural Computation"},{"issue":"9","key":"1293_CR25","doi-asserted-by":"publisher","first-page":"1315","DOI":"10.1109\/LSP.2018.2853566","volume":"25","author":"R Lu","year":"2018","unstructured":"R. Lu, Z. Y. Duan, C. S. Zhang. Listen and look: Audio-visual matching assisted speech source separation. IEEE Signal Processing Letters, vol. 25, no. 9, pp. 1315\u20131319, 2018. DOI: https:\/\/doi.org\/10.1109\/LSP.2018.2853566.","journal-title":"IEEE Signal Processing Letters"},{"key":"1293_CR26","doi-asserted-by":"publisher","first-page":"6900","DOI":"10.1109\/ICASSP.2019.8682061","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"G Morrone","year":"2019","unstructured":"G. Morrone, S. Bergamaschi, L. Pasa, L. Fadiga, V. Tikhanoff, L. Badino. Face landmark-based speaker-independent audio-visual speech enhancement in multi-talker environments. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Brighton, UK, pp. 6900\u20136904, 2019. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2019.8682061."},{"key":"1293_CR27","doi-asserted-by":"publisher","unstructured":"J. Hershey, J. Movellan. Audio-vision: Using audio-visual synchrony to locate sounds. In Proceedings of the 12th International Conference on Neural Information Processing Systems, Denver, USA, pp. 813\u2013819, 1999. DOI: https:\/\/doi.org\/10.5555\/3009657.3009772.","DOI":"10.5555\/3009657.3009772"},{"key":"1293_CR28","doi-asserted-by":"publisher","DOI":"10.1002\/0471221104","volume-title":"Optimum Array Processing: Part IV of Detection, Estimation and Modulation Theory","author":"H L van Trees","year":"2002","unstructured":"H. L. van Trees. Optimum Array Processing: Part IV of Detection, Estimation and Modulation Theory, New York, USA: Wiley-Interscience, 2002."},{"key":"1293_CR29","doi-asserted-by":"publisher","first-page":"693","DOI":"10.1109\/ICCVW.2015.95","volume-title":"Proceedings of IEEE International Conference on Computer Vision Workshop","author":"A Zunino","year":"2015","unstructured":"A. Zunino, M. Crocco, S. Martelli, A. Trucco, A. Del Bue, V. Murino. Seeing the sound: A new multimodal imaging device for computer vision. In Proceedings of IEEE International Conference on Computer Vision Workshop, IEEE, Santiago, Chile, pp.693\u2013701, 2015. DOI: https:\/\/doi.org\/10.1109\/ICCVW.2015.95."},{"key":"1293_CR30","doi-asserted-by":"publisher","first-page":"36","DOI":"10.1007\/978-3-030-01219-9_3","volume-title":"Proceedings of the 15th European Conference on Computer Vision","author":"R H Gao","year":"2018","unstructured":"R. H. Gao, R. Feris, K. Grauman. Learning to separate object sounds by watching unlabeled video. In Proceedings of the 15th European Conference on Computer Vision, Springer, Munich, Germany, pp. 36\u201354, 2018. DOI: https:\/\/doi.org\/10.1007\/978-3-030-01219-9_3."},{"issue":"3","key":"1293_CR31","doi-asserted-by":"publisher","first-page":"530","DOI":"10.1109\/JSTSP.2020.2980956","volume":"14","author":"R Z Gu","year":"2020","unstructured":"R. Z. Gu, S. X. Zhang, Y. Xu, L. W. Chen, Y. X. Zou, D. Yu. Multi-modal multi-channel target speech separation. IEEE Journal of Selected Topics in Signal Processing, vol. 14, no. 3, pp. 530\u2013541, 2020. 
DOI: https:\/\/doi.org\/10.1109\/JSTSP.2020.2980956.","journal-title":"IEEE Journal of Selected Topics in Signal Processing"},{"key":"1293_CR32","unstructured":"L. Y. Zhu, E. Rahtu. Separating sounds from a single image. [Online], Available: https:\/\/arxiv.org\/abs\/2007.07984, 2020."},{"issue":"2","key":"1293_CR33","doi-asserted-by":"publisher","first-page":"378","DOI":"10.1109\/TMM.2012.2228476","volume":"15","author":"H Izadinia","year":"2013","unstructured":"H. Izadinia, I. Saleemi, M. Shah. Multimodal analysis for identification and segmentation of moving-sounding objects. IEEE Transactions on Multimedia, vol. 15, no. 2, pp. 378\u2013390, 2013. DOI: https:\/\/doi.org\/10.1109\/TMM.2012.2228476.","journal-title":"IEEE Transactions on Multimedia"},{"key":"1293_CR34","doi-asserted-by":"publisher","first-page":"4358","DOI":"10.1109\/CVPR.2018.00458","volume-title":"Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"A Senocak","year":"2018","unstructured":"A. Senocak, T. H. Oh, J. Kim, M. H. Yang, I. S. Kweon. Learning to localize sound source in visual scenes. In Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Salt Lake City, USA, pp. 4358\u20134366, 2018. DOI: https:\/\/doi.org\/10.1109\/CVPR.2018.00458."},{"key":"1293_CR35","doi-asserted-by":"publisher","first-page":"84","DOI":"10.1007\/978-3-319-24261-3_7","volume-title":"Proceedings of the 3rd International Workshop on Similarity-Based Pattern Recognition","author":"E Hoffer","year":"2015","unstructured":"E. Hoffer, N. Ailon. Deep metric learning using triplet network. In Proceedings of the 3rd International Workshop on Similarity-Based Pattern Recognition, Springer, Copenhagen, Denmark, pp. 84\u201392, 2015. DOI: https:\/\/doi.org\/10.1007\/978-3-319-24261-3_7."},{"key":"1293_CR36","doi-asserted-by":"publisher","first-page":"6291","DOI":"10.1109\/ICCV.2019.00639","volume-title":"Proceedings of IEEE\/CVF International Conference on Computer Vision","author":"Y Wu","year":"2019","unstructured":"Y. Wu, L. C. Zhu, Y. Yan, Y. Yang. Dual attention matching for audio-visual event localization. In Proceedings of IEEE\/CVF International Conference on Computer Vision, IEEE, Seoul, Korea, pp. 6291\u20136299, 2019. DOI: https:\/\/doi.org\/10.1109\/ICCV.2019.00639."},{"key":"1293_CR37","doi-asserted-by":"publisher","first-page":"252","DOI":"10.1007\/978-3-030-01216-8_16","volume-title":"Proceedings of the 15th European Conference on Computer Vision","author":"Y P Tian","year":"2018","unstructured":"Y. P. Tian, J. Shi, B. C. Li, Z. Y. Duan, C. L. Xu. Audio-visual event localization in unconstrained videos. In Proceedings of the 15th European Conference on Computer Vision, Springer, Munich, Germany, pp. 252\u2013268, 2018. DOI: https:\/\/doi.org\/10.1007\/978-3-030-01216-8_16."},{"key":"1293_CR38","unstructured":"R. Sharma, K. Somandepalli, S. Narayanan. Crossmodal learning for audio-visual speech event localization. [Online], Available: https:\/\/arxiv.org\/abs\/2003.04358, 2020."},{"key":"1293_CR39","doi-asserted-by":"publisher","first-page":"587","DOI":"10.1007\/978-3-030-01246-5_35","volume-title":"Proceedings of 15th European Conference on Computer Vision","author":"H Zhao","year":"2018","unstructured":"H. Zhao, C. Gan, A. Rouditchenko, C. Vondrick, J. McDermott, A. Torralba. The sound of pixels. In Proceedings of 15th European Conference on Computer Vision, Springer, Munich, Germany, pp. 587\u2013604, 2018. 
DOI: https:\/\/doi.org\/10.1007\/978-3-030-01246-5_35."},{"key":"1293_CR40","doi-asserted-by":"publisher","first-page":"1735","DOI":"10.1109\/ICCV.2019.00182","volume-title":"Proceedings of IEEE\/CVF International Conference on Computer Vision","author":"H Zhao","year":"2019","unstructured":"H. Zhao, C. Gan, W. C. Ma, A. Torralba. The sound of motions. In Proceedings of IEEE\/CVF International Conference on Computer Vision, IEEE, Seoul, Korea, pp. 1735\u20131744, 2019. DOI: https:\/\/doi.org\/10.1109\/ICCV.2019.00182."},{"key":"1293_CR41","doi-asserted-by":"publisher","first-page":"2357","DOI":"10.1109\/ICAS-SP.2019.8682467","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"A Rouditchenko","year":"2019","unstructured":"A. Rouditchenko, H. Zhao, C. Gan, J. McDermott, A. Torralba. Self-supervised audio-visual co-segmentation. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Brighton, UK, pp. 2357\u20132361, 2019. DOI: https:\/\/doi.org\/10.1109\/ICAS-SP.2019.8682467."},{"key":"1293_CR42","doi-asserted-by":"publisher","DOI":"10.1109\/WASPAA.2019.8937237","volume-title":"Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics","author":"S Parekh","year":"2019","unstructured":"S. Parekh, A. Ozerov, S. Essid, N. Q. K. Duong, P. P\u00e9rez, G. Richard. Identify, locate and separate: Audio-visual object extraction in large video collections using weak supervision. In Proceedings of IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, IEEE, New Paltz, USA, pp.268\u2013272, 2019. DOI: https:\/\/doi.org\/10.1109\/WASPAA.2019.8937237."},{"key":"1293_CR43","unstructured":"X. C. Sun, H. Jia, Z. Zhang, Y. Z. Yang, Z. Y. Sun, J. Yang. Sound localization and separation in three-dimensional space using a single microphone with a metamaterial enclosure, [Online], Available: https:\/\/arxiv.org\/abs\/1908.08160, 2019."},{"key":"1293_CR44","doi-asserted-by":"publisher","unstructured":"K. Sriskandaraja, V. Sethu, E. Ambikairajah. Deep siamese architecture based replay detection for secure voice biometric. In Proceedings of the 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, pp. 671\u2013675, 2018. DOI: https:\/\/doi.org\/10.21437\/Interspeech.2018-1819.","DOI":"10.21437\/Interspeech.2018-1819"},{"key":"1293_CR45","doi-asserted-by":"publisher","unstructured":"R. Bia\u0142obrzeski, M. Ko\u015bmider, M. Matuszewski, M. Plata, A. Rakowski. Robust Bayesian and light neural networks for voice spoofing detection. In Proceedings of the 20th Annual Conference of the International Speech Communication Association, Graz, Austria, pp. 1028\u20131032, 2019. DOI: https:\/\/doi.org\/10.21437\/Interspeech.2019-2676.","DOI":"10.21437\/Interspeech.2019-2676"},{"key":"1293_CR46","doi-asserted-by":"publisher","unstructured":"A. Gomez-Alanis, A. M. Peinado, J. A. Gonzalez, A. M. Gomez. A light convolutional GRU-RNN deep feature extractor for ASV spoofing detection. In Proceedings of the 20th Annual Conference of the International Speech Communication Association, Graz, Austria, pp. 1068\u20131072, 2019. DOI: https:\/\/doi.org\/10.21437\/Interspeech.2019-2212.","DOI":"10.21437\/Interspeech.2019-2212"},{"issue":"11","key":"1293_CR47","doi-asserted-by":"publisher","first-page":"2884","DOI":"10.1109\/TIFS.2018.2833032","volume":"13","author":"X Wu","year":"2018","unstructured":"X. Wu, R. He, Z. N. Sun, T. N. Tan. 
A light CNN for deep face representation with noisy labels. IEEE Transactions on Information Forensics and Security, vol. 13, no. 11, pp. 2884\u20132896, 2018. DOI: https:\/\/doi.org\/10.1109\/TIFS.2018.2833032.","journal-title":"IEEE Transactions on Information Forensics and Security"},{"key":"1293_CR48","unstructured":"J. Chung, C. Gulcehre, K. Cho, Y. Bengio. Empirical evaluation of gated recurrent neural networks on sequence modeling. Online], Available: https:\/\/arxiv.org\/abs\/1412.3555, 2014."},{"key":"1293_CR49","doi-asserted-by":"publisher","first-page":"8427","DOI":"10.1109\/CVPR.2018.00879c","volume-title":"Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"A Nagrani","year":"2018","unstructured":"A. Nagrani, S. Albanie, A. Zisserman. Seeing voices and hearing faces: Cross-modal biometric matching. In Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Salt Lake City, USA, pp. 8427\u20138436, 2018. DOI: https:\/\/doi.org\/10.1109\/CVPR.2018.00879c."},{"key":"1293_CR50","doi-asserted-by":"crossref","unstructured":"A. Torfi, S. M. Iranmanesh, N. M. Nasrabadi, J. Dawson. 3D convolutional neural networks for audio-visual recognition. [Online], Available: https:\/arxiv.org\/abs\/1706.05739, 2017.","DOI":"10.1109\/ACCESS.2017.2761539"},{"key":"1293_CR51","doi-asserted-by":"publisher","unstructured":"K. Simonyan, A. Zisserman. Two-stream convolutional networks for action recognition in videos. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, Canada, pp. 568\u2013576, 2014. DOI: https:\/\/doi.org\/10.5555\/2968826.2968890.","DOI":"10.5555\/2968826.2968890"},{"key":"1293_CR52","unstructured":"Y. D. Wen, M. Al Ismail, W. Y. Liu, B. Raj, R. Singh. Disjoint mapping network for cross-modal matching of voices and faces. In Proceedings of the 7th International Conference on Learning Representations, New Orleans, USA, 2019."},{"key":"1293_CR53","unstructured":"S. Ioffe, C. Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the 32nd International Conference on International Conference on Machine Learning, Lille, France, pp. 448\u2013456, 2015."},{"issue":"38","key":"1293_CR54","doi-asserted-by":"publisher","first-page":"10166","DOI":"10.1073\/pnas.1711125114","volume":"114","author":"C Lippert","year":"2017","unstructured":"C. Lippert, R. Sabatini, M. C. Maher, E. Y. Kang, S. Lee, O. Arikan, A. Harley, A. Bernal, P. Garst, V. Lavrenko, K. Yocum, T. Wong, M. F. Zhu, W. Y. Yang, C. Chang, T. Lu, C. W. H. Lee, B. Hicks, S. Ramakrishnan, H. B. Tang, C. Xie, J. Piper, S. Brewerton, Y. Turpaz, A. Telenti, R. K. Roby, F. J. Och, J. C. Venter. Identification of individuals by trait prediction using whole-genome sequencing data. In Proceedings of the National Academy of Sciences of the United States of America, vol. 114, no. 38, pp. 10166\u201310171, 2017. DOI: https:\/\/doi.org\/10.1073\/pnas.1711125114.","journal-title":"Proceedings of the National Academy of Sciences of the United States of America"},{"key":"1293_CR55","unstructured":"K. Hoover, S. Chaudhuri, C. Pantofaru, M. Slaney, I. Sturdy. Putting a face to the voice: Fusing audio and visual signals across a video to determine speakers. 
[Online], Available: https:\/\/arxiv.org\/abs\/1706.00079, 2017."},{"key":"1293_CR56","doi-asserted-by":"publisher","first-page":"3965","DOI":"10.1109\/ICASSP.2019.8682524","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"S W Chung","year":"2019","unstructured":"S. W. Chung, J. S. Chung, H. G. Kang. Perfect match: Improved cross-modal embeddings for audio-visual synchronisation. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Brighton, UK, pp. 3965\u20133969, 2019. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2019.8682524."},{"key":"1293_CR57","doi-asserted-by":"publisher","first-page":"300","DOI":"10.1109\/ICMEW.2019.00-70","volume-title":"Proceedings of IEEE International Conference on Multimedia & Expo Workshops","author":"R Wang","year":"2019","unstructured":"R. Wang, H. B. Huang, X. F. Zhang, J. X. Ma, A. H. Zheng. A novel distance learning for elastic cross-modal audio-visual matching. In Proceedings of IEEE International Conference on Multimedia & Expo Workshops, IEEE, Shanghai, China, pp. 300\u2013305, 2019. DOI: https:\/\/doi.org\/10.1109\/ICMEW.2019.00-70."},{"key":"1293_CR58","doi-asserted-by":"publisher","unstructured":"A. H. Zheng, M. L. Hu, B. Jiang, Y. Huang, Y. Yan, B. Luo. Adversarial-metric learning for audio-visual cross-modal matching. IEEE Transactions on Multimedia, 2021. DOI: https:\/\/doi.org\/10.1109\/TMM.2021.3050089.","DOI":"10.1109\/TMM.2021.3050089"},{"key":"1293_CR59","doi-asserted-by":"publisher","first-page":"326","DOI":"10.1109\/ICIP.1995.529712","volume-title":"Proceedings of International Conference on Image Processing","author":"R K Srihari","year":"1995","unstructured":"R. K. Srihari. Combining text and image information in content-based retrieval. In Proceedings of International Conference on Image Processing, IEEE, Washington, USA, pp. 326\u2013329, 1995. DOI: https:\/\/doi.org\/10.1109\/ICIP.1995.529712."},{"key":"1293_CR60","doi-asserted-by":"publisher","first-page":"362","DOI":"10.1117\/12.234775","volume":"2670","author":"L R Long","year":"1996","unstructured":"L. R. Long, L. E. Berman, G. R. Thoma. Prototype client\/server application for biomedical text\/image retrieval on the Internet. In Proceedings of Storage and Retrieval for Still Image and Video Databases IV, SPIE, San Jose, USA, vol. 2670, pp. 362\u2013372, 1996. DOI: https:\/\/doi.org\/10.1117\/12.234775.","journal-title":"Proceedings of Storage and Retrieval for Still Image and Video Databases IV"},{"key":"1293_CR61","doi-asserted-by":"publisher","first-page":"251","DOI":"10.1145\/1873951.1873987","volume-title":"Proceedings of the 18th ACM International Conference on Multimedia","author":"N Rasiwasia","year":"2010","unstructured":"N. Rasiwasia, J. C. Pereira, E. Coviello, G. Doyle, G. R. G. Lanckriet, R. Levy, N. Vasconcelos. A new approach to cross-modal multimedia retrieval. In Proceedings of the 18th ACM International Conference on Multimedia, ACM, Firenze, Italy, pp. 251\u2013260, 2010. DOI: https:\/\/doi.org\/10.1145\/1873951.1873987."},{"key":"1293_CR62","unstructured":"Y. Aytar, C. Vondrick, A. Torralba. See, hear, and read: Deep aligned representations. [Online], Available: https:\/\/arxiv.org\/abs\/1706.00932, 2017."},{"key":"1293_CR63","doi-asserted-by":"publisher","first-page":"711","DOI":"10.1007\/978-3-030-11018-5_62","volume-title":"Proceedings of European Conference on Computer Vision Workshop","author":"D Sur\u00eds","year":"2019","unstructured":"D. 
Sur\u00eds, A. Duarte, A. Salvador, J. Torres, X. Gir\u00f3-i-Nieto. Cross-modal embeddings for video and audio retrieval. In Proceedings of European Conference on Computer Vision Workshop, Springer, Munich, Germany, pp. 711\u2013716, 2019. DOI: https:\/\/doi.org\/10.1007\/978-3-030-11018-5_62."},{"key":"1293_CR64","unstructured":"S. Hong, W. Im, H. S. Yang. Content-based video-music retrieval using soft intra-modal structure constraint. [Online], Available: https:\/\/arxiv.org\/abs\/1704.06761, 2017."},{"key":"1293_CR65","doi-asserted-by":"publisher","first-page":"73","DOI":"10.1007\/978-3-030-01261-8_5","volume-title":"Proceedings of the 15th European Conference on Computer Vision","author":"A Nagrani","year":"2018","unstructured":"A. Nagrani, S. Albanie, A. Zisserman. Learnable PINs: Cross-modal embeddings for person identity. In Proceedings of the 15th European Conference on Computer Vision, Springer, Munich, Germany, pp. 73\u201389, 2018. DOI: https:\/\/doi.org\/10.1007\/978-3-030-01261-8_5."},{"key":"1293_CR66","doi-asserted-by":"publisher","unstructured":"D. H. Zeng, Y. Yu, K. Oyama. Deep triplet neural networks with cluster-CCA for audio-visual cross-modal retrieval. ACM Transactions on Multimedia Computing, Communications, and Applications, vol. 16, no. 3, Article number 76, 2020. DOI: https:\/\/doi.org\/10.1145\/3387164.","DOI":"10.1145\/3387164"},{"key":"1293_CR67","doi-asserted-by":"publisher","first-page":"119","DOI":"10.1007\/978-3-030-58542-6_8","volume-title":"Proceedings of the 16th European Conference on Computer Vision","author":"V Sanguineti","year":"2020","unstructured":"V. Sanguineti, P. Morerio, N. Pozzetti, D. Greco, M. Cristani, V. Murino. Leveraging acoustic images for effective self-supervised audio representation learning. In Proceedings of the 16th European Conference on Computer Vision, Springer, Glasgow, UK, pp. 119\u2013135, 2020. DOI: https:\/\/doi.org\/10.1007\/978-3-030-58542-6_8."},{"issue":"10","key":"1293_CR68","doi-asserted-by":"publisher","first-page":"7049","DOI":"10.1109\/TGRS.2020.2979273","volume":"58","author":"Y X Chen","year":"2020","unstructured":"Y. X. Chen, X. Q. Lu, S. Wang. Deep cross-modal image-voice retrieval in remote sensing. IEEE Transactions on Geoscience and Remote Sensing, vol. 58, no. 10, pp. 7049\u20137061, 2020. DOI: https:\/\/doi.org\/10.1109\/TGRS.2020.2979273.","journal-title":"IEEE Transactions on Geoscience and Remote Sensing"},{"key":"1293_CR69","doi-asserted-by":"publisher","first-page":"133","DOI":"10.1007\/978-3-030-49666-1_11","volume-title":"Information Technology in Biomedicine","author":"N Takashima","year":"2021","unstructured":"N. Takashima, F. Li, M. Grzegorzek, K. Shirahama. Cross-modal music-emotion retrieval using DeepCCA. Information Technology in Biomedicine, E. Pietka, P. Badura, J. Kawa, W. Wieclawek, Eds., Cham, Germany: Springer, pp. 133\u2013145, 2021. DOI: https:\/\/doi.org\/10.1007\/978-3-030-49666-1_11."},{"key":"1293_CR70","doi-asserted-by":"publisher","unstructured":"I. Kansizoglou, L. Bampis, A. Gasteratos. An active learning paradigm for online audio-visual emotion recognition. IEEE Transactions on Affective Computing, 2019. DOI: https:\/\/doi.org\/10.1109\/TAFFC.2019.2961089.","DOI":"10.1109\/TAFFC.2019.2961089"},{"issue":"3","key":"1293_CR71","doi-asserted-by":"publisher","first-page":"141","DOI":"10.1109\/6046.865479","volume":"2","author":"S Dupont","year":"2000","unstructured":"S. Dupont, J. Luettin. Audio-visual speech modeling for continuous speech recognition. 
IEEE Transactions on Multimedia, vol. 2, no. 3, pp. 141\u2013151, 2000. DOI: https:\/\/doi.org\/10.1109\/6046.865479.","journal-title":"IEEE Transactions on Multimedia"},{"issue":"1","key":"1293_CR72","doi-asserted-by":"publisher","first-page":"45","DOI":"10.1109\/TAFFC.2015.2446462","volume":"7","author":"S Petridis","year":"2016","unstructured":"S. Petridis, M. Pantic. Prediction-based audiovisual fusion for classification of non-linguistic vocalisations. IEEE Transactions on Affective Computing, vol. 7, no. 1, pp. 45\u201358, 2016. DOI: https:\/\/doi.org\/10.1109\/TAFFC.2015.2446462.","journal-title":"IEEE Transactions on Affective Computing"},{"issue":"9","key":"1293_CR73","doi-asserted-by":"publisher","first-page":"1306","DOI":"10.1109\/JPROC.2003.817150","volume":"91","author":"G Potamianos","year":"2003","unstructured":"G. Potamianos, C. Neti, G. Gravier, A. Garg, A. W. Senior. Recent advances in the automatic recognition of audiovisual speech. In Proceedings of the IEEE, vol. 91, no. 9, pp. 1306\u20131326, 2003. DOI: https:\/\/doi.org\/10.1109\/JPROC.2003.817150.","journal-title":"Proceedings of the IEEE"},{"key":"1293_CR74","doi-asserted-by":"publisher","first-page":"3574","DOI":"10.1109\/CVPR.2016.389","volume-title":"Proceedings of IEEE Conference on Computer Vision and Pattern Recognition","author":"D Hu","year":"2016","unstructured":"D. Hu, X. L. Li, X. Q. Lu. Temporal multimodal learning in audiovisual speech recognition. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Las Vegas, USA, pp. 3574\u20133582, 2016. DOI: https:\/\/doi.org\/10.1109\/CVPR.2016.389."},{"key":"1293_CR75","doi-asserted-by":"publisher","unstructured":"J. Ngiam, A. Khosla, M. Kim, J. Nam, H. Lee, A. Y. Ng. Multimodal deep learning. In Proceedings of the 28th International Conference on International Conference on Machine Learning, Bellevue, USA, pp. 689\u2013696, 2011. DOI: https:\/\/doi.org\/10.5555\/3104482.3104569.","DOI":"10.5555\/3104482.3104569"},{"key":"1293_CR76","doi-asserted-by":"crossref","unstructured":"H. Ninomiya, N. Kitaoka, S. Tamura, Y. Iribe, K. Takeda. Integration of deep bottleneck features for audio-visual speech recognition. In Proceedings of the 16th Annual Conference of the International Speech Communication Association, Dresden, Germany, pp. 563\u2013567, 2015.","DOI":"10.21437\/Interspeech.2015-204"},{"key":"1293_CR77","doi-asserted-by":"publisher","first-page":"2592","DOI":"10.1109\/ICASSP.2017.7952625","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"S Petridis","year":"2017","unstructured":"S. Petridis, Z. W. Li, M. Pantic. End-to-end visual speech recognition with LSTMS. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, New Orleans, USA, pp. 2592\u20132596, 2017. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2017.7952625."},{"key":"1293_CR78","doi-asserted-by":"publisher","first-page":"6115","DOI":"10.1109\/ICASSP.2016.7472852","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"M Wand","year":"2016","unstructured":"M. Wand, J. Koutn\u00edk, J. Schmidhuber. Lipreading with long short-term memory. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Shanghai, China, pp. 6115\u20136119, 2016. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2016.7472852."},{"key":"1293_CR79","unstructured":"Y. M. Assael, B. Shillingford, S. Whiteson, N. 
de Freitas. LipNet: Sentence-level lipreading. [Online], Available: https:\/\/arxiv.org\/abs\/1611.01599v1, 2016."},{"key":"1293_CR80","doi-asserted-by":"publisher","unstructured":"T. Stafylakis, G. Tzimiropoulos. Combining residual networks with LSTMs for lipreading. In Proceedings of the 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, pp. 3652\u20133656, 2017. DOI: https:\/\/doi.org\/10.21437\/Interspeech.2017-85.","DOI":"10.21437\/Interspeech.2017-85"},{"key":"1293_CR81","doi-asserted-by":"publisher","first-page":"905","DOI":"10.1109\/ASRU46091.2019.9004036","volume-title":"Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop","author":"T Makino","year":"2019","unstructured":"T. Makino, H. Liao, Y. Assael, B. Shillingford, B. Garcia, O. Braga, O. Siohan. Recurrent neural network transducer for audio-visual speech recognition. In Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop, IEEE, Singapore, pp. 905\u2013912, 2019. DOI: https:\/\/doi.org\/10.1109\/ASRU46091.2019.9004036."},{"issue":"5","key":"1293_CR82","doi-asserted-by":"publisher","first-page":"2421","DOI":"10.1121\/1.2229005","volume":"120","author":"M Cooke","year":"2006","unstructured":"M. Cooke, J. Barker, S. Cunningham, X. Shao. An audio-visual corpus for speech perception and automatic speech recognition. The Journal of the Acoustical Society of America, vol. 120, no. 5, pp. 2421\u20132424, 2006. DOI: https:\/\/doi.org\/10.1121\/1.2229005.","journal-title":"The Journal of the Acoustical Society of America"},{"key":"1293_CR83","doi-asserted-by":"publisher","first-page":"5200","DOI":"10.1109\/ICASSP.2016.7472669","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"G Trigeorgis","year":"2016","unstructured":"G. Trigeorgis, F. Ringeval, R. Brueckner, E. Marchi, M. A. Nicolaou, B. Schuller, S. Zafeiriou. Adieu features? End-to-end speech emotion recognition using a deep convolutional recurrent network. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Shanghai, China, pp. 5200\u20135204, 2016. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2016.7472669."},{"key":"1293_CR84","doi-asserted-by":"publisher","first-page":"3444","DOI":"10.1109\/CVPR.2017.367","volume-title":"Proceedings of IEEE Conference on Computer Vision and Pattern Recognition","author":"J S Chung","year":"2017","unstructured":"J. S. Chung, A. Senior, O. Vinyals, A. Zisserman. Lip reading sentences in the wild. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Honolulu, USA, pp. 3444\u20133453, 2017. DOI: https:\/\/doi.org\/10.1109\/CVPR.2017.367."},{"key":"1293_CR85","doi-asserted-by":"publisher","unstructured":"M. Nussbaum-Thom, J. Cui, B. Ramabhadran, V. Goel. Acoustic modeling using bidirectional gated recurrent convolutional units. In Proceedings of the 17th Annual Conference of the International Speech Communication Association, San Francisco, USA, pp. 390\u2013394, 2016. DOI: https:\/\/doi.org\/10.21437\/Interspeech.2016-212.","DOI":"10.21437\/Interspeech.2016-212"},{"key":"1293_CR86","unstructured":"T. Afouras, J. S. Chung, A. Senior, O. Vinyals, A. Zisserman. Deep audio-visual speech recognition. 
[Online], Available: https:\/\/arxiv.org\/abs\/1809.02108, 2018."},{"key":"1293_CR87","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/IJCNN.2019.8851942","volume-title":"Proceedings of International Joint Conference on Neural Networks","author":"Y Y Zhang","year":"2019","unstructured":"Y. Y. Zhang, Z. R. Wang, J. Du. Deep fusion: An attention guided factorized bilinear pooling for audio-video emotion recognition. In Proceedings of International Joint Conference on Neural Networks, IEEE, Budapest, Hungary, pp. 1\u20139, 2019. DOI: https:\/\/doi.org\/10.1109\/IJCNN.2019.8851942"},{"key":"1293_CR88","doi-asserted-by":"publisher","first-page":"6565","DOI":"10.1109\/ICASSP.2019.8683733","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"P Zhou","year":"2019","unstructured":"P. Zhou, W. W. Yang, W. Chen, Y. F. Wang, J. Jia Modality attention for end-to-end audio-visual speech recognition. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Brighton, UK, pp. 6565\u20136569, 2019. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2019.8683733."},{"key":"1293_CR89","doi-asserted-by":"publisher","unstructured":"R. J. Tao, R. K. Das, H. Z. Li. Audio-visual speaker recognition with a cross-modal discriminative network. In Proceedings of the 21st Annual Conference of the International Speech Communication Association, Shanghai, China, pp. 2242\u20132246, 2020. DOI: https:\/\/doi.org\/10.21437\/Interspeech.2020-1814.","DOI":"10.21437\/Interspeech.2020-1814"},{"key":"1293_CR90","doi-asserted-by":"publisher","unstructured":"I. J. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, D. Warde-Farley, S. Ozair, A. Courville, Y. Bengio. Generative adversarial nets. In Proceedings of the 27th International Conference on Neural Information Processing Systems, Montreal, Canada, pp. 2672\u20132680, 2014. DOI: https:\/\/doi.org\/10.5555\/2969033.29691250.","DOI":"10.5555\/2969033.29691250"},{"key":"1293_CR91","unstructured":"M. Arjovsky, S. Chintala, L. Bottou. Wasserstein GAN. [Online], Available: https:\/\/arxiv.org\/abs\/1701.07875, 2017."},{"key":"1293_CR92","doi-asserted-by":"publisher","first-page":"349","DOI":"10.1145\/3126686.3126723","volume-title":"Proceedings of the on Thematic Workshops of ACM Multimedia","author":"L L Chen","year":"2017","unstructured":"L. L. Chen, S. Srivastava, Z. Y. Duan, C. L. Xu. Deep cross-modal audio-visual generation. In Proceedings of the on Thematic Workshops of ACM Multimedia, ACM, Mountain View, USA, pp. 349\u2013357, 2017. DOI: https:\/\/doi.org\/10.1145\/3126686.3126723."},{"key":"1293_CR93","doi-asserted-by":"publisher","unstructured":"H. Zhu, H. B. Huang, Y. Li, A. H. Zheng, R. He. Arbitrary talking face generation via attentional audio-visual coherence learning. In Proceedings of the 29th International Joint Conference on Artificial Intelligence, Yokohama, Japan, pp. 2362\u20132368, 2020. DOI: https:\/\/doi.org\/10.24963\/ijcai.2020\/327.","DOI":"10.24963\/ijcai.2020\/327"},{"key":"1293_CR94","doi-asserted-by":"publisher","first-page":"79","DOI":"10.1109\/CVPR.2018.00016","volume-title":"Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"L H Wei","year":"2018","unstructured":"L. H. Wei, S. L. Zhang, W. Gao, Q. Tian. Person transfer GAN to bridge domain gap for person re-identification. In Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Salt Lake City, USA, pp. 79\u201388, 2018. 
DOI: https:\/\/doi.org\/10.1109\/CVPR.2018.00016."},{"key":"1293_CR95","doi-asserted-by":"publisher","first-page":"731","DOI":"10.1007\/978-3030-01240-3_44","volume-title":"Proceedings of the 15th European Conference on Computer Vision","author":"S W Huang","year":"2018","unstructured":"S. W. Huang, C. T. Lin, S. P. Chen, Y. Y. Wu, P. H. Hsu, S. H. Lai. AugGAN: Cross domain adaptation with GAN-based data augmentation, In Proceedings of the 15th European Conference on Computer Vision, Springer, Munich, Germany, pp. 731\u2013744, 2018. DOI: https:\/\/doi.org\/10.1007\/978-3030-01240-3_44."},{"key":"1293_CR96","doi-asserted-by":"crossref","unstructured":"T. Le Cornu, B. Milner. Reconstructing intelligible audio speech from visual speech features. In Proceedings of the 16th Annual Conference of the International Speech Communication Association, Dresden, Germany, pp. 3355\u20133359, 2015.","DOI":"10.21437\/Interspeech.2015-139"},{"key":"1293_CR97","doi-asserted-by":"publisher","first-page":"5095","DOI":"10.1109\/ICASSP.2017.7953127","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"A Ephrat","year":"2017","unstructured":"A. Ephrat, S. Peleg. Vid2speech: Speech reconstruction from silent video. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, New Orleans, USA, pp. 5095\u20135099, 2017. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2017.7953127."},{"key":"1293_CR98","doi-asserted-by":"publisher","DOI":"10.1109\/ICCVW.2017.61","volume-title":"Proceedings of IEEE International Conference on Computer Vision","author":"A Ephrat","year":"2017","unstructured":"A. Ephrat, T. Halperin, S. Peleg. Improved speech reconstruction from silent video. In Proceedings of IEEE International Conference on Computer Vision, IEEE, Venice, Italy, pp.455\u2013462, 2017. DOI: https:\/\/doi.org\/10.1109\/ICCVW.2017.61."},{"issue":"9","key":"1293_CR99","doi-asserted-by":"publisher","first-page":"1751","DOI":"10.1109\/TASLP.2017.2716178","volume":"25","author":"T Le Cornu","year":"2017","unstructured":"T. Le Cornu, B. Milner. Generating intelligible audio speech from visual speech. IEEE\/ACM Transactions on Audio, Speech, and Language Processing, vol. 25, no. 9, pp. 1751\u20131761, 2017. DOI: https:\/\/doi.org\/10.1109\/TASLP.2017.2716178.","journal-title":"IEEE\/ACM Transactions on Audio, Speech, and Language Processing"},{"key":"1293_CR100","doi-asserted-by":"publisher","unstructured":"A. Davis, M. Rubinstein, N. Wadhwa, G. J. Mysore, F. Durand, W. T. Freeman. The visual microphone: Passive recovery of sound from video. ACM Transactions on Graphics, vol. 33, no. 4, Article number 79, 2014. DOI: https:\/\/doi.org\/10.1145\/2601097.2601119.","DOI":"10.1145\/2601097.2601119"},{"key":"1293_CR101","doi-asserted-by":"publisher","first-page":"2405","DOI":"10.1109\/CVPR.2016.264","volume-title":"Proceedings of IEEE Conference on Computer Vision and Pattern Recognition","author":"A Owens","year":"2016","unstructured":"A. Owens, P. Isola, J. McDermott, A. Torralba, E. H. Adelson, W. T. Freeman. Visually indicated sounds. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, IEEE, Las Vegas, USA, pp. 2405\u20132413, 2016. 
DOI: https:\/\/doi.org\/10.1109\/CVPR.2016.264."},{"key":"1293_CR102","doi-asserted-by":"publisher","first-page":"3550","DOI":"10.1109\/CVPR.2018.00374","volume-title":"Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"Y P Zhou","year":"2018","unstructured":"Y. P. Zhou, Z. W. Wang, C. Fang, T. Bui, T. L. Berg. Visual to sound: Generating natural sound for videos in the wild. In Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Salt Lake City, USA, pp. 3550\u20133558, 2018. DOI: https:\/\/doi.org\/10.1109\/CVPR.2018.00374."},{"key":"1293_CR103","unstructured":"S. Mehri, K. Kumar, I. Gulrajani, R. Kumar, S. Jain, J. Sotelo, A. C. Courville, Y. Bengio. SampleRNN: An unconditional end-to-end neural audio generation model. In Proceedings of the 5th International Conference on Learning Representations, Toulon, France, 2017."},{"key":"1293_CR104","doi-asserted-by":"publisher","first-page":"52","DOI":"10.1007\/978-3-030-58610-2_4","volume-title":"Proceedings of the 16th European Conference on Computer Vision","author":"H Zhou","year":"2020","unstructured":"H. Zhou, X. D. Xu, D. H. Lin, X. G. Wang, Z. W. Liu. Sep-stereo: Visually guided stereophonic audio generation by associating source separation. In Proceedings of the 16th European Conference on Computer Vision, Springer, Glasgow, UK, pp. 52\u201369, 2020. DOI: https:\/\/doi.org\/10.1007\/978-3-030-58610-2_4."},{"key":"1293_CR105","doi-asserted-by":"publisher","DOI":"10.1109\/ICASSP.2019.8682383","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"C H Wan","year":"2019","unstructured":"C. H. Wan, S. P. Chuang, H. Y. Lee. Towards audio to scene image synthesis using generative adversarial network. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, Brighton, UK, pp.496\u2013500, 2019. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2019.8682383."},{"key":"1293_CR106","first-page":"2510","volume-title":"Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops","author":"Y Qiu","year":"2018","unstructured":"Y. Qiu, H. Kataoka. Image generation associated with music data. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, IEEE, Salt Lake City, USA, pp. 2510\u20132513, 2018."},{"key":"1293_CR107","doi-asserted-by":"crossref","unstructured":"W. L. Hao, Z. X. Zhang, H. Guan. CMCGAN: A uniform framework for cross-modal visual-audio mutual generation. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, the 30th Innovative Applications of Artificial Intelligence, and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence, New Orleans, USA, pp. 6886\u20136893, 2018.","DOI":"10.1609\/aaai.v32i1.12329"},{"issue":"3","key":"1293_CR108","doi-asserted-by":"publisher","first-page":"517","DOI":"10.1109\/JSTSP.2020.2987417","volume":"14","author":"J G Li","year":"2020","unstructured":"J. G. Li, X. F. Zhang, C. M. Jia, J. Z. Xu, L. Zhang, Y. Wang, S. W. Ma, W. Gao. Direct speech-to-image translation. IEEE Journal of Selected Topics in Signal Processing, vol. 14, no. 3, pp. 517\u2013529, 2020. DOI: https:\/\/doi.org\/10.1109\/JSTSP.2020.2987417.","journal-title":"IEEE Journal of Selected Topics in Signal Processing"},{"key":"1293_CR109","doi-asserted-by":"publisher","first-page":"850","DOI":"10.1109\/TASLP.2021.3053391","volume":"29","author":"X S Wang","year":"2021","unstructured":"X. S. 
Wang, T. T. Qiao, J. H. Zhu, A. Hanjalic, O. Scharenborg. Generating images from spoken descriptions. IEEE\/ACM Transactions on Audio, Speech, and Language Processing, vol. 29, pp. 850\u2013865, 2021. DOI: https:\/\/doi.org\/10.1109\/TASLP.2021.3053391.","journal-title":"IEEE\/ACM Transactions on Audio, Speech, and Language Processing"},{"key":"1293_CR110","doi-asserted-by":"publisher","first-page":"8633","DOI":"10.1109\/ICASSP.2019.8682970","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"A Duarte","year":"2019","unstructured":"A. Duarte, F. Roldan, M. Tubau, J. Escur, S. Pascual, A. Salvador, E. Mohedano, K. McGuinness, J. Torres, X. Giro-i-Nieto. Wav2Pix: Speech-conditioned face generation using generative adversarial networks. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE Brighton, UK, pp. 8633\u20138637, 2019. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2019.8682970."},{"key":"1293_CR111","doi-asserted-by":"publisher","first-page":"7531","DOI":"10.1109\/CVPR.2019.00772","volume-title":"Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"T H Oh","year":"2019","unstructured":"T. H. Oh, T. Dekel, C. Kim, I. Mosseri, W. T. Freeman, M. Rubinstein, W. Matusik. Speech2Face: Learning the face behind a voice. In Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Long Beach, USA, pp. 7531\u20137540, 2019. DOI: https:\/\/doi.org\/10.1109\/CVPR.2019.00772."},{"key":"1293_CR112","unstructured":"Y. D. Wen, B. Raj, R. Singh. Face reconstruction from voice using generative adversarial networks. In Proceedings of the 33rd Conference on Neural Information Processing Systems, Vancouver, Canada, pp. 5266\u20135275, 2019."},{"issue":"1","key":"1293_CR113","doi-asserted-by":"publisher","first-page":"35","DOI":"10.1007\/s12369-012-0169-4","volume":"5","author":"A A Samadani","year":"2013","unstructured":"A. A. Samadani, E. Kubica, R. Gorbet, D. Kulic. Perception and generation of affective hand movements. International Journal of Social Robotics, vol. 5, no. 1, pp. 35\u201351, 2013. DOI: https:\/\/doi.org\/10.1007\/s12369-012-0169-4.","journal-title":"International Journal of Social Robotics"},{"key":"1293_CR114","doi-asserted-by":"publisher","first-page":"363","DOI":"10.1007\/978-3-642-16958-8_34","volume-title":"Proceedings of the 3rd International Conference on Motion in Games","author":"J Tilmanne","year":"2010","unstructured":"J. Tilmanne, T. Dutoit. Expressive gait synthesis using PCA and Gaussian modeling. In Proceedings of the 3rd International Conference on Motion in Games, Springer, Utrecht, The Netherlands, pp. 363\u2013374, 2010. DOI: https:\/\/doi.org\/10.1007\/978-3-642-16958-8_34."},{"key":"1293_CR115","doi-asserted-by":"publisher","first-page":"183","DOI":"10.1145\/344779.344865","volume-title":"Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques","author":"M Brand","year":"2000","unstructured":"M. Brand, A. Hertzmann. Style machines. In Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, ACM, New Orleans, USA, pp. 183\u2013192, 2000. DOI: https:\/\/doi.org\/10.1145\/344779.344865."},{"key":"1293_CR116","doi-asserted-by":"publisher","first-page":"975","DOI":"10.1145\/1273496.1273619","volume-title":"Proceedings of the 24th International Conference on Machine Learning","author":"J M Wang","year":"2007","unstructured":"J. M. Wang, D. 
J. Fleet, A. Hertzmann. Multifactor Gaussian process models for style-content separation. In Proceedings of the 24th International Conference on Machine Learning, ACM, Corvalis, USA, pp. 975\u2013982, 2007. DOI: https:\/\/doi.org\/10.1145\/1273496.1273619."},{"key":"1293_CR117","doi-asserted-by":"publisher","DOI":"10.1145\/1553374.1553505","volume-title":"Proceedings of the 26th Annual International Conference on Machine Learning","author":"G W Taylor","year":"2009","unstructured":"G. W. Taylor, G. E. Hinton. Factored conditional restricted Boltzmann machines for modeling motion style. In Proceedings of the 26th Annual International Conference on Machine Learning, ACM, Montreal, Canada, pp. 1025\u20131032, 2009. DOI: https:\/\/doi.org\/10.1145\/1553374.1553505."},{"key":"1293_CR118","unstructured":"L. Crnkovic-Friis, L. Crnkovic-Friis. Generative choreography using deep learning. In Proceedings of the 7th International Conference on Computational Creativity, Paris, France, pp. 272\u2013277, 2016."},{"key":"1293_CR119","doi-asserted-by":"publisher","unstructured":"D. Holden, J. Saito, T. Komura. A deep learning framework for character motion synthesis and editing. ACM Transactions on Graphics, vol. 35, no. 4, Article number 138, 2016. DOI: https:\/\/doi.org\/10.1145\/2897824.2925975.","DOI":"10.1145\/2897824.2925975"},{"key":"1293_CR120","first-page":"26","volume-title":"Proceedings of the 23rd ACM SIGKDD Conference on Knowledge Discovery and Data Mining Workshop on Machine Learning for Creativity","author":"O Alemi","year":"2017","unstructured":"O. Alemi J. Fran\u00e7osse, P. Pasquier. GrooveNet: Rea-time music-driven dance movement generation using artificial neural networks. In Proceedings of the 23rd ACM SIGKDD Conference on Knowledge Discovery and Data Mining Workshop on Machine Learning for Creativity, ACM, Halifax, Canada, pp. 26, 2017."},{"key":"1293_CR121","unstructured":"J. Lee, S. Kim, K. Lee. Listen to dance: Music-driven choreography generation using autoregressive encoder-decoder network. [Online], Available: https:\/\/arxiv.org\/abs\/1811.00818, 2018."},{"key":"1293_CR122","doi-asserted-by":"publisher","first-page":"7574","DOI":"10.1109\/CVPR.2018.00790","volume-title":"Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"E Shlizerman","year":"2018","unstructured":"E. Shlizerman, L. Dery, H. Schoen, I. Kemelmacher-Shlizerman. Audio to body dynamics. In Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Salt Lake City, USA, pp. 7574\u20137583, 2018. DOI: https:\/\/doi.org\/10.1109\/CVPR.2018.00790."},{"key":"1293_CR123","doi-asserted-by":"publisher","first-page":"1598","DOI":"10.1145\/3240508.3240526","volume-title":"Proceedings of the 26th ACM International Conference on Multimedia","author":"T R Tang","year":"2018","unstructured":"T. R. Tang, J. Jia, H. Y. Mao. Dance with melody: An LSTM-autoencoder approach to music-oriented dance synthesis. In Proceedings of the 26th ACM International Conference on Multimedia, ACM, Seoul, Republic of Korea, pp. 1598\u20131606, 2018. DOI: https:\/\/doi.org\/10.1145\/3240508.3240526"},{"key":"1293_CR124","doi-asserted-by":"publisher","DOI":"10.1109\/IJCNN.2019.8851872","volume-title":"Proceedings of International Joint Conference on Neural Networks","author":"N Yalta","year":"2019","unstructured":"N. Yalta, S. Watanabe, K. Nakadai, T. Ogata. Weakly-supervised deep recurrent neural networks for basic dance step generation. 
In Proceedings of International Joint Conference on Neural Networks, IEEE, Budapest, Hungary, 2019. DOI: https:\/\/doi.org\/10.1109\/IJCNN.2019.8851872."},{"key":"1293_CR125","unstructured":"R. Kumar, J. Sotelo, K. Kumar, A. de Br\u00e9bisson, Y. Bengio. ObamaNet: Photo-realistic lip-sync from text. [Online], Available: https:\/\/arxiv.org\/abs\/1801.01442, 2017."},{"issue":"5\u20136","key":"1293_CR126","doi-asserted-by":"publisher","first-page":"602","DOI":"10.1016\/j.neunet.2005.06.042","volume":"18","author":"A Graves","year":"2005","unstructured":"A. Graves, J. Schmidhuber. Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural Networks, vol. 18, no. 5\u20136, pp. 602\u2013610, 2005. DOI: https:\/\/doi.org\/10.1016\/j.neunet.2005.06.042.","journal-title":"Neural Networks"},{"key":"1293_CR127","doi-asserted-by":"publisher","unstructured":"S. Suwajanakorn, S. M. Seitz, I. Kemelmacher-Shlizerman. Synthesizing Obama: Learning lip sync from audio. ACM Transactions on Graphics, vol. 36, no. 4, Article number 95, 2017. DOI: https:\/\/doi.org\/10.1145\/3072959.3073640.","DOI":"10.1145\/3072959.3073640"},{"issue":"11\u201312","key":"1293_CR128","doi-asserted-by":"publisher","first-page":"1767","DOI":"10.1007\/s11263-019-01150-y","volume":"127","author":"A Jamaludin","year":"2019","unstructured":"A. Jamaludin, J. S. Chung, A. Zisserman. You said that?: Synthesising talking faces from audio. International Journal of Computer Vision, vol. 127, no. 11\u201312, pp. 1767\u20131779, 2019. DOI: https:\/\/doi.org\/10.1007\/s11263-019-01150-y.","journal-title":"International Journal of Computer Vision"},{"key":"1293_CR129","unstructured":"S. A. Jalalifar, H. Hasani, H. Aghajan. Speech-driven facial reenactment using conditional generative adversarial networks. [Online], Available: https:\/\/arxiv.org\/abs\/1803.07461, 2018."},{"key":"1293_CR130","doi-asserted-by":"crossref","unstructured":"K. Vougioukas, S. Petridis, M. Pantic. End-to-end speech-driven facial animation with temporal GANs. In Proceedings of British Machine Vision Conference, Newcastle, UK, 2018.","DOI":"10.1007\/s11263-019-01251-8"},{"key":"1293_CR131","doi-asserted-by":"publisher","first-page":"2849","DOI":"10.1109\/ICCV.2017.308","volume-title":"Proceedings of IEEE International Conference on Computer Vision","author":"M Saito","year":"2017","unstructured":"M. Saito, E. Matsumoto, S. Saito. Temporal generative adversarial nets with singular value clipping. In Proceedings of IEEE International Conference on Computer Vision, IEEE, Venice, Italy, pp. 2849\u20132858, 2017. DOI: https:\/\/doi.org\/10.1109\/ICCV.2017.308."},{"key":"1293_CR132","doi-asserted-by":"publisher","first-page":"538","DOI":"10.1007\/978-3-030-01234-2_32","volume-title":"Proceedings of the 15th European Conference on Computer Vision","author":"L Chen","year":"2018","unstructured":"L. Chen, Z. H. Li, R. K. Maddox, Z. Y. Duan, C. L. Xu. Lip movements generation at a glance. In Proceedings of the 15th European Conference on Computer Vision, Springer, Munich, Germany, pp. 538\u2013553, 2018. DOI: https:\/\/doi.org\/10.1007\/978-3-030-01234-2_32."},{"key":"1293_CR133","doi-asserted-by":"publisher","unstructured":"H. Zhou, Y. Liu, Z. W. Liu, P. Luo, X. G. Wang. Talking face generation by adversarially disentangled audio-visual representation. 
In Proceedings of the 33rd AAAI Conference on Artificial Intelligence, the 31st Innovative Applications of Artificial Intelligence Conference, the 9th AAAI Symposium on Educational Advances in Artificial Intelligence, Honolulu, USA, pp. 9299\u20139306, 2019. DOI: https:\/\/doi.org\/10.1609\/aaai.v33i01.33019299.","DOI":"10.1609\/aaai.v33i01.33019299"},{"key":"1293_CR134","doi-asserted-by":"publisher","first-page":"7824","DOI":"10.1109\/CVPR.2019.00802","volume-title":"Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"L L Chen","year":"2019","unstructured":"L. L. Chen, R. K. Maddox, Z. Y. Duan, C. L. Xu. Hierarchical cross-modal talking face generation with dynamic pixel-wise loss. In Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Long Beach, USA, pp.7824\u20137833, 2019. DOI: https:\/\/doi.org\/10.1109\/CVPR.2019.00802."},{"key":"1293_CR135","doi-asserted-by":"publisher","first-page":"690","DOI":"10.1007\/978-3-030-01261-8_41","volume-title":"Proceedings of the 15th European Conference on Computer Vision","author":"O Wiles","year":"2018","unstructured":"O. Wiles, A. S. Koepke, A. Zisserman. X2Face: A network for controlling face generation using images, audio, and pose codes. In Proceedings of the 15th European Conference on Computer Vision, Springer, Munich, Germany, pp. 690\u2013706, 2018. DOI: https:\/\/doi.org\/10.1007\/978-3-030-01261-8_41."},{"key":"1293_CR136","unstructured":"S. E. Eskimez, Y. Zhang, Z. Y. Duan. Speech driven talking face generation from a single image and an emotion condition. [Online], Available: https:\/\/arxiv.org\/abs\/2008.03592, 2020."},{"key":"1293_CR137","doi-asserted-by":"publisher","first-page":"27","DOI":"10.1109\/TASLP.2019.2947741","volume":"28","author":"S E Eskimez","year":"2020","unstructured":"S. E. Eskimez, R. K. Maddox, C. L. Xu, Z. Y. Duan. Noise-resilient training method for face landmark generation from speech. IEEE\/ACM Transactions on Audio, Speech, and Language Processing, vol. 28, pp.27\u201338, 2020. DOI: https:\/\/doi.org\/10.1109\/TASLP.2019.2947741.","journal-title":"IEEE\/ACM Transactions on Audio, Speech, and Language Processing"},{"key":"1293_CR138","doi-asserted-by":"publisher","first-page":"892","DOI":"10.5555\/3157096.3157196","volume-title":"Proceedings of the 30th International Conference on Neural Information Processing Systems","author":"Y Aytar","year":"2016","unstructured":"Y. Aytar, C. Vondrick, A. Torralba. Soundnet: Learning sound representations from unlabeled video. In Proceedings of the 30th International Conference on Neural Information Processing Systems, NIPS, Barcelona, Spain, pp. 892\u2013900, 2016. DOI:https:\/\/doi.org\/10.5555\/3157096.3157196."},{"key":"1293_CR139","doi-asserted-by":"publisher","first-page":"451","DOI":"10.1007\/978-3-030-01246-5_27","volume-title":"Proceedings of the 15th European Conference on Computer Vision","author":"R Arandjelovic","year":"2018","unstructured":"R. Arandjelovic, A. Zisserman. Objects that sound. In Proceedings of the 15th European Conference on Computer Vision, Springer, Munich, Germany, pp. 451\u2013466, 2018. DOI: https:\/\/doi.org\/10.1007\/978-3-030-01246-5_27."},{"key":"1293_CR140","doi-asserted-by":"publisher","first-page":"424","DOI":"10.1109\/ASRU.2017.8268967","volume-title":"Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop","author":"K Leidal","year":"2017","unstructured":"K. Leidal, D. Harwath, J. Glass. 
Learning modality-invariant representations for speech and images. In Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop, IEEE, Okinawa, Japan, pp. 424\u2013429, 2017. DOI: https:\/\/doi.org\/10.1109\/ASRU.2017.8268967."},{"key":"1293_CR141","doi-asserted-by":"publisher","unstructured":"D. Hu, F. P. Nie, X. L. Li. Deep multimodal clustering for unsupervised audiovisual learning. In Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Long Beach, USA, pp. 9240\u20139249, 2019. DOI: https:\/\/doi.org\/10.1109\/CVPR.2019.00947.","DOI":"10.1109\/CVPR.2019.00947"},{"key":"1293_CR142","doi-asserted-by":"publisher","first-page":"639","DOI":"10.1007\/978-3-030-01231-1_39","volume-title":"Proceedings of the 15th European Conference on Computer Vision","author":"A Owens","year":"2018","unstructured":"A. Owens, A. A. Efros. Audio-visual scene analysis with self-supervised multisensory features. In Proceedings of the 15th European Conference on Computer Vision, Springer, Munich, Germany, pp. 639\u2013658, 2018. DOI: https:\/\/doi.org\/10.1007\/978-3-030-01231-1_39."},{"key":"1293_CR143","doi-asserted-by":"publisher","first-page":"41","DOI":"10.1145\/1553374.1553380","volume-title":"Proceedings of the 26th Annual International Conference on Machine Learning","author":"Y Bengio","year":"2009","unstructured":"Y. Bengio, J. Louradour, R. Collobert, J. Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ACM, Montreal, Canada, pp. 41\u201348, 2009. DOI: https:\/\/doi.org\/10.1145\/1553374.1553380."},{"key":"1293_CR144","first-page":"2518","volume-title":"Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops","author":"S Parekh","year":"2018","unstructured":"S. Parekh, S. Essid, A. Ozerov, N. Q. K. Duong, P. P\u00e9rez, G. Richard. Weakly supervised representation learning for unsynchronized audio-visual events. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition Workshops, IEEE, Salt Lake City, USA, pp. 2518\u20132519, 2018."},{"issue":"5","key":"1293_CR145","doi-asserted-by":"publisher","first-page":"603","DOI":"10.1109\/TMM.2015.2407694","volume":"17","author":"N Harte","year":"2015","unstructured":"N. Harte, E. Gillen. TCD-TIMIT: An audio-visual corpus of continuous speech. IEEE Transactions on Multimedia, vol. 17, no. 5, pp. 603\u2013615, 2015. DOI: https:\/\/doi.org\/10.1109\/TMM.2015.2407694.","journal-title":"IEEE Transactions on Multimedia"},{"key":"1293_CR146","doi-asserted-by":"publisher","first-page":"199","DOI":"10.1007\/978-3-642-01793-3_21","volume-title":"Proceedings of the 3rd International Conference on Advances in Biometrics","author":"C Sanderson","year":"2009","unstructured":"C. Sanderson, B. C. Lovell. Multi-region probabilistic histograms for robust and scalable identity inference. In Proceedings of the 3rd International Conference on Advances in Biometrics, Springer, Alghero, Italy, pp. 199\u2013208, 2009. DOI: https:\/\/doi.org\/10.1007\/978-3-642-01793-3_21."},{"key":"1293_CR147","doi-asserted-by":"publisher","unstructured":"S. R. Livingstone, F. A. Russo. The Ryerson audio-visual database of emotional speech and song (RAVDESS): A dynamic, multimodal set of facial and vocal expressions in North American English. PLoS One, vol. 13, no. 5, Article number e0196391, 2018. 
DOI: https:\/\/doi.org\/10.1371\/journal.pone.0196391.","DOI":"10.1371\/journal.pone.0196391"},{"issue":"6","key":"1293_CR148","doi-asserted-by":"publisher","first-page":"EL523","DOI":"10.1121\/1.5042758","volume":"143","author":"N Alghamdi","year":"2018","unstructured":"N. Alghamdi, S. Maddock, R. Marxer, J. Barker, G. J. Brown. A corpus of audio-visual Lombard speech with frontal and profile views. The Journal of the Acoustical Society of America, vol. 143, no. 6, pp. EL523\u2013EL529, 2018. DOI: https:\/\/doi.org\/10.1121\/1.5042758.","journal-title":"The Journal of the Acoustical Society of America"},{"issue":"7","key":"1293_CR149","doi-asserted-by":"publisher","first-page":"1254","DOI":"10.1109\/TMM.2009.2030637","volume":"11","author":"G Y Zhao","year":"2009","unstructured":"G. Y. Zhao, M. Barnard, M. Pietikainen. Lipreading with local spatiotemporal descriptors. IEEE Transactions on Multimedia, vol. 11, no. 7, pp. 1254\u20131265, 2009. DOI: https:\/\/doi.org\/10.1109\/TMM.2009.2030637.","journal-title":"IEEE Transactions on Multimedia"},{"key":"1293_CR150","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1109\/FG.2015.7163155","volume-title":"Proceedings of the 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition","author":"I Anina","year":"2015","unstructured":"I. Anina, Z. H. Zhou, G. Y. Zhao, M. Pietik\u00e4inen. OuluVs2: A multi-view audiovisual database for non-rigid mouth motion analysis. In Proceedings of the 11th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition, IEEE, Ljubljana, Slovenia, pp. 1\u20135, 2015. DOI: https:\/\/doi.org\/10.1109\/FG.2015.7163155."},{"issue":"3","key":"1293_CR151","doi-asserted-by":"publisher","first-page":"1022","DOI":"10.1109\/TPAMI.2019.2944808","volume":"43","author":"J Kossaifi","year":"2021","unstructured":"J. Kossaifi, R. Walecki, Y. Panagakis, J. Shen, M. Schmitt, F. Ringeval, J. Han, V. Pandit, A. Toisoul, B. Schuller, K. Star, E. Hajiyev, M. Pantic. SEWA DB: A rich database for audio-visual emotion and sentiment research in the wild. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 43, no. 3, pp. 1022\u20131040, 2021. DOI: https:\/\/doi.org\/10.1109\/TPAMI.2019.2944808.","journal-title":"IEEE Transactions on Pattern Analysis and Machine Intelligence"},{"key":"1293_CR152","doi-asserted-by":"publisher","first-page":"700","DOI":"10.1007\/978-3-030-58589-1_42","volume-title":"Proceedings of the 16th European Conference on Computer Vision","author":"K S Y Wang","year":"2020","unstructured":"K. S. Y. Wang, Q. Y. Wu, L. S. Song, Z. Q. Yang, W. Wu, C. Qian, R. He, Y. Qiao, C. C. Loy. Mead: A large-scale audio-visual dataset for emotional talking-face generation. In Proceedings of the 16th European Conference on Computer Vision, Springer, Glasgow, UK, pp. 700\u2013717, 2020. DOI: https:\/\/doi.org\/10.1007\/978-3-030-58589-1_42."},{"key":"1293_CR153","doi-asserted-by":"publisher","first-page":"87","DOI":"10.1007\/978-3-319-54184-6_6","volume-title":"Proceedings of the 13th Asian Conference on Computer Vision","author":"J S Chung","year":"2017","unstructured":"J. S. Chung, A. Zisserman. Lip reading in the wild. In Proceedings of the 13th Asian Conference on Computer Vision, Springer, Taipei, China, pp. 87\u2013103, 2017. DOI: https:\/\/doi.org\/10.1007\/978-3-319-54184-6_6."},{"key":"1293_CR154","volume-title":"Proceedings of British Machine Vision Conference 2017","author":"J S Chung","year":"2017","unstructured":"J. S. Chung, A. Zisserman. 
Lip reading in profile. In Proceedings of British Machine Vision Conference 2017, BMVA Press, London, UK, 2017."},{"key":"1293_CR155","doi-asserted-by":"publisher","unstructured":"A. Nagrani, J. S. Chung, A. Zisserman. VoxCeleb: A large-scale speaker identification dataset. In Proceedings of the 18th Annual Conference of the International Speech Communication Association, Stockholm, Sweden, pp. 2616\u20132620, 2017. DOI: https:\/\/doi.org\/10.21437\/Interspeech.2017-950.","DOI":"10.21437\/Interspeech.2017-950"},{"key":"1293_CR156","doi-asserted-by":"publisher","first-page":"1086","DOI":"10.21437\/Interspeech.2018-1929","volume-title":"Proceedings of the 19th Annual Conference of the International Speech Communication Association","author":"J S Chung","year":"2018","unstructured":"J. S. Chung, A. Nagrani, A. Zisserman. VoxCeleb2: Deep speaker recognition. In Proceedings of the 19th Annual Conference of the International Speech Communication Association, Hyderabad, India, pp. 1086\u20131090, 2018. DOI: https:\/\/doi.org\/10.21437\/Interspeech.2018-1929."},{"key":"1293_CR157","doi-asserted-by":"publisher","first-page":"3718","DOI":"10.1109\/ICCVW.2019.00460","volume-title":"Proceedings of IEEE\/CVF International Conference on Computer Vision Workshop","author":"J Roth","year":"2019","unstructured":"J. Roth, S. Chaudhuri, O. Klejch, R. Marvin, A. Gallagher, L. Kaver, S. Ramaswamy, A. Stopczynski, C. Schmid, Z. H. Xi, C. Pantofaru. Supplementary material: AVA-ActiveSpeaker: An audio-visual dataset for active speaker detection. In Proceedings of IEEE\/CVF International Conference on Computer Vision Workshop, IEEE, Seoul, Korea, pp. 3718\u20133722, 2019. DOI: https:\/\/doi.org\/10.1109\/ICCVW.2019.00460."},{"key":"1293_CR158","unstructured":"O. Gillet, G. Richard. ENST-drums: An extensive audio-visual database for drum signals processing. In Proceedings of the 7th International Conference on Music Information Retrieval, Victoria, Canada, pp. 156\u2013159, 2006."},{"key":"1293_CR159","unstructured":"A. Bazzica, J. C. van Gemert, C. C. S. Liem, A. Hanjalic. Vision-based detection of acoustic timed events: A case study on clarinet note onsets. [Online], Available: https:\/\/arxiv.org\/abs\/1706.09556, 2017."},{"issue":"2","key":"1293_CR160","doi-asserted-by":"publisher","first-page":"522","DOI":"10.1109\/TMM.2018.2856090","volume":"21","author":"B C Li","year":"2019","unstructured":"B. C. Li, X. Z. Liu, K. Dinesh, Z. Y. Duan, G. Sharma. Creating a multitrack classical music performance dataset for multimodal music analysis: Challenges, insights, and applications. IEEE Transactions on Multimedia, vol. 21, no. 2, pp. 522\u2013535, 2019. DOI: https:\/\/doi.org\/10.1109\/TMM.2018.2856090.","journal-title":"IEEE Transactions on Multimedia"},{"key":"1293_CR161","unstructured":"W. Kay, J. Carreira, K. Simonyan, B. Zhang, C. Hillier, S. Vijayanarasimhan, F. Viola, T. Green, T. Back, P. Natsev, M. Suleyman, A. Zisserman. The kinetics human action video dataset. [Online], Available: https:\/\/arxiv.org\/abs\/1705.06950, 2017."},{"key":"1293_CR162","unstructured":"J. Carreira, E. Noland, A. Banki-Horvath, C. Hillier, A. Zisserman. A short note about kinetics-600. [Online], Available: https:\/\/arxiv.org\/abs\/1808.01340, 2018."},{"key":"1293_CR163","unstructured":"J. Carreira, E. Noland, C. Hillier, A. Zisserman. A short note on the kinetics-700 human action dataset. 
[Online], Available: https:\/\/arxiv.org\/abs\/1907.06987, 2019."},{"key":"1293_CR164","doi-asserted-by":"publisher","first-page":"6047","DOI":"10.1109\/CVPR.2018.00633","volume-title":"Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition","author":"C H Gu","year":"2018","unstructured":"C. H. Gu, C. Sun, D. A. Ross, C. Vondrick, C. Pantofaru, Y. Q. Li, S. Vijayanarasimhan, G. Toderici, S. Ricco, R. Sukthankar, C. Schmid, J. Malik. AVA: A video dataset of spatio-temporally localized atomic visual actions. In Proceedings of IEEE\/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Salt Lake City, USA, pp. 6047\u20136056, 2018. DOI: https:\/\/doi.org\/10.1109\/CVPR.2018.00633."},{"key":"1293_CR165","doi-asserted-by":"publisher","first-page":"776","DOI":"10.1109\/ICASSP.2017.7952261","volume-title":"Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing","author":"J F Gemmeke","year":"2017","unstructured":"J. F. Gemmeke, D. P. W. Ellis, D. Freedman, A. Jansen, W. Lawrence, R. C. Moore, M. Plakal, M. Ritter. Audio set: An ontology and human-labeled dataset for audio events. In Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, New Orleans, USA, pp. 776\u2013780, 2017. DOI: https:\/\/doi.org\/10.1109\/ICASSP.2017.7952261."},{"key":"1293_CR166","doi-asserted-by":"publisher","first-page":"193","DOI":"10.1007\/978-3-030-11018-5_18","volume-title":"Proceedings of European Conference on Computer Vision","author":"J Lee","year":"2019","unstructured":"J. Lee, A. Natsev, W. Reade, R. Sukthankar, G. Toderici. The 2nd youtube-8m large-scale video understanding challenge. In Proceedings of European Conference on Computer Vision, Springer, Munich, Germany, pp. 193\u2013205, 2019. DOI: https:\/\/doi.org\/10.1007\/978-3-030-11018-5_18."},{"key":"1293_CR167","doi-asserted-by":"publisher","first-page":"843","DOI":"10.1109\/ICCV.2017.97","volume-title":"Proceedings of IEEE International Conference on Computer Vision","author":"C Sun","year":"2017","unstructured":"C. Sun, A. Shrivastava, S. Singh, A. Gupta. Revisiting unreasonable effectiveness of data in deep learning era. In Proceedings of IEEE International Conference on Computer Vision, IEEE, Venice, Italy, pp.843\u2013852, 2017. DOI: https:\/\/doi.org\/10.1109\/ICCV.2017.97."},{"key":"1293_CR168","doi-asserted-by":"crossref","unstructured":"O. M. Parkhi, A. Vedaldi, A. Zisserman. Deep face recognition. 
In Proceedings of British Machine Vision Conference, Swansea, UK, 2015.","DOI":"10.5244\/C.29.41"}],"container-title":["International Journal of Automation and Computing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11633-021-1293-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s11633-021-1293-0\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s11633-021-1293-0.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2023,1,31]],"date-time":"2023-01-31T12:19:06Z","timestamp":1675167546000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s11633-021-1293-0"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,4,15]]},"references-count":168,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2021,6]]}},"alternative-id":["1293"],"URL":"https:\/\/doi.org\/10.1007\/s11633-021-1293-0","relation":{},"ISSN":["1476-8186","1751-8520"],"issn-type":[{"value":"1476-8186","type":"print"},{"value":"1751-8520","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,4,15]]},"assertion":[{"value":"4 December 2020","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 March 2021","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"15 April 2021","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}}]}}