{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,6,5]],"date-time":"2025-06-05T04:53:21Z","timestamp":1749099201380,"version":"3.37.3"},"reference-count":34,"publisher":"Springer Science and Business Media LLC","issue":"3","license":[{"start":{"date-parts":[[2022,9,8]],"date-time":"2022-09-08T00:00:00Z","timestamp":1662595200000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2022,9,8]],"date-time":"2022-09-08T00:00:00Z","timestamp":1662595200000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100000266","name":"Engineering and Physical Sciences Research Council","doi-asserted-by":"publisher","award":["EP\/L015463\/1","EP\/R032718\/1"],"award-info":[{"award-number":["EP\/L015463\/1","EP\/R032718\/1"]}],"id":[{"id":"10.13039\/501100000266","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100003816","name":"Huawei Technologies","doi-asserted-by":"publisher","id":[{"id":"10.13039\/501100003816","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["AI Ethics"],"published-print":{"date-parts":[[2023,8]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Automatic prediction of human attributions of valence and arousal using facial recognition technologies can improve human\u2013computer and human\u2013robot interaction. However, data protection has become an issue of great concern in affect recognition using facial images, as the facial identities of people (i.e. recognising who a person is) could be exposed in the process. For instance, malicious individuals could exploit facial images of users to assume their identities and infiltrate biometric authentication systems. Possible solutions to protect the facial identity of users are to: (1) extract anonymised facial features in users\u2019 local machines, namely action units (AU) of facial images, discard their facial images and send the AUs to the developer for processing, and (2) employ a federated learning approach i.e. process users\u2019 facial images in their local machines and only send their locally trained models back to the developer\u2019s machine for augmenting the final model. In this paper, we implement and compare the performance of these privacy-preserving strategies for affect recognition. 
Results on the popular RECOLA affective datasets show promising affect recognition performance in adopting a federated learning approach to protect users\u2019 identities, with Concordance Correlation Coefficient of 0.426 for valence and 0.390 for arousal.<\/jats:p>","DOI":"10.1007\/s43681-022-00215-y","type":"journal-article","created":{"date-parts":[[2022,9,8]],"date-time":"2022-09-08T17:05:55Z","timestamp":1662656755000},"page":"937-946","update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":6,"title":["Facial identity protection using deep learning technologies: an application in affective computing"],"prefix":"10.1007","volume":"3","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-4070-534X","authenticated-orcid":false,"given":"Jimiama M.","family":"Mase","sequence":"first","affiliation":[]},{"given":"Natalie","family":"Leesakul","sequence":"additional","affiliation":[]},{"given":"Grazziela P.","family":"Figueredo","sequence":"additional","affiliation":[]},{"given":"Mercedes Torres","family":"Torres","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2022,9,8]]},"reference":[{"issue":"1","key":"215_CR1","doi-asserted-by":"publisher","first-page":"205395172090438","DOI":"10.1177\/2053951720904386","volume":"7","author":"A McStay","year":"2020","unstructured":"McStay, A.: Emotional ai, soft biometrics and the surveillance of emotional life: An unusual consensus on privacy. Big Data Soc. 7(1), 2053951720904386 (2020)","journal-title":"Big Data Soc."},{"issue":"2","key":"215_CR2","doi-asserted-by":"publisher","first-page":"223","DOI":"10.1109\/TAFFC.2017.2695999","volume":"10","author":"DH Kim","year":"2017","unstructured":"Kim, D.H., Baddar, W.J., Jang, J., Ro, Y.M.: Multi-objective based spatio-temporal feature representation learning robust to expression intensity variations for facial expression recognition. IEEE Trans. Affect. Comput. 10(2), 223\u2013236 (2017)","journal-title":"IEEE Trans. Affect. Comput."},{"key":"215_CR3","doi-asserted-by":"publisher","first-page":"101","DOI":"10.1016\/j.patrec.2018.04.010","volume":"115","author":"N Jain","year":"2018","unstructured":"Jain, N., Kumar, S., Kumar, A., Shamsolmoali, P., Zareapoor, M.: Hybrid deep neural networks for face emotion recognition. Pattern Recogn. Lett. 115, 101\u2013106 (2018)","journal-title":"Pattern Recogn. Lett."},{"key":"215_CR4","doi-asserted-by":"publisher","first-page":"105754","DOI":"10.1016\/j.aap.2020.105754","volume":"146","author":"JM Mase","year":"2020","unstructured":"Mase, J.M., Majid, S., Mesgarpour, M., Torres, M.T., Figueredo, G.P., Chapman, P.: Evaluating the impact of heavy goods vehicle driver monitoring and coaching to reduce risky behaviour. Accid. Anal. Prev. 146, 105754 (2020)","journal-title":"Accid. Anal. Prev."},{"doi-asserted-by":"crossref","unstructured":"Bishop, J.: Supporting communication between people with social orientation impairments using affective computing technologies: rethinking the autism spectrum. In: Assistive Technologies for Physical and Cognitive Disabilities, pp. 42\u201355. IGI Global (2015)","key":"215_CR5","DOI":"10.4018\/978-1-4666-7373-1.ch003"},{"key":"215_CR6","doi-asserted-by":"publisher","first-page":"98","DOI":"10.1016\/j.inffus.2017.02.003","volume":"37","author":"S Poria","year":"2017","unstructured":"Poria, S., Cambria, E., Bajpai, R., Hussain, A.: A review of affective computing: from unimodal analysis to multimodal fusion. Inf. 
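Strategy (2) in the abstract is, at its core, federated averaging: raw images never leave a client; each client updates the current global model on its own data, and the server combines the returned parameters, typically weighted by local dataset size. The paper's reference list points to the Flower framework [33] and TensorFlow [34] for this; the sketch below is only a minimal NumPy illustration of that aggregation loop under our own assumptions (a toy linear model, simulated client data, and the hypothetical names local_train and federated_average), not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    def local_train(weights, X, y, lr=0.01, epochs=5):
        """One client's update: a few epochs of gradient descent on a linear
        least-squares model. The private data (X, y) never leaves this function;
        only the updated weights are returned to the server."""
        w = weights.copy()
        for _ in range(epochs):
            grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of mean squared error
            w -= lr * grad
        return w

    def federated_average(client_weights, client_sizes):
        """Server step: average client models, weighted by local dataset size."""
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    # Simulate three clients holding private features (stand-ins for per-user
    # video frames or AU vectors) and continuous labels (e.g. valence ratings).
    true_w = rng.normal(size=4)
    clients = []
    for n in (50, 80, 120):
        X = rng.normal(size=(n, 4))
        clients.append((X, X @ true_w + 0.1 * rng.normal(size=n)))

    # Communication rounds: broadcast the global model, collect local updates,
    # aggregate. Only model parameters cross the client/server boundary.
    global_w = np.zeros(4)
    for _ in range(20):
        updates = [local_train(global_w, X, y) for X, y in clients]
        global_w = federated_average(updates, [len(y) for _, y in clients])

    print("true:", np.round(true_w, 2), "recovered:", np.round(global_w, 2))

The privacy property rests on the boundary marked in the comments: the server sees model weights, never frames or AU vectors.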
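The reported scores use the Concordance Correlation Coefficient, which, unlike plain Pearson correlation, also penalises systematic shifts in mean and scale between predictions and gold annotations. Its standard definition is CCC = 2*cov(x, y) / (var(x) + var(y) + (mean(x) - mean(y))^2), with 1 indicating perfect concordance. Below is a direct NumPy translation of that definition; the function name and the population-variance convention are our assumptions, not taken from the paper.

    import numpy as np

    def ccc(y_true, y_pred):
        """Concordance Correlation Coefficient between gold annotations and
        predictions; 1.0 means agreement in correlation, mean and scale."""
        y_true = np.asarray(y_true, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        mt, mp = y_true.mean(), y_pred.mean()
        vt, vp = y_true.var(), y_pred.var()          # population (biased) variances
        cov = ((y_true - mt) * (y_pred - mp)).mean()
        return 2 * cov / (vt + vp + (mt - mp) ** 2)

    # Sanity check: identical sequences give 1.0, while a constant offset
    # lowers CCC even though Pearson correlation would remain 1.0.
    x = np.linspace(-1.0, 1.0, 100)
    print(ccc(x, x))        # 1.0
    print(ccc(x, x + 0.5))  # about 0.73

Against this metric, the paper's federated model reaches 0.426 for valence and 0.390 for arousal on RECOLA.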
Declarations

Article history: Received 25 May 2022; accepted 21 August 2022; first published online 8 September 2022.

Conflict of interest: On behalf of all authors, the corresponding author states that there is no conflict of interest. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.

Full text: https://link.springer.com/10.1007/s43681-022-00215-y

References

1. McStay, A.: Emotional AI, soft biometrics and the surveillance of emotional life: an unusual consensus on privacy. Big Data Soc. 7(1) (2020)
2. Kim, D.H., Baddar, W.J., Jang, J., Ro, Y.M.: Multi-objective based spatio-temporal feature representation learning robust to expression intensity variations for facial expression recognition. IEEE Trans. Affect. Comput. 10(2), 223–236 (2017)
3. Jain, N., Kumar, S., Kumar, A., Shamsolmoali, P., Zareapoor, M.: Hybrid deep neural networks for face emotion recognition. Pattern Recogn. Lett. 115, 101–106 (2018)
4. Mase, J.M., Majid, S., Mesgarpour, M., Torres, M.T., Figueredo, G.P., Chapman, P.: Evaluating the impact of heavy goods vehicle driver monitoring and coaching to reduce risky behaviour. Accid. Anal. Prev. 146, 105754 (2020)
5. Bishop, J.: Supporting communication between people with social orientation impairments using affective computing technologies: rethinking the autism spectrum. In: Assistive Technologies for Physical and Cognitive Disabilities, pp. 42–55. IGI Global (2015)
6. Poria, S., Cambria, E., Bajpai, R., Hussain, A.: A review of affective computing: from unimodal analysis to multimodal fusion. Inf. Fusion 37, 98–125 (2017)
7. Breuer, R., Kimmel, R.: A deep learning perspective on the origin of facial expressions. arXiv preprint arXiv:1705.01842 (2017)
8. Sun, Y., Wang, X., Tang, X.: Deep learning face representation from predicting 10,000 classes. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1891–1898 (2014)
9. McStay, A.: Empathic media and advertising: industry, policy, legal and citizen perspectives (the case for intimacy). Big Data Soc. 3(2) (2016)
10. Tzirakis, P., Trigeorgis, G., Nicolaou, M.A., Schuller, B.W., Zafeiriou, S.: End-to-end multimodal emotion recognition using deep neural networks. IEEE J. Sel. Top. Signal Process. 11(8), 1301–1309 (2017)
11. Chao, L., Tao, J., Yang, M., Li, Y., Wen, Z.: Long short term memory recurrent neural network based multimodal dimensional emotion recognition. In: Proceedings of the 5th International Workshop on Audio/Visual Emotion Challenge, pp. 65–72 (2015)
12. Khorrami, P., Paine, T.L., Brady, K., Dagli, C., Huang, T.S.: How deep neural networks can improve emotion recognition on video data. In: 2016 IEEE International Conference on Image Processing (ICIP), pp. 619–623. IEEE (2016)
13. Lee, J., Kim, S., Kim, S., Sohn, K.: Spatiotemporal attention based deep neural networks for emotion recognition. In: 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1513–1517. IEEE (2018)
14. Ortega, J.D.S., Senoussaoui, M., Granger, E., Pedersoli, M., Cardinal, P., Koerich, A.L.: Multimodal fusion with deep neural networks for audio-video emotion recognition. arXiv preprint arXiv:1907.03196 (2019)
15. Valstar, M., Gratch, J., Schuller, B., Ringeval, F., Lalanne, D., Torres, M.T., Scherer, S., Stratou, G., Cowie, R., Pantic, M.: AVEC 2016: depression, mood, and emotion recognition workshop and challenge. In: Proceedings of the 6th International Workshop on Audio/Visual Emotion Challenge, pp. 3–10 (2016)
16. Han, J., Zhang, Z., Cummins, N., Ringeval, F., Schuller, B.: Strength modelling for real-world automatic continuous affect recognition from audiovisual signals. Image Vis. Comput. 65, 76–86 (2017)
17. Sun, P., Li, Y., Qi, H., Lyu, S.: LandmarkGAN: synthesizing faces from landmarks. arXiv preprint arXiv:2011.00269 (2020)
18. Choi, J., Medioni, G., Lin, Y., Silva, L., Regina, O., Pamplona, M., Faltemier, T.C.: 3D face reconstruction using a single or multiple views. In: 2010 20th International Conference on Pattern Recognition (ICPR), pp. 3959–3962. IEEE (2010)
19. Fan, Y., Lam, J.C.K., Li, V.O.K.: Demographic effects on facial emotion expression: an interdisciplinary investigation of the facial action units of happiness. Sci. Rep. 11(1), 1–11 (2021)
20. Jaiswal, M., Provost, E.M.: Privacy enhanced multimodal neural representations for emotion recognition. In: Proceedings of the AAAI Conference on Artificial Intelligence 34, pp. 7985–7993 (2020)
21. Yang, Q., Liu, Y., Cheng, Y., Kang, Y., Chen, T., Yu, H.: Federated learning. Synth. Lect. Artif. Intell. Mach. Learn. 13(3), 1–207 (2019)
22. Latif, S., Khalifa, S., Rana, R., Jurdak, R.: Federated learning for speech emotion recognition applications. In: 2020 19th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN), pp. 341–342. IEEE (2020)
23. Can, Y.S., Ersoy, C.: Privacy-preserving federated deep learning for wearable IoT-based biomedical monitoring. ACM Trans. Internet Technol. 21(1), 1–17 (2021)
24. Xu, X., Peng, H., Sun, L., Bhuiyan, M.Z.A., Liu, L., He, L.: FedMood: federated learning on mobile health data for mood detection. arXiv preprint arXiv:2102.09342 (2021)
25. Chhikara, P., Singh, P., Tekchandani, R., Kumar, N., Guizani, M.: Federated learning meets human emotions: a decentralized framework for human–computer interaction for IoT applications. IEEE Internet Things J. 8(8), 6949–6962 (2020)
26. Baltrusaitis, T., Zadeh, A., Lim, Y.C., Morency, L.-P.: OpenFace 2.0: facial behavior analysis toolkit. In: 2018 13th IEEE International Conference on Automatic Face & Gesture Recognition (FG 2018), pp. 59–66. IEEE (2018)
27. Ringeval, F., Sonderegger, A., Sauer, J., Lalanne, D.: Introducing the RECOLA multimodal corpus of remote collaborative and affective interactions. In: 2013 10th IEEE International Conference and Workshops on Automatic Face and Gesture Recognition (FG), pp. 1–8. IEEE (2013)
28. Chung, J., Gulcehre, C., Cho, K., Bengio, Y.: Gated feedback recurrent neural networks. In: International Conference on Machine Learning, pp. 2067–2075 (2015)
29. Tzirakis, P., Chen, J., Zafeiriou, S., Schuller, B.: End-to-end multimodal affect recognition in real-world environments. Inf. Fusion 68, 46–53 (2021)
30. Mase, J.M., Chapman, P., Figueredo, G.P., Torres, M.T.: Benchmarking deep learning models for driver distraction detection. In: International Conference on Machine Learning, Optimization, and Data Science, pp. 103–117. Springer (2020)
31. Bengio, Y., Simard, P., Frasconi, P.: Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 5(2), 157–166 (1994)
32. Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vis. 115(3), 211–252 (2015)
33. Beutel, D.J., Topal, T., Mathur, A., Qiu, X., Parcollet, T., Lane, N.D.: Flower: a friendly federated learning research framework. arXiv preprint arXiv:2007.14390 (2020)
34. Abadi, M., Agarwal, A., Barham, P., Brevdo, E., Chen, Z., Citro, C., et al.: TensorFlow: federated learning (2015). Software available from tensorflow.org