{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,4,9]],"date-time":"2026-04-09T01:11:13Z","timestamp":1775697073903,"version":"3.50.1"},"reference-count":55,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2024,2,1]],"date-time":"2024-02-01T00:00:00Z","timestamp":1706745600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"},{"start":{"date-parts":[[2024,2,1]],"date-time":"2024-02-01T00:00:00Z","timestamp":1706745600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.springernature.com\/gp\/researchers\/text-and-data-mining"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Multimedia Systems"],"published-print":{"date-parts":[[2024,2]]},"DOI":"10.1007\/s00530-023-01248-x","type":"journal-article","created":{"date-parts":[[2024,2,3]],"date-time":"2024-02-03T01:28:18Z","timestamp":1706923698000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":16,"title":["A defensive attention mechanism to detect deepfake content across multiple modalities"],"prefix":"10.1007","volume":"30","author":[{"given":"S.","family":"Asha","sequence":"first","affiliation":[]},{"given":"P.","family":"Vinod","sequence":"additional","affiliation":[]},{"given":"Varun G.","family":"Menon","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2024,2,3]]},"reference":[{"issue":"4","key":"1248_CR1","doi-asserted-by":"publisher","first-page":"3974","DOI":"10.1007\/s10489-022-03766-z","volume":"53","author":"M Masood","year":"2023","unstructured":"Masood, M., Nawaz, M., Malik, K.M., Javed, A., Irtaza, A., Malik, H.: Deepfakes generation and detection: state-of-the-art, open challenges, countermeasures, and way forward. Appl. Intell. 53(4), 3974\u20134026 (2023)","journal-title":"Appl. Intell."},{"issue":"4","key":"1248_CR2","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1145\/3072959.3073640","volume":"36","author":"S Suwajanakorn","year":"2017","unstructured":"Suwajanakorn, S., Seitz, S.M., Kemelmacher-Shlizerman, I.: Synthesizing obama: learning lip sync from audio. ACM Trans. Graph. (ToG) 36(4), 1\u201313 (2017)","journal-title":"ACM Trans. Graph. (ToG)"},{"key":"1248_CR3","unstructured":"News Desk. Fabricated video of vladimir putin takes twitter by storm. 2020. https:\/\/www.globalvillagespace.com\/fabricated-video-of-vladimir-putin-takes-twitter-by-storm. Accessed 27 Aug 2023"},{"key":"1248_CR4","first-page":"484","volume-title":"A lip sync expert is all you need for speech to lip generation in the wild","author":"K Prajwal","year":"2020","unstructured":"Prajwal, K., Mukhopadhyay, R., Namboodiri, V.P., Jawahar, C.: A lip sync expert is all you need for speech to lip generation in the wild. In: Proceedings of the 28th ACM International Conference on Multimedia, pp. 484\u2013492. Seattle WA, USA (2020)"},{"key":"1248_CR5","unstructured":"Jia, Y., Zhang, Y., Weiss, R., Wang, Q., Shen, J., Ren, F., Nguyen, P., Pang, R., Moreno, I. Lopez, Wu, Y. et al.: Transfer learning from speaker verification to multispeaker text-to-speech synthesis. In: Advances in neural information processing systems, pp. 
1\u201311 (2018)"},{"key":"1248_CR6","first-page":"630","volume-title":"Multi-feature based emotion recognition for video clips","author":"C Liu","year":"2018","unstructured":"Liu, C., Tang, T., Lv, K., Wang, M.: Multi-feature based emotion recognition for video clips. In: Proceedings of the 20th ACM International Conference on Multimodal Interaction, pp. 630\u2013634. Boulder CO USA (2018)"},{"key":"1248_CR7","doi-asserted-by":"crossref","unstructured":"Lu, C., Zheng, W., Li, C., Tang, C., Liu, S., Yan, S., Zong, Y.: Multiple spatio-temporal feature learning for video-based emotion recognition in the wild. In: Proceedings of the 20th ACM International Conference on Multimodal Interaction, pp. 646\u2013652 (2018)","DOI":"10.1145\/3242969.3264992"},{"issue":"11","key":"1248_CR8","doi-asserted-by":"publisher","first-page":"65","DOI":"10.1109\/35.41402","volume":"27","author":"BP Yuhas","year":"1989","unstructured":"Yuhas, B.P., Goldstein, M.H., Sejnowski, T.J.: Integration of acoustic and visual speech signals using neural networks. IEEE Commun. Mag. 27(11), 65\u201371 (1989)","journal-title":"IEEE Commun. Mag."},{"key":"1248_CR9","doi-asserted-by":"publisher","first-page":"366","DOI":"10.1109\/AFGR.1998.670976","volume-title":"Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition","author":"LS Chen","year":"1998","unstructured":"Chen, L.S., Huang, T.S., Miyasato, T., Nakatsu, R.: Multimodal human emotion\/expression recognition. Proceedings Third IEEE International Conference on Automatic Face and Gesture Recognition, pp. 366\u2013371. IEEE (1998)"},{"key":"1248_CR10","volume-title":"Thirteenth Annual Conference of the International Speech Communication Association","author":"Y Attabi","year":"2012","unstructured":"Attabi, Y., Dumouchel, P.: Anchor models and wccn normalization for speaker trait classification. In: Thirteenth Annual Conference of the International Speech Communication Association., Oregon, USA (2012)"},{"key":"1248_CR11","unstructured":"Liang, P.P., Salakhutdinov, R., Morency, L.-P.: Computational modeling of human multimodal language: the mosei dataset and interpretable dynamic fusion. In: First Workshop and Grand Challenge on Computational Modeling of Human Multimodal Language, Melbourne (2018)"},{"key":"1248_CR12","doi-asserted-by":"crossref","unstructured":"Zadeh, A., Liang, P.P., Mazumder, N., Poria, S., Cambria, E., Morency, L.-P.: Memory fusion network for multi-view sequential learning. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32(1). Washington, DC, USA, (2018)]","DOI":"10.1609\/aaai.v32i1.12021"},{"key":"1248_CR13","doi-asserted-by":"publisher","first-page":"213","DOI":"10.1007\/978-3-030-87664-7_10","volume-title":"Handbook of Digital Face Manipulation and Detection","author":"R Roy","year":"2022","unstructured":"Roy, R., Joshi, I., Das, A., Dantcheva, A.: 3d cnn architectures and attention mechanisms for deepfake detection. Handbook of Digital Face Manipulation and Detection, pp. 213\u2013234. Springer, Cham (2022)"},{"key":"1248_CR14","doi-asserted-by":"crossref","unstructured":"Das, A., Das, S., Dantcheva, A.: Demystifying attention mechanisms for deepfake detection. In: 2021 16th IEEE International Conference on Automatic Face and Gesture Recognition (FG 2021), pp. 1\u20137. IEEE (2021)","DOI":"10.1109\/FG52635.2021.9667026"},{"key":"1248_CR15","unstructured":"Korshunov, P., Marcel, S.: Deepfakes: a new threat to face recognition? Assessment and detection. 
arXiv preprint arXiv:1812.08685 (2018)"},{"key":"1248_CR16","first-page":"3207","volume-title":"Celeb-df: a large-scale challenging dataset for deepfake forensics","author":"Y Li","year":"2020","unstructured":"Li, Y., Yang, X., Sun, P., Qi, H., Lyu, S.: Celeb-df: a large-scale challenging dataset for deepfake forensics. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, pp. 3207\u20133216. California, USA (2020)"},{"key":"1248_CR17","doi-asserted-by":"crossref","unstructured":"Rossler, A., Cozzolino, D., Verdoliva, L., Riess, C., Thies, J., Nie\u00dfner, M.: Faceforensics++: Learning to detect manipulated facial images. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 1\u201311 (2019)","DOI":"10.1109\/ICCV.2019.00009"},{"issue":"2","key":"1248_CR18","first-page":"3","volume":"1","author":"N Dufour","year":"2019","unstructured":"Dufour, N., Gully, A.: Contributing data to deepfake detection research. Google AI Blog 1(2), 3 (2019)","journal-title":"Google AI Blog"},{"key":"1248_CR19","unstructured":"Dolhansky, B., Howes, R., Pflaum, B., Baram, N., Ferrer, C.C.: The deepfake detection challenge (dfdc) preview dataset. arXiv preprint arXiv:1910.08854 (2019)"},{"key":"1248_CR20","unstructured":"Khalid, H., Tariq, S., Kim, M., Woo, S.S.: Fakeavceleb: a novel audio-video multimodal deepfake dataset. arXiv preprint arXiv:2108.05080 (2021)"},{"key":"1248_CR21","doi-asserted-by":"crossref","unstructured":"Mittal, T., Bhattacharya, U., Chandra, R., Bera, A., Manocha, D.: \u201cEmotions don\u201d lie: a deepfake detection method using audio-visual affective cues. In: Proceedings of the 28th ACM International Conference on Multimedia. ACM, pp. 2823\u20132832 (2020)","DOI":"10.1145\/3394171.3413570"},{"key":"1248_CR22","doi-asserted-by":"crossref","unstructured":"Lewis, J.K., Toubal, I.E., Chen, H., Sandesera, V., Lomnitz, M., Hampel-Arias, Z., Prasad, C., Palaniappan, K.: Deepfake video detection based on spatial, spectral, and temporal inconsistencies using multimodal deep learning. In: IEEE Applied Imagery Pattern Recognition Workshop (AIPR), vol. 2020, pp. 1\u20139. IEEE (2020)","DOI":"10.1109\/AIPR50011.2020.9425167"},{"key":"1248_CR23","doi-asserted-by":"crossref","unstructured":"Lomnitz, M., Hampel-Arias, Z., Sandesara, V., Hu, S.: Multimodal approach for deepfake detection. In: IEEE Applied Imagery Pattern Recognition Workshop (AIPR), vol. 2020, pp. 1\u20139. IEEE (2020)","DOI":"10.1109\/AIPR50011.2020.9425192"},{"key":"1248_CR24","first-page":"439","volume-title":"Not made for each other-audio-visual dissonance-based deepfake detection and localization","author":"K Chugh","year":"2020","unstructured":"Chugh, K., Gupta, P., Dhall, A., Subramanian, R.: Not made for each other-audio-visual dissonance-based deepfake detection and localization. In: Proceedings of the 28th ACM International Conference on Multimedia, pp. 439\u2013447. United States, Seattle (2020)"},{"key":"1248_CR25","doi-asserted-by":"crossref","unstructured":"Hosler, B., Salvi, D., Murray, A., Antonacci, F., Bestagini, P., Tubaro, S., Stamm, M.C.: Do deepfakes feel emotions? A semantic approach to detecting deepfakes via emotional inconsistencies. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern Recognition, California, USA, pp. 
1013\u20131022 (2021)","DOI":"10.1109\/CVPRW53098.2021.00112"},{"key":"1248_CR26","doi-asserted-by":"crossref","unstructured":"Khalid, H., Kim, M., Tariq, S., Woo, S.S.: Evaluation of an audio-video multimodal deepfake dataset using unimodal and multimodal detectors. In: Proceedings of the 1st Workshop on Synthetic Multimedia-Audiovisual Deepfake Generation and Detection, pp. 7\u201315 (2021)","DOI":"10.1145\/3476099.3484315"},{"key":"1248_CR27","doi-asserted-by":"crossref","unstructured":"Liu, X., Yu, Y., Li, X., Zhao, Y.: Mcl: multimodal contrastive learning for deepfake detection. IEEE Transactions on Circuits and Systems for Video Technology (2023)","DOI":"10.1109\/TCSVT.2023.3312738"},{"key":"1248_CR28","first-page":"1","volume":"27","author":"V Mnih","year":"2014","unstructured":"Mnih, V., Heess, N., Graves, A., et al.: Recurrent models of visual attention. Adv. Neural Inf. Process. Syst. 27, 1\u20139 (2014)","journal-title":"Adv. Neural Inf. Process. Syst."},{"key":"1248_CR29","doi-asserted-by":"crossref","unstructured":"Wang, F., Jiang, M., Qian, C., Yang, S., Li, C., Zhang, H., Wang, X., Tang,  X.: Residual attention network for image classification. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Hawaii, USA, pp. 3156\u20133164 (2017)","DOI":"10.1109\/CVPR.2017.683"},{"key":"1248_CR30","volume-title":"Audiovisual transformer with instance attention for audio-visual event localization","author":"Y-B Lin","year":"2020","unstructured":"Lin, Y.-B., Wang, Y.-C.F.: Audiovisual transformer with instance attention for audio-visual event localization. In: Proceedings of the Asian Conference on Computer Vision, Macao, China (2020)"},{"key":"1248_CR31","doi-asserted-by":"publisher","first-page":"171","DOI":"10.1016\/j.neucom.2018.01.007","volume":"284","author":"H Choi","year":"2018","unstructured":"Choi, H., Cho, K., Bengio, Y.: Fine-grained attention mechanism for neural machine translation. Neurocomputing 284, 171\u2013176 (2018)","journal-title":"Neurocomputing"},{"issue":"14","key":"1248_CR32","doi-asserted-by":"publisher","first-page":"20533","DOI":"10.1007\/s11042-019-7404-z","volume":"78","author":"H Ge","year":"2019","unstructured":"Ge, H., Yan, Z., Yu, W., Sun, L.: An attention mechanism based convolutional lstm network for video action recognition. Multimed. Tools Appl. 78(14), 20533\u201320556 (2019)","journal-title":"Multimed. Tools Appl."},{"key":"1248_CR33","doi-asserted-by":"crossref","unstructured":"Hsiao, P.-W., Chen, C.-P.: Effective attention mechanism in dynamic models for speech emotion recognition. In: IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), vol. 2018, pp. 2526\u20132530. IEEE (2018)","DOI":"10.1109\/ICASSP.2018.8461431"},{"key":"1248_CR34","doi-asserted-by":"publisher","first-page":"1","DOI":"10.1007\/s10044-022-01083-2","volume":"25","author":"S Ganguly","year":"2022","unstructured":"Ganguly, S., Mohiuddin, S., Malakar, S., Cuevas, E., Sarkar, R.: Visual attention-based deepfake video forgery detection. Pattern Anal. Appl. 25, 1\u201312 (2022)","journal-title":"Pattern Anal. Appl."},{"key":"1248_CR35","doi-asserted-by":"crossref","unstructured":"Zhou, Y., Lim, S.-N.: Joint audio-visual deepfake detection. In: Proceedings of the IEEE\/CVF International Conference on Computer Vision, pp. 
14800\u201314809 (2021)","DOI":"10.1109\/ICCV48922.2021.01453"},{"key":"1248_CR36","doi-asserted-by":"crossref","unstructured":"Yu, Y., Liu, X., Ni, R., Yang, S., Zhao, Y., Kot, A.C.: Pvass-mdd: predictive visual-audio alignment self-supervision for multimodal deepfake detection. IEEE Transactions on Circuits and Systems for Video Technology (2023)","DOI":"10.1109\/TCSVT.2023.3309899"},{"issue":"6","key":"1248_CR37","doi-asserted-by":"publisher","first-page":"122","DOI":"10.3390\/jimaging9060122","volume":"9","author":"D Salvi","year":"2023","unstructured":"Salvi, D., Liu, H., Mandelli, S., Bestagini, P., Zhou, W., Zhang, W., Tubaro, S.: A robust approach to multimodal deepfake detection. J. Imaging 9(6), 122 (2023)","journal-title":"J. Imaging"},{"key":"1248_CR38","unstructured":"Kharel, A., Paranjape, M., Bera, A.: Df-transfusion: Multimodal deepfake detection via lip-audio cross-attention and facial self-attention. arXiv preprint arXiv:2309.06511 (2023)"},{"key":"1248_CR39","unstructured":"Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)"},{"key":"1248_CR40","doi-asserted-by":"crossref","unstructured":"Machado, G.R., Silva, E., Goldschmidt, R.R.: A non-deterministic method to construct ensemble-based classifiers to protect decision support systems against adversarial images: a case study. In: Proceedings of the XV Brazilian Symposium on Information Systems. ACM, p.\u00a072 (2019)","DOI":"10.1145\/3330204.3330282"},{"key":"1248_CR41","unstructured":"\u201cDlib python api tutorials link,\u201d http:\/\/dlib.net\/python\/index.html (2015)"},{"issue":"1\u20133","key":"1248_CR42","doi-asserted-by":"publisher","first-page":"185","DOI":"10.1016\/0004-3702(81)90024-2","volume":"17","author":"BK Horn","year":"1981","unstructured":"Horn, B.K., Schunck, B.G.: Determining optical flow. Artif. Intell. 17(1\u20133), 185\u2013203 (1981)","journal-title":"Artif. Intell."},{"key":"1248_CR43","unstructured":"Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)"},{"key":"1248_CR44","doi-asserted-by":"crossref","unstructured":"Dang, H., Liu, F., Stehouwer, J., Liu, X., Jain, A.K.: On the detection of digital face manipulation. In: Proceedings of the IEEE\/CVF Conference on Computer Vision and Pattern recognition, USA, pp. 5781\u20135790 (2020)","DOI":"10.1109\/CVPR42600.2020.00582"},{"issue":"10","key":"1248_CR45","doi-asserted-by":"publisher","first-page":"2965","DOI":"10.1016\/j.patcog.2008.05.008","volume":"41","author":"D O\u2019Shaughnessy","year":"2008","unstructured":"O\u2019Shaughnessy, D.: Automatic speech recognition: history, methods and challenges. Pattern Recognit. 41(10), 2965\u20132979 (2008)","journal-title":"Pattern Recognit."},{"issue":"4","key":"1248_CR46","doi-asserted-by":"publisher","first-page":"396","DOI":"10.1109\/TAFFC.2017.2661284","volume":"9","author":"Y Baveye","year":"2017","unstructured":"Baveye, Y., Chamaret, C., Dellandr\u00e9a, E., Chen, L.: Affective video content analysis: a multidisciplinary insight. IEEE Trans. Affect. Comput. 9(4), 396\u2013409 (2017)","journal-title":"IEEE Trans. Affect. Comput."},{"key":"1248_CR47","first-page":"1","volume":"28","author":"JK Chorowski","year":"2015","unstructured":"Chorowski, J.K., Bahdanau, D., Serdyuk, D., Cho, K., Bengio, Y.: Attention-based models for speech recognition. Adv. Neural Inf. Process. Syst. 28, 1\u20139 (2015)","journal-title":"Adv. Neural Inf. Process. 
Syst."},{"key":"1248_CR48","doi-asserted-by":"publisher","first-page":"118 530","DOI":"10.1109\/ACCESS.2019.2936817","volume":"7","author":"J Chen","year":"2019","unstructured":"Chen, J., Jiang, D., Zhang, Y.: A hierarchical bidirectional gru model with attention for eeg-based emotion classification. IEEE Access 7, 118 530-118 540 (2019)","journal-title":"IEEE Access"},{"key":"1248_CR49","doi-asserted-by":"crossref","unstructured":"Chung, J.S., Nagrani, A., Zisserman, A.: Voxceleb2: deep speaker recognition. arXiv preprint arXiv:1806.05622 (2018)","DOI":"10.21437\/Interspeech.2018-1929"},{"key":"1248_CR50","unstructured":"Tan, M., Le, Q.: Efficientnet: rethinking model scaling for convolutional neural networks. In: International Conference on Machine Learning. PMLR, pp. 6105\u20136114 (2019)"},{"key":"1248_CR51","volume-title":"Thirty-first AAAI Conference on Artificial Intelligence","author":"C Szegedy","year":"2017","unstructured":"Szegedy, C., Ioffe, S., Vanhoucke, V., Alemi, A.A.: Inception-v4, inception-resnet and the impact of residual connections on learning. In: Thirty-first AAAI Conference on Artificial Intelligence. California, USA (2017)"},{"key":"1248_CR52","doi-asserted-by":"crossref","unstructured":"Chollet, F.: Xception: deep learning with depthwise separable convolutions. In: Proceedings of the IEEE conference on computer vision and pattern recognition, Hawaii, USA, pp. 1251\u20131258 (2017)","DOI":"10.1109\/CVPR.2017.195"},{"key":"1248_CR53","doi-asserted-by":"crossref","unstructured":"Afchar, D., Nozick, V., Yamagishi, J., Echizen, I.: Mesonet: a compact facial video forgery detection network. In: IEEE International Workshop on Information Forensics and Security (WIFS), vol. 2018, pp. 1\u20137. IEEE (2018)","DOI":"10.1109\/WIFS.2018.8630761"},{"key":"1248_CR54","doi-asserted-by":"crossref","unstructured":"Tran, D., Wang, H., Torresani, L., Ray, J., LeCun, Y., Paluri, M.: A closer look at spatiotemporal convolutions for action recognition. In: Proceedings of the IEEE\/CVF International Conference on\nComputer Vision and Pattern Recognition, Hawaii, USA, pp. 6450\u20136459 (2018)","DOI":"10.1109\/CVPR.2018.00675"},{"issue":"1","key":"1248_CR55","doi-asserted-by":"publisher","first-page":"5","DOI":"10.1109\/T-AFFC.2011.20","volume":"3","author":"G McKeown","year":"2011","unstructured":"McKeown, G., Valstar, M., Cowie, R., Pantic, M., Schroder, M.: The semaine database: annotated multimodal records of emotionally colored conversations between a person and a limited agent. IEEE Trans. Affect. Comput. 3(1), 5\u201317 (2011)","journal-title":"IEEE Trans. Affect. 
Comput."}],"container-title":["Multimedia Systems"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00530-023-01248-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s00530-023-01248-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s00530-023-01248-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2024,2,14]],"date-time":"2024-02-14T06:25:56Z","timestamp":1707891956000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s00530-023-01248-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2024,2]]},"references-count":55,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2024,2]]}},"alternative-id":["1248"],"URL":"https:\/\/doi.org\/10.1007\/s00530-023-01248-x","relation":{},"ISSN":["0942-4962","1432-1882"],"issn-type":[{"value":"0942-4962","type":"print"},{"value":"1432-1882","type":"electronic"}],"subject":[],"published":{"date-parts":[[2024,2]]},"assertion":[{"value":"9 July 2023","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"19 December 2023","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"3 February 2024","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"The authors declare no competing interests.\u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0\u00a0","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Conflict of interest"}}],"article-number":"56"}}