{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,26]],"date-time":"2025-12-26T07:16:11Z","timestamp":1766733371065,"version":"3.41.0"},"reference-count":47,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2025,6,6]],"date-time":"2025-06-06T00:00:00Z","timestamp":1749168000000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2025,6,6]],"date-time":"2025-06-06T00:00:00Z","timestamp":1749168000000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"name":"Manipal University Jaipur"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["Discov Internet Things"],"abstract":"<jats:title>Abstract<\/jats:title>\n          <jats:p>Home automation systems are popular because they enhance quality of life and the way users interact with their environment. Deploying complex machine learning models on Internet of Things (IoT) devices with limited resources is still difficult. This study proposes a home automation system based on a TinyML (Tiny Machine Learning) model that recognizes specific spoken keywords. The developed model runs effectively on IoT devices, which usually have limited resources. TinyML addresses the memory, processing-power, and latency constraints associated with IoT devices. The objective of this research is to train a keyword-spotting model for devices with low computation and memory. The trained TinyML model can recognize specific voice commands associated with home automation tasks, such as controlling lights, thermostats, and other appliances. To test our approach, we ran experiments in real-world settings and on edge IoT devices with limited resources. 
The results show that our keyword-spotting model is both highly accurate and efficient, using minimal computational resources. This research advances TinyML applications in home automation and broadens the potential for voice interaction in constrained environments. The keyword-spotting model in the proposed system is built using a Deep Convolutional Neural Network (DCNN). Different data pre-processing techniques are also applied to refine the dataset. The trained model is then converted for deployment on low-resource devices without compromising its efficiency. The model attains a 96.67% test accuracy. The model is quantized for devices with limited resources. It operates with an 11 ms latency, using 19.8\u00a0K of RAM and 55.0\u00a0K of flash, to recognize and classify users\u2019 voice commands in real time. This demonstrates how TinyML can create efficient and user-friendly smart home solutions. The main contribution of the work presented in this paper is that the designed model can be deployed on a wide range of IoT devices. However, the model is trained on voice instructions in a single language, which limits its robustness. 
In future work, this limitation can be eliminated by integrating multilingual instructions.<\/jats:p>","DOI":"10.1007\/s43926-025-00165-x","type":"journal-article","created":{"date-parts":[[2025,6,6]],"date-time":"2025-06-06T08:05:59Z","timestamp":1749197159000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":1,"title":["Voice-activated home automation system for IoT edge devices using TinyML"],"prefix":"10.1007","volume":"5","author":[{"given":"Timothy","family":"Malche","sequence":"first","affiliation":[]},{"given":"Sandeep","family":"Budhani","sequence":"additional","affiliation":[]},{"given":"Pramod Kumar","family":"Soni","sequence":"additional","affiliation":[]},{"given":"Govind Murari","family":"Upadhyay","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2025,6,6]]},"reference":[{"key":"165_CR1","doi-asserted-by":"publisher","first-page":"4169","DOI":"10.1109\/ACCESS.2021.3139508","volume":"10","author":"Z-H Pez-Espejo Tan","year":"2022","unstructured":"Pez-Espejo Tan Z-H, Hansen JHL, Jensen J. Deep spoken keyword spotting: an overview. IEEE Access. 2022;10:4169\u201399. https:\/\/doi.org\/10.1109\/ACCESS.2021.3139508.","journal-title":"IEEE Access"},{"issue":"6","key":"165_CR2","doi-asserted-by":"publisher","first-page":"7521","DOI":"10.1007\/s10586-024-04351-4","volume":"27","author":"A Heidari","year":"2024","unstructured":"Heidari A, Shishehlou H, Darbandi M, Navimipour NJ, Yalcin S. A reliable method for data aggregation on the industrial internet of things using a hybrid optimization algorithm and density correlation degree. Clust Comput. 2024;27(6):7521\u201339.","journal-title":"Clust Comput"},{"key":"165_CR3","volume-title":"Towards the internet of things: architectures, security, and applications","author":"M Jabraeil Jamali","year":"2019","unstructured":"Jabraeil Jamali M, Bahrami B, Heidari A, Allahverdizadeh P, Norouzi F. 
Towards the internet of things: architectures, security, and applications. Springer; 2019."},{"key":"165_CR4","unstructured":"Zhang Y, Suda N, Lai L, Chandra V. Hello edge: Keyword spotting on microcontrollers. CoRR; 2017. abs\/1711.07128. arXiv:1711.07128"},{"key":"165_CR5","doi-asserted-by":"publisher","unstructured":"Guam\u00e1n S, Calvopi\u00f1a A, Orta P, Tapia F, Yoo SG. Device control system for a smart home using voice commands: A practical case. In: Proceedings of the 2018 10th international conference on information management and engineering. ICIME 2018. Association for Computing Machinery, New York, NY, USA; 2018. p. 86\u20139. https:\/\/doi.org\/10.1145\/3285957.3285977 .","DOI":"10.1145\/3285957.3285977"},{"key":"165_CR6","doi-asserted-by":"crossref","unstructured":"Jose C, Wang J, Strimel GP, Khursheed MO, Mishchenko Y, Kulis B. Latency control for keyword spotting; 2022. arXiv preprint arXiv:2206.07261.","DOI":"10.21437\/Interspeech.2022-10608"},{"key":"165_CR7","unstructured":"Ahmed S, Shumailov I, Papernot N, Fawaz K. Towards more robust keyword spotting for voice assistants. In: 31st USENIX security symposium (USENIX Security 22); 2022. p. 2655\u201372."},{"key":"165_CR8","doi-asserted-by":"publisher","unstructured":"Bushur JIM. Hardware\/software co-design for keyword spotting on edge devices; 2023. https:\/\/doi.org\/10.25394\/PGS.22701319.v1","DOI":"10.25394\/PGS.22701319.v1"},{"issue":"6","key":"165_CR9","doi-asserted-by":"publisher","first-page":"219","DOI":"10.3390\/fi15060219","volume":"15","author":"J Bushur","year":"2023","unstructured":"Bushur J, Chen C. Neural network exploration for keyword spotting on edge devices. Future Internet. 2023;15(6):219.","journal-title":"Future Internet"},{"key":"165_CR10","unstructured":"Wang J, Li S. Keyword spotting system and evaluation of pruning and quantization methods on low-power edge microcontrollers; 2022. 
arxiv:2208.02765"},{"key":"165_CR11","doi-asserted-by":"publisher","unstructured":"Chen G, Parada C, Heigold G. Small-footprint keyword spotting using deep neural networks. In: 2014 IEEE international conference on acoustics, speech and signal processing (ICASSP); 2014. p. 4087\u201391. https:\/\/doi.org\/10.1109\/ICASSP.2014.6854370","DOI":"10.1109\/ICASSP.2014.6854370"},{"issue":"1","key":"165_CR12","doi-asserted-by":"publisher","first-page":"4087","DOI":"10.1155\/2020\/4579291","volume":"2020","author":"R Majeed","year":"2020","unstructured":"Majeed R, Abdullah NA, Ashraf I, Zikria YB, Mushtaq MF, Umer M. An intelligent secure and smart home automation system. Sci Progr. 2020;2020(1):4087\u201391. https:\/\/doi.org\/10.1155\/2020\/4579291.","journal-title":"Sci Progr"},{"key":"165_CR13","unstructured":"Wang Z, Li X, Zhou J. Small-footprint keyword spotting using deep neural network and connectionist temporal classifier. CoRR; 2017. abs\/1709.03665. arxiv:1709.03665."},{"key":"165_CR14","doi-asserted-by":"crossref","unstructured":"Choi S, Seo S, Shin B, Byun H, Kersner M, Kim B, Kim D, Ha S. Temporal convolution for real-time keyword spotting on mobile devices. CoRR; 2019. abs\/1904.03814. arxiv:1904.03814.","DOI":"10.21437\/Interspeech.2019-1363"},{"key":"165_CR15","doi-asserted-by":"publisher","unstructured":"Leroy D, Coucke A, Lavril T, Gisselbrecht T, Dureau J. Federated learning for keyword spotting. In: ICASSP 2019\u20132019 IEEE international conference on acoustics, speech and signal processing (ICASSP); 2019. p. 6341\u20135. https:\/\/doi.org\/10.1109\/ICASSP.2019.8683546","DOI":"10.1109\/ICASSP.2019.8683546"},{"key":"165_CR16","unstructured":"Bluche T, Primet M, Gisselbrecht T. Small-footprint open-vocabulary keyword spotting with quantized LSTM networks. CoRR; 2020. abs\/2002.10851. arxiv:2002.10851."},{"key":"165_CR17","doi-asserted-by":"crossref","unstructured":"Hard A, Partridge K, Nguyen C, Subrahmanya N, Shah A, Zhu P, Moreno IL, Mathews R. 
Training keyword spotting models on non-IID data with federated learning; 2020. arxiv:2005.10406","DOI":"10.21437\/Interspeech.2020-3023"},{"key":"165_CR18","doi-asserted-by":"publisher","unstructured":"Mittermaier S, K\u00fcrzinger L, Waschneck B, Rigoll G. Small-footprint keyword spotting on raw audio data with sinc-convolutions. In: ICASSP 2020\u20142020 IEEE international conference on acoustics, speech and signal processing (ICASSP); 2020. p. 7454\u20138. https:\/\/doi.org\/10.1109\/ICASSP40776.2020.9053395","DOI":"10.1109\/ICASSP40776.2020.9053395"},{"key":"165_CR19","doi-asserted-by":"publisher","unstructured":"Rybakov O, Kononenko N, Subrahmanya N, Visontai M, Laurenzo S. Streaming keyword spotting on mobile devices. In: Interspeech 2020. ISCA; 2020. https:\/\/doi.org\/10.21437\/interspeech.2020-1003","DOI":"10.21437\/interspeech.2020-1003"},{"key":"165_CR20","doi-asserted-by":"publisher","first-page":"600","DOI":"10.1007\/s10766-021-00712-3","volume":"49","author":"R Stahl","year":"2021","unstructured":"Stahl R, Hoffman A, Mueller-Gritschneder D, Gerstlauer A, Schlichtmann U. Deeperthings fully distributed CNN inference on resource-constrained edge devices. Int J Parallel Progr. 2021;49:600\u201324. https:\/\/doi.org\/10.1007\/s10766-021-00712-3.","journal-title":"Int J Parallel Progr"},{"key":"165_CR21","doi-asserted-by":"publisher","first-page":"4169","DOI":"10.1109\/ACCESS.2021.3139508","volume":"10","author":"LZH Espejo","year":"2022","unstructured":"Espejo LZH, Hansen JHL, Jensen J. Deep spoken keyword spotting: an overview. IEEE Access. 2022;10:4169\u201399. https:\/\/doi.org\/10.1109\/ACCESS.2021.3139508.","journal-title":"IEEE Access"},{"key":"165_CR22","doi-asserted-by":"publisher","unstructured":"Liu Z, Li T, Zhang P. RNN-T based open-vocabulary keyword spotting in mandarin with multi-level detection. In: ICASSP 2021\u20142021 IEEE international conference on acoustics, speech and signal processing (ICASSP); 2021. p. 5649\u201353. 
https:\/\/doi.org\/10.1109\/ICASSP39728.2021.9413588","DOI":"10.1109\/ICASSP39728.2021.9413588"},{"key":"165_CR23","doi-asserted-by":"publisher","unstructured":"Shrivastava A, Kundu A, Dhir C, Naik D, Tuzel O. Optimize what matters: Training DNN-HMM keyword spotting model using end metric. In: ICASSP 2021\u20142021 IEEE international conference on acoustics, speech and signal processing (ICASSP); 2021. p. 4000\u20134. https:\/\/doi.org\/10.1109\/ICASSP39728.2021.9414797","DOI":"10.1109\/ICASSP39728.2021.9414797"},{"key":"165_CR24","doi-asserted-by":"publisher","DOI":"10.1155\/2022\/7437023","author":"TH Riku Immonen","year":"2022","unstructured":"Riku Immonen TH. Tiny machine learning for resource-constrained microcontrollers. J Sens. 2022. https:\/\/doi.org\/10.1155\/2022\/7437023.","journal-title":"J Sens"},{"key":"165_CR25","doi-asserted-by":"publisher","DOI":"10.3390\/mi13060851","author":"NN Alajlan","year":"2022","unstructured":"Alajlan NN, Ibrahim DM. Tinyml: enabling of inference deep learning models on ultra-low-power IoT edge devices for ai applications. Micromachines. 2022. https:\/\/doi.org\/10.3390\/mi13060851.","journal-title":"Micromachines"},{"key":"165_CR26","doi-asserted-by":"publisher","DOI":"10.3390\/electronics11162571","author":"K He","year":"2022","unstructured":"He K, Chen D, Su T. A configurable accelerator for keyword spotting based on small-footprint temporal efficient neural network. Electronics. 2022. https:\/\/doi.org\/10.3390\/electronics11162571.","journal-title":"Electronics"},{"key":"165_CR27","doi-asserted-by":"publisher","unstructured":"Ng D, Chen Y, Tian B, Fu Q, Chng ES. Convmixer: feature interactive convolution with curriculum learning for small footprint and noisy far-field keyword spotting. In: ICASSP 2022\u20142022 IEEE international conference on acoustics, speech and signal processing (ICASSP); 2022. p. 3603\u20137. 
https:\/\/doi.org\/10.1109\/ICASSP43922.2022.9747025","DOI":"10.1109\/ICASSP43922.2022.9747025"},{"key":"165_CR28","unstructured":"Ahmed S, Shumailov I, Papernot N, Fawaz K. Towards more robust keyword spotting for voice assistants. In: 31st USENIX security symposium (USENIX Security 22). USENIX Association, Boston, MA; 2022. p. 2655\u201372. https:\/\/www.usenix.org\/conference\/usenixsecurity22\/presentation\/ahmed"},{"key":"165_CR29","doi-asserted-by":"publisher","DOI":"10.54489\/ijcim.v2i1.75","author":"A Alzoubi","year":"2022","unstructured":"Alzoubi A. Machine learning for intelligent energy consumption in smart homes. Int J Comput Inf Manuf IJCIM. 2022. https:\/\/doi.org\/10.54489\/ijcim.v2i1.75.","journal-title":"Int J Comput Inf Manuf IJCIM"},{"key":"165_CR30","doi-asserted-by":"publisher","first-page":"186456","DOI":"10.1109\/ACCESS.2019.2960948","volume":"7","author":"B Liu","year":"2019","unstructured":"Liu B, Wang Z, Zhu W, Sun Y, Shen Z, Huang L, Li Y, Gong Y, Ge W. An ultra-low power always-on keyword spotting accelerator using quantized convolutional neural network and voltage-domain analog switching network-based approximate computing. IEEE Access. 2019;7:186456\u201369. https:\/\/doi.org\/10.1109\/ACCESS.2019.2960948.","journal-title":"IEEE Access"},{"key":"165_CR31","doi-asserted-by":"publisher","unstructured":"Park DS, Chan W, Zhang Y, Chiu C-C, Zoph B, Cubuk ED, Le QV. Specaugment: a simple data augmentation method for automatic speech recognition. In: Interspeech; 2019. https:\/\/doi.org\/10.21437\/interspeech.2019-2680","DOI":"10.21437\/interspeech.2019-2680"},{"key":"165_CR32","doi-asserted-by":"publisher","DOI":"10.1016\/j.apacoust.2020.107389","volume":"167","author":"Z Mushtaq","year":"2020","unstructured":"Mushtaq Z, Su SF. Environmental sound classification using a regularized deep convolutional neural network with data augmentation. Appl Acoust. 2020;167: 107389. 
https:\/\/doi.org\/10.1016\/j.apacoust.2020.107389.","journal-title":"Appl Acoust"},{"issue":"1","key":"165_CR33","doi-asserted-by":"publisher","first-page":"151","DOI":"10.1109\/JSSC.2020.3029097","volume":"56","author":"W Shan","year":"2021","unstructured":"Shan W, Yang M, Wang T, Lu Y, Cai H, Zhu L, Xu J, Wu C, Shi L, Yang J. A 510-nw wake-up keyword-spotting chip using serial-FFT-based MFCC and binarized depthwise separable CNN in 28-nm cmos. IEEE J Solid-State Circ. 2021;56(1):151\u201364. https:\/\/doi.org\/10.1109\/JSSC.2020.3029097.","journal-title":"IEEE J Solid-State Circ"},{"key":"165_CR34","doi-asserted-by":"publisher","first-page":"22","DOI":"10.1016\/j.neunet.2020.06.015","volume":"130","author":"M Deng","year":"2020","unstructured":"Deng M, Meng T, Cao J, Wang S, Zhang J, Fan H. Heart sound classification based on improved MFCC features and convolutional recurrent neural networks. Neural Netw. 2020;130:22\u201332. https:\/\/doi.org\/10.1016\/j.neunet.2020.06.015.","journal-title":"Neural Netw"},{"key":"165_CR35","doi-asserted-by":"publisher","unstructured":"Xiang L, Lu S, Wang X, Liu H, Pang W, Yu H. Implementation of LSTM accelerator for speech keywords recognition. In: 2019 IEEE 4th international conference on integrated circuits and microsystems (ICICM). 2019. p. 195\u2013198. https:\/\/doi.org\/10.1109\/ICICM48536.2019.8977176","DOI":"10.1109\/ICICM48536.2019.8977176"},{"key":"165_CR36","first-page":"1","volume":"5","author":"K Kaur","year":"2015","unstructured":"Kaur K, Jain N. Feature extraction and classification for automatic speaker recognition system\u2014a review. Int J Adv Res Comput Sci Softw Eng. 2015;5:1\u20136.","journal-title":"Int J Adv Res Comput Sci Softw Eng"},{"key":"165_CR37","doi-asserted-by":"publisher","DOI":"10.3390\/jlpea11020018","author":"J Lei","year":"2021","unstructured":"Lei J, Rahman T, Shafik R, Wheeldon A, Yakovlev A, Granmo O-C, Kawsar F, Mathur A. Low-power audio keyword spotting using Tsetlin machines. 
J Low Power Electr Appl. 2021. https:\/\/doi.org\/10.3390\/jlpea11020018.","journal-title":"J Low Power Electr Appl"},{"key":"165_CR38","unstructured":"Audio MFCC. https:\/\/docs.edgeimpulse.com\/docs\/edge-impulse-studio\/processing-blocks\/audio-mfcc. Accessed 1 Mar 2024."},{"key":"165_CR39","doi-asserted-by":"crossref","unstructured":"Arik SO, Kliegl M, Child R, Hestness J, Gibiansky A, Fougner C, Prenger R, Coates A. Convolutional recurrent neural networks for small-footprint keyword spotting; 2017. arXiv preprint arXiv:1703.05390.","DOI":"10.21437\/Interspeech.2017-1737"},{"issue":"18","key":"165_CR40","doi-asserted-by":"publisher","first-page":"3964","DOI":"10.3390\/electronics12183964","volume":"12","author":"J Yoon","year":"2023","unstructured":"Yoon J, Kim N, Lee D, Lee S-J, Kwak G-H, Kim T-H. A resource-efficient keyword spotting system based on a one-dimensional binary convolutional neural network. Electronics. 2023;12(18):3964.","journal-title":"Electronics"},{"key":"165_CR41","doi-asserted-by":"crossref","unstructured":"Daniel S Park WC. SpecAugment: a new data augmentation method for automatic speech recognition; 2019. https:\/\/research.google\/blog\/specaugment-a-new-data-augmentation-method-for-automatic-speech-recognition\/. Accessed 18 Mar 2025.","DOI":"10.21437\/Interspeech.2019-2680"},{"key":"165_CR42","doi-asserted-by":"publisher","unstructured":"Suda N, Chandra V, Dasika G, Mohanty A, Ma Y, Vrudhula S, Seo J-S, Cao Y. Throughput-optimized opencl-based FPGA accelerator for large-scale convolutional neural networks. In: Proceedings of the 2016 ACM\/SIGDA international symposium on field-programmable gate arrays. FPGA\u201916. Association for computing machinery, New York, NY, USA; 2016. p. 16\u201325. https:\/\/doi.org\/10.1145\/2847263.2847276","DOI":"10.1145\/2847263.2847276"},{"key":"165_CR43","doi-asserted-by":"publisher","unstructured":"Qiu J, Wang J, Yao S, Guo K, Li B, Zhou E, Yu J, Tang T, Xu N, Song S, Wang Y, Yang H. 
Going deeper with embedded fpga platform for convolutional neural network. In: Proceedings of the 2016 ACM\/SIGDA international symposium on field-programmable gate arrays. FPGA\u201916. Association for computing machinery, New York, NY, USA; 2016. p. 26\u201335. https:\/\/doi.org\/10.1145\/2847263.2847265","DOI":"10.1145\/2847263.2847265"},{"key":"165_CR44","unstructured":"Lai L, Suda N, Chandra V. Deep convolutional neural network inference with floating-point weights and fixed-point activations; 2017. arxiv:1703.03073"},{"key":"165_CR45","unstructured":"Arduino Nano 33 BLE sense overview. https:\/\/gilberttanner.com\/blog\/arduino-nano-33-ble-sense-overview\/. Accessed 20 Apr 2025."},{"key":"165_CR46","doi-asserted-by":"publisher","DOI":"10.1016\/j.apacoust.2021.108283","volume":"183","author":"A Javed","year":"2021","unstructured":"Javed A, Malik KM, Irtaza A, Malik H. Towards protecting cyber-physical and IoT systems from single- and multi-order voice spoofing attacks. Appl Acoust. 2021;183: 108283. https:\/\/doi.org\/10.1016\/j.apacoust.2021.108283.","journal-title":"Appl Acoust"},{"key":"165_CR47","doi-asserted-by":"crossref","unstructured":"Ayaz F, Zakariyya I, Cano J, Keoh SL, Singer J, Pau D, Kharbouche-Harrari M. Improving robustness against adversarial attacks with deeply quantized neural networks; 2023. 
arxiv:2304.12829","DOI":"10.1109\/IJCNN54540.2023.10191429"}],"container-title":["Discover Internet of Things"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43926-025-00165-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1007\/s43926-025-00165-x\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1007\/s43926-025-00165-x.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,6]],"date-time":"2025-06-06T08:06:06Z","timestamp":1749197166000},"score":1,"resource":{"primary":{"URL":"https:\/\/link.springer.com\/10.1007\/s43926-025-00165-x"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2025,6,6]]},"references-count":47,"journal-issue":{"issue":"1","published-online":{"date-parts":[[2025,12]]}},"alternative-id":["165"],"URL":"https:\/\/doi.org\/10.1007\/s43926-025-00165-x","relation":{},"ISSN":["2730-7239"],"issn-type":[{"value":"2730-7239","type":"electronic"}],"subject":[],"published":{"date-parts":[[2025,6,6]]},"assertion":[{"value":"12 February 2025","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"22 May 2025","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"6 June 2025","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Declarations"}},{"value":"Not applicable.","order":2,"name":"Ethics","group":{"name":"EthicsHeading","label":"Ethics approval and consent to participate"}},{"value":"Not 
applicable.","order":3,"name":"Ethics","group":{"name":"EthicsHeading","label":"Consent for publication"}},{"value":"The authors declare no competing interests.","order":4,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"68"}}