{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2026,2,13]],"date-time":"2026-02-13T23:10:19Z","timestamp":1771024219464,"version":"3.50.1"},"reference-count":47,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2021,4,12]],"date-time":"2021-04-12T00:00:00Z","timestamp":1618185600000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2021,4,12]],"date-time":"2021-04-12T00:00:00Z","timestamp":1618185600000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"funder":[{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61571435"],"award-info":[{"award-number":["61571435"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]},{"DOI":"10.13039\/501100001809","name":"National Natural Science Foundation of China","doi-asserted-by":"publisher","award":["61801468"],"award-info":[{"award-number":["61801468"]}],"id":[{"id":"10.13039\/501100001809","id-type":"DOI","asserted-by":"publisher"}]}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J AUDIO SPEECH MUSIC PROC."],"published-print":{"date-parts":[[2021,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Deep learning-based speech enhancement algorithms have shown their powerful ability in removing both stationary and non-stationary noise components from noisy speech observations. But they often introduce artificial residual noise, especially when the training target does not contain the phase information, e.g., ideal ratio mask, or the clean speech magnitude and its variations. 
It is well-known that once the power of the residual noise components exceeds the noise masking threshold of the human auditory system, the perceptual speech quality may degrade. One intuitive way is to further suppress the residual noise components by a postprocessing scheme. However, the highly non-stationary nature of this kind of residual noise makes the noise power spectral density (PSD) estimation a challenging problem. To solve this problem, the paper proposes three strategies to estimate the noise PSD frame by frame, and then the residual noise can be removed effectively by applying a gain function based on the <jats:italic>decision-directed<\/jats:italic> approach. The objective measurement results show that the proposed postfiltering strategies outperform the conventional postfilter in terms of segmental signal-to-noise ratio (SNR) as well as speech quality improvement. Moreover, the AB subjective listening test shows that the preference percentages of the proposed strategies are over 60%.<\/jats:p>","DOI":"10.1186\/s13636-021-00204-9","type":"journal-article","created":{"date-parts":[[2021,4,12]],"date-time":"2021-04-12T17:03:00Z","timestamp":1618246980000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":9,"title":["Low-complexity artificial noise suppression methods for deep learning-based speech enhancement 
algorithms"],"prefix":"10.1186","volume":"2021","author":[{"given":"Yuxuan","family":"Ke","sequence":"first","affiliation":[]},{"given":"Andong","family":"Li","sequence":"additional","affiliation":[]},{"given":"Chengshi","family":"Zheng","sequence":"additional","affiliation":[]},{"given":"Renhua","family":"Peng","sequence":"additional","affiliation":[]},{"given":"Xiaodong","family":"Li","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2021,4,12]]},"reference":[{"issue":"12","key":"204_CR1","doi-asserted-by":"publisher","first-page":"1849","DOI":"10.1109\/TASLP.2014.2352935","volume":"22","author":"Y. Wang","year":"2014","unstructured":"Y. Wang, A. Narayanan, D. Wang, On training targets for supervised speech separation. IEEE\/ACM Trans. Audio Speech Lang. Process.22(12), 1849\u20131858 (2014).","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"issue":"5","key":"204_CR2","doi-asserted-by":"publisher","first-page":"2604","DOI":"10.1121\/1.4948445","volume":"139","author":"J. Chen","year":"2016","unstructured":"J. Chen, Y. Wang, S. E. Yoho, D. Wang, E. W. Healy, Large-scale training to increase speech intelligibility for hearing-impaired listeners in novel noises. J. Acoust. Soc. Am.139(5), 2604\u20132612 (2016).","journal-title":"J. Acoust. Soc. Am."},{"key":"204_CR3","doi-asserted-by":"crossref","unstructured":"X. Li, R. Horaud, in Interspeech 2020. International speech communication association (ISCA). Online monaural speech enhancement using delayed subband lstm (Shanghai, 2020), pp. 2462\u20132466.","DOI":"10.21437\/Interspeech.2020-2091"},{"key":"204_CR4","doi-asserted-by":"crossref","unstructured":"N. L. Westhausen, B. T. Meyer, in Interspeech 2020. International speech communication association (ISCA). Dual-signal transformation lstm network for real-time noise suppression (Shanghai, 2020), pp. 
2477\u20132481.","DOI":"10.21437\/Interspeech.2020-2631"},{"issue":"10","key":"204_CR5","doi-asserted-by":"publisher","first-page":"1702","DOI":"10.1109\/TASLP.2018.2842159","volume":"26","author":"D. Wang","year":"2018","unstructured":"D. Wang, J. Chen, Supervised speech separation based on deep learning: an overview. IEEE\/ACM Trans. Audio Speech Lang. Process.26(10), 1702\u20131726 (2018).","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"204_CR6","doi-asserted-by":"crossref","unstructured":"Y. Hu, Y. Liu, S. Lv, M. Xing, S. Zhang, Y. Fu, J. Wu, B. Zhang, L. Xie, in Interspeech 2020. International speech communication association (ISCA). Dccrn: Deep complex convolution recurrent network for phase-aware speech enhancement (Shanghai, 2020), pp. 2472\u20132476.","DOI":"10.21437\/Interspeech.2020-2537"},{"key":"204_CR7","doi-asserted-by":"crossref","unstructured":"M. Strake, B. Defraene, K. Fluyt, W. Tirry, T. Fingscheidt, in Interspeech 2020. International speech communication association (ISCA). A fully convolutional recurrent network (fcrn) for joint dereverberation and denoising (Shanghai, 2020), pp. 2467\u20132471.","DOI":"10.21437\/Interspeech.2020-2439"},{"issue":"1","key":"204_CR8","doi-asserted-by":"publisher","first-page":"189","DOI":"10.1109\/TASLP.2018.2876171","volume":"27","author":"K. Tan","year":"2018","unstructured":"K. Tan, J. Chen, D. Wang, Gated residual networks with dilated convolutions for monaural speech enhancement. IEEE\/ACM Trans. Audio Speech Lang. Process.27(1), 189\u2013198 (2018).","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"key":"204_CR9","doi-asserted-by":"crossref","unstructured":"K. Tan, D. Wang, in Interspeech 2018. International speech communication association (ISCA). A convolutional recurrent neural network for real-time speech enhancement (Hyderabad, 2018), pp. 3229\u20133233.","DOI":"10.21437\/Interspeech.2018-1405"},{"key":"204_CR10","doi-asserted-by":"crossref","unstructured":"A. 
Li, C. Zheng, C. Fan, R. Peng, X. Li, A recursive network with dynamic attention for monaural speech enhancement, (Shanghai, 2020).","DOI":"10.21437\/Interspeech.2020-1513"},{"key":"204_CR11","doi-asserted-by":"publisher","first-page":"107347","DOI":"10.1016\/j.apacoust.2020.107347","volume":"166","author":"A. Li","year":"2020","unstructured":"A. Li, M. Yuan, C. Zheng, X. Li, Speech enhancement using progressive learning-based convolutional recurrent neural network. Appl. Acoust.166:, 107347 (2020).","journal-title":"Appl. Acoust."},{"issue":"4","key":"204_CR12","doi-asserted-by":"publisher","first-page":"465","DOI":"10.1016\/j.specom.2010.12.003","volume":"53","author":"K. Paliwal","year":"2011","unstructured":"K. Paliwal, K. W\u00f3jcicki, B. Shannon, The importance of phase in speech enhancement. Speech Comm.53(4), 465\u2013494 (2011).","journal-title":"Speech Comm."},{"issue":"20","key":"204_CR13","first-page":"1687","volume":"2019","author":"X. Wang","year":"2019","unstructured":"X. Wang, C. Bao, Speech enhancement methods based on binaural cue coding. EURASIP J. Audio Speech Music Process.2019(20), 1687\u20134722 (2019).","journal-title":"EURASIP J. Audio Speech Music Process."},{"issue":"1","key":"204_CR14","doi-asserted-by":"publisher","first-page":"53","DOI":"10.1109\/TASLP.2018.2870725","volume":"27","author":"Y. Zhao","year":"2019","unstructured":"Y. Zhao, Z. Wang, D. Wang, Two-stage deep learning for noisy-reverberant speech enhancement. IEEE\/ACM Trans. Audio Speech Lang. Processing. 27(1), 53\u201362 (2019).","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Processing"},{"key":"204_CR15","doi-asserted-by":"crossref","unstructured":"K. Tan, X. Zhang, D. Wang, in 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Real-time speech enhancement using an efficient convolutional recurrent network for dual-microphone mobile phones in close-talk scenarios (Brighton, 2019), pp. 
5751\u20135755.","DOI":"10.1109\/ICASSP.2019.8683385"},{"issue":"2","key":"204_CR16","doi-asserted-by":"publisher","first-page":"314","DOI":"10.1109\/49.608","volume":"6","author":"J. D. Johnston","year":"1988","unstructured":"J. D. Johnston, Transform coding of audio signals using perceptual noise criteria. IEEE J. Sel. Areas Commun.6(2), 314\u2013323 (1988).","journal-title":"IEEE J. Sel. Areas Commun."},{"issue":"6","key":"204_CR17","doi-asserted-by":"publisher","first-page":"1647","DOI":"10.1121\/1.383662","volume":"66","author":"M. R. Schroeder","year":"1979","unstructured":"M. R. Schroeder, B. S. Atal, J. L. Hall, Optimizing digital speech coders by exploiting masking properties of the human ear. J. Acoust. Soc. Am.66(6), 1647\u20131652 (1979).","journal-title":"J. Acoust. Soc. Am."},{"issue":"2","key":"204_CR18","doi-asserted-by":"publisher","first-page":"126","DOI":"10.1109\/89.748118","volume":"7","author":"N. Virag","year":"1999","unstructured":"N. Virag, Single channel speech enhancement based on masking properties of the human auditory system. IEEE Trans. Speech Audio Process.7(2), 126\u2013137 (1999).","journal-title":"IEEE Trans. Speech Audio Process."},{"issue":"12","key":"204_CR19","doi-asserted-by":"publisher","first-page":"3463","DOI":"10.1109\/78.258086","volume":"41","author":"D. Sinha","year":"1993","unstructured":"D. Sinha, A. H. Tewfik, Low bit rate transparent audio compression using adapted wavelets. IEEE Trans. Signal Process.41(12), 3463\u20133479 (1993).","journal-title":"IEEE Trans. Signal Process."},{"key":"204_CR20","first-page":"6629","volume-title":"2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"A. Pandey","year":"2020","unstructured":"A. Pandey, D. Wang, in 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 
Densely connected neural network with dilated convolutions for real-time speech enhancement in the time domain (IEEE, Virtual Barcelona, 2020), pp. 6629\u20136633."},{"issue":"11","key":"204_CR21","doi-asserted-by":"publisher","first-page":"1680","DOI":"10.1109\/LSP.2018.2871419","volume":"25","author":"J. M. Martin-Do\u00f1as","year":"2018","unstructured":"J. M. Martin-Do\u00f1as, A. M. Gomez, J. A. Gonzalez, A. M. Peinado, A deep learning loss function based on the perceptual evaluation of the speech quality. IEEE Signal Process. Lett.25(11), 1680\u20131684 (2018).","journal-title":"IEEE Signal Process. Lett."},{"key":"204_CR22","doi-asserted-by":"publisher","first-page":"26","DOI":"10.1109\/LSP.2019.2953810","volume":"27","author":"S. Fu","year":"2020","unstructured":"S. Fu, C. Liao, Y. Tsao, Learning with learned loss function: speech enhancement with quality-net to improve perceptual evaluation of speech quality. IEEE Signal Process. Lett.27:, 26\u201330 (2020).","journal-title":"IEEE Signal Process. Lett."},{"key":"204_CR23","first-page":"749","volume-title":"2001 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"A. W. Rix","year":"2001","unstructured":"A. W. Rix, J. G. Beerends, M. P. Hollier, A. P. Hekstra, in 2001 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2. Perceptual evaluation of speech quality (pesq)-a new method for speech quality assessment of telephone networks and codecs (IEEE, Salt Lake City, Utah, 2001), pp. 749\u2013752."},{"issue":"3","key":"204_CR24","doi-asserted-by":"publisher","first-page":"483","DOI":"10.1109\/TASLP.2015.2512042","volume":"24","author":"D. S. Williamson","year":"2016","unstructured":"D. S. Williamson, Y. Wang, D. Wang, Complex ratio masking for monaural speech separation. IEEE\/ACM Trans. Audio Speech Lang. Process.24(3), 483\u2013492 (2016).","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. 
Process."},{"key":"204_CR25","doi-asserted-by":"crossref","unstructured":"K. Tan, D. Wang, in 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Complex spectral mapping with a convolutional recurrent network for monaural speech enhancement (Brighton, 2019), pp. 6865\u20136869.","DOI":"10.1109\/ICASSP.2019.8682834"},{"key":"204_CR26","doi-asserted-by":"publisher","first-page":"380","DOI":"10.1109\/TASLP.2019.2955276","volume":"28","author":"K. Tan","year":"2020","unstructured":"K. Tan, D. Wang, Learning complex spectral mapping with gated convolutional recurrent networks for monaural speech enhancement. IEEE\/ACM Trans. Audio Speech Lang. Process.28:, 380\u2013390 (2020).","journal-title":"IEEE\/ACM Trans. Audio Speech Lang. Process."},{"issue":"6","key":"204_CR27","doi-asserted-by":"publisher","first-page":"1109","DOI":"10.1109\/TASSP.1984.1164453","volume":"32","author":"Y. Ephraim","year":"1984","unstructured":"Y. Ephraim, D. Malah, Speech enhancement using a minimum-mean square error short-time spectral amplitude estimator. IEEE Trans. Acoust. Speech Signal Process.32(6), 1109\u20131121 (1984).","journal-title":"IEEE Trans. Acoust. Speech Signal Process."},{"issue":"4","key":"204_CR28","doi-asserted-by":"publisher","first-page":"113","DOI":"10.1109\/97.1001645","volume":"9","author":"I. Cohen","year":"2002","unstructured":"I. Cohen, Optimal speech enhancement under signal presence uncertainty using log-spectral amplitude estimator. IEEE Signal Process. Lett.9(4), 113\u2013116 (2002).","journal-title":"IEEE Signal Process. Lett."},{"key":"204_CR29","doi-asserted-by":"crossref","unstructured":"P. J. Wolfe, S. J. Godsill, Efficient alternatives to the ephraim and malah suppression rule for audio signal enhancement. EURASIP J. Adv. 
Signal Process., 1043\u20131051 (2003).","DOI":"10.1155\/S1110865703304111"},{"key":"204_CR30","doi-asserted-by":"publisher","first-page":"49","DOI":"10.1016\/j.specom.2019.10.001","volume":"114","author":"G. Itzhak","year":"2019","unstructured":"G. Itzhak, J. Benesty, I. Cohen, Nonlinear kronecker product filtering for multichannel noise reduction. Speech Comm.114:, 49\u201359 (2019). https:\/\/doi.org\/10.1016\/j.specom.2019.10.001.","journal-title":"Speech Comm."},{"issue":"5","key":"204_CR31","doi-asserted-by":"publisher","first-page":"504","DOI":"10.1109\/89.928915","volume":"9","author":"R. Martin","year":"2001","unstructured":"R. Martin, Noise power spectral density estimation based on optimal smoothing and minimum statistics. IEEE Trans. Speech Audio Process.9(5), 504\u2013512 (2001).","journal-title":"IEEE Trans. Speech Audio Process."},{"issue":"7","key":"204_CR32","first-page":"1687","volume":"2020","author":"G. Itzhak","year":"2020","unstructured":"G. Itzhak, J. Benesty, I. Cohen, Quadratic approach for single-channel noise reduction. EURASIP J. Audio Speech Music Process.2020(7), 1687\u20134722 (2020).","journal-title":"EURASIP J. Audio Speech Music Process."},{"issue":"1","key":"204_CR33","doi-asserted-by":"publisher","first-page":"12","DOI":"10.1109\/97.988717","volume":"9","author":"I. Cohen","year":"2002","unstructured":"I. Cohen, B. Berdugo, Noise estimation by minima controlled recursive averaging for robust speech enhancement. IEEE Signal Process. Lett.9(1), 12\u201315 (2002).","journal-title":"IEEE Signal Process. Lett."},{"issue":"5","key":"204_CR34","doi-asserted-by":"publisher","first-page":"466","DOI":"10.1109\/TSA.2003.811544","volume":"11","author":"I. Cohen","year":"2003","unstructured":"I. Cohen, Noise spectrum estimation in adverse environments: improved minima controlled recursive averaging. IEEE Trans. Speech Audio Process.11(5), 466\u2013475 (2003).","journal-title":"IEEE Trans. 
Speech Audio Process."},{"issue":"2","key":"204_CR35","doi-asserted-by":"publisher","first-page":"220","DOI":"10.1016\/j.specom.2005.08.005","volume":"48","author":"S. Rangachari","year":"2006","unstructured":"S. Rangachari, P. C. Loizou, A noise-estimation algorithm for highly non-stationary environments. Speech Commun.48(2), 220\u2013231 (2006).","journal-title":"Speech Commun."},{"key":"204_CR36","doi-asserted-by":"publisher","first-page":"4266","DOI":"10.1109\/ICASSP.2010.5495680","volume-title":"2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)","author":"R. C. Hendriks","year":"2010","unstructured":"R. C. Hendriks, R. Heusdens, J. Jensen, in 2010 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Mmse based noise psd tracking with low complexity (IEEE, Dallas, Texas, 2010), pp. 4266\u20134269."},{"issue":"4","key":"204_CR37","doi-asserted-by":"publisher","first-page":"1383","DOI":"10.1109\/TASL.2011.2180896","volume":"20","author":"T. Gerkmann","year":"2011","unstructured":"T. Gerkmann, R. C. Hendriks, Unbiased MMSE-based noise power estimation with low complexity and low tracking delay. IEEE Trans. Audio Speech Lang. Process.20(4), 1383\u20131393 (2011).","journal-title":"IEEE Trans. Audio Speech Lang. Process."},{"key":"204_CR38","doi-asserted-by":"crossref","unstructured":"J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI\/Recon Technical Report N. 93: (1993).","DOI":"10.6028\/NIST.IR.4930"},{"issue":"1","key":"204_CR39","doi-asserted-by":"publisher","first-page":"7","DOI":"10.1109\/TASLP.2014.2364452","volume":"23","author":"Y. Xu","year":"2014","unstructured":"Y. Xu, J. Du, L. R. Dai, C. H. Lee, A regression approach to speech enhancement based on deep neural networks. IEEE\/ACM Trans. Audio Speech Lang. Process.23(1), 7\u201319 (2014).","journal-title":"IEEE\/ACM Trans. 
Audio Speech Lang. Process."},{"key":"204_CR40","doi-asserted-by":"crossref","unstructured":"Z. Duan, G. Mysore, P. Smaragdis, in Interspeech 2012. International speech communication association (ISCA). Speech Enhancement by Online Non-negative Spectrogram Decomposition in Nonstationary Noise Environments (Portland, 2012), pp. 595\u2013598.","DOI":"10.21437\/Interspeech.2012-181"},{"key":"204_CR41","unstructured":"A. Varga, The NOISEX-92 study on the effect of additive noise on automatic speech recognition. Technical Report, DRA Speech Research Unit (1992)."},{"key":"204_CR42","volume-title":"Adam: A method for stochastic optimization","author":"D. P. Kingma","year":"2015","unstructured":"D. P. Kingma, J. Ba, Adam: A method for stochastic optimization (International Conference on Learning Representations (ICLR), San Diego, 2015)."},{"key":"204_CR43","doi-asserted-by":"crossref","unstructured":"A. Prodeus, I. Kotvytskyi, in 2017 IEEE 4th International Conference Actual Problems of Unmanned Aerial Vehicles Developments (APUAVD). On reliability of log-spectral distortion measure in speech quality estimation (Kyiv, 2017), pp. 121\u2013124.","DOI":"10.1109\/APUAVD.2017.8308790"},{"issue":"1","key":"204_CR44","doi-asserted-by":"publisher","first-page":"229","DOI":"10.1109\/TASL.2007.911054","volume":"16","author":"Y. Hu","year":"2007","unstructured":"Y. Hu, P. C. Loizou, Evaluation of objective quality measures for speech enhancement. IEEE Trans. Audio Speech Lang. Process.16(1), 229\u2013238 (2007).","journal-title":"IEEE Trans. Audio Speech Lang. Process."},{"issue":"2","key":"204_CR45","doi-asserted-by":"publisher","first-page":"282","DOI":"10.1016\/j.specom.2011.09.003","volume":"54","author":"K. Paliwal","year":"2012","unstructured":"K. Paliwal, B. Schwerin, K. W\u00f3jcicki, Speech enhancement using a minimum mean-square error short-time spectral modulation magnitude estimator. 
Speech Commun.54(2), 282\u2013305 (2012).","journal-title":"Speech Commun."},{"key":"204_CR46","volume-title":"The Analysis of Variance","author":"H. Scheffe","year":"1959","unstructured":"H. Scheffe, The Analysis of Variance (Wiley, New York, 1959)."},{"key":"204_CR47","doi-asserted-by":"crossref","unstructured":"J. -M. Valin, U. Isik, N. Phansalkar, R. Giri, K. Helwani, A. Krishnaswamy, in Interspeech 2020. International speech communication association (ISCA). A perceptually-motivated approach for low-complexity, real-time enhancement of fullband speech (Shanghai, 2020), pp. 2482\u20132486.","DOI":"10.21437\/Interspeech.2020-2730"}],"container-title":["EURASIP Journal on Audio, Speech, and Music Processing"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s13636-021-00204-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/article\/10.1186\/s13636-021-00204-9\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/link.springer.com\/content\/pdf\/10.1186\/s13636-021-00204-9.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,12,24]],"date-time":"2022-12-24T03:25:17Z","timestamp":1671852317000},"score":1,"resource":{"primary":{"URL":"https:\/\/asmp-eurasipjournals.springeropen.com\/articles\/10.1186\/s13636-021-00204-9"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2021,4,12]]},"references-count":47,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2021,12]]}},"alternative-id":["204"],"URL":"https:\/\/doi.org\/10.1186\/s13636-021-00204-9","relation":{},"ISSN":["1687-4722"],"issn-type":[{"value":"1687-4722","type":"electronic"}],"subject":[],"published":{"date-parts":[[2021,4,12]]},"assertion":[{"value":"3 September 
2020","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 March 2021","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"12 April 2021","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare that they have no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"17"}}