{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,12,21]],"date-time":"2025-12-21T06:25:40Z","timestamp":1766298340414,"version":"3.37.3"},"reference-count":54,"publisher":"Springer Science and Business Media LLC","issue":"1","license":[{"start":{"date-parts":[[2020,1,30]],"date-time":"2020-01-30T00:00:00Z","timestamp":1580342400000},"content-version":"tdm","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"},{"start":{"date-parts":[[2020,1,30]],"date-time":"2020-01-30T00:00:00Z","timestamp":1580342400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/creativecommons.org\/licenses\/by\/4.0"}],"content-domain":{"domain":["link.springer.com"],"crossmark-restriction":false},"short-container-title":["J AUDIO SPEECH MUSIC PROC."],"published-print":{"date-parts":[[2020,12]]},"abstract":"<jats:title>Abstract<\/jats:title><jats:p>Attention-based encoder-decoder models have recently shown competitive performance for automatic speech recognition (ASR) compared to conventional ASR systems. However, how to employ attention models for online speech recognition still needs to be explored. Different from conventional attention models wherein the soft alignment is obtained by a pass over the entire input sequence, attention models for online recognition must learn online alignment to attend part of input sequence monotonically when generating output symbols. Based on the fact that every output symbol is corresponding to a segment of input sequence, we propose a new attention mechanism for learning online alignment by decomposing the conventional alignment into two parts:<jats:italic>segmentation<\/jats:italic>\u2014segment boundary detection with hard decision\u2014and<jats:italic>segment-directed attention<\/jats:italic>\u2014information aggregation within the segment with soft attention. 
The boundary detection is conducted along the time axis from left to right, and for each input frame a decision is made about whether it is a segment boundary or not. When a boundary is detected, the decoder generates an output symbol by attending to the inputs within the corresponding segment. With the proposed attention mechanism, online speech recognition can be realized. Experimental results on the TIMIT and WSJ datasets show that the proposed attention mechanism achieves online performance comparable to that of state-of-the-art models.<\/jats:p>","DOI":"10.1186\/s13636-020-0170-z","type":"journal-article","created":{"date-parts":[[2020,1,30]],"date-time":"2020-01-30T14:03:04Z","timestamp":1580392984000},"update-policy":"https:\/\/doi.org\/10.1007\/springer_crossmark_policy","source":"Crossref","is-referenced-by-count":8,"title":["Segment boundary detection directed attention for online end-to-end speech recognition"],"prefix":"10.1186","volume":"2020","author":[{"ORCID":"https:\/\/orcid.org\/0000-0003-3635-5332","authenticated-orcid":false,"given":"Junfeng","family":"Hou","sequence":"first","affiliation":[]},{"given":"Wu","family":"Guo","sequence":"additional","affiliation":[]},{"given":"Yan","family":"Song","sequence":"additional","affiliation":[]},{"given":"Li-Rong","family":"Dai","sequence":"additional","affiliation":[]}],"member":"297","published-online":{"date-parts":[[2020,1,30]]},"reference":[{"key":"170_CR1","unstructured":"D. Bahdanau, K. Cho, Y. Bengio, Neural machine translation by jointly learning to align and translate. arXiv preprint (2014). arXiv:1409.0473."},{"key":"170_CR2","unstructured":"K. Xu, J. L. Ba, R. Kiros, K. Cho, A. Courville, R. Salakhudinov, R. S. Zemel, Y. Bengio, in 32nd International Conference on Machine Learning, ICML 2015, 3. Show, attend and tell: Neural image caption generation with visual attention (International Machine Learning Society (IMLS), 2015), pp. 2048\u20132057. 
https:\/\/nyuscholars.nyu.edu\/en\/publications\/show-attend-and-tell-neural-imagecaption-generation-with-visual-."},{"key":"170_CR3","first-page":"577","volume-title":"NIPS'15: Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1","author":"J. K. Chorowski","year":"2015","unstructured":"J. K. Chorowski, D. Bahdanau, D. Serdyuk, K. Cho, Y. Bengio, in NIPS'15: Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 1. Attention-based models for speech recognition (MIT Press, Cambridge, 2015), pp. 577\u2013585. https:\/\/dl.acm.org\/doi\/proceedings\/10.5555\/2969239."},{"key":"170_CR4","doi-asserted-by":"publisher","unstructured":"D. Bahdanau, J. Chorowski, D. Serdyuk, P. Brakel, Y. Bengio, in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). End-to-end attention-based large vocabulary speech recognition, (2016), pp. 4945\u20134949. https:\/\/doi.org\/10.1109\/icassp.2016.7472618.","DOI":"10.1109\/icassp.2016.7472618"},{"key":"170_CR5","doi-asserted-by":"publisher","unstructured":"W. Chan, N. Jaitly, Q. Le, O. Vinyals, in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Listen, attend and spell: a neural network for large vocabulary conversational speech recognition, (2016), pp. 4960\u20134964. https:\/\/doi.org\/10.1109\/icassp.2016.7472621.","DOI":"10.1109\/icassp.2016.7472621"},{"issue":"8","key":"170_CR6","doi-asserted-by":"publisher","first-page":"1240","DOI":"10.1109\/JSTSP.2017.2763455","volume":"11","author":"S. Watanabe","year":"2017","unstructured":"S. Watanabe, T. Hori, S. Kim, J. R. Hershey, T. Hayashi, Hybrid CTC\/attention architecture for end-to-end speech recognition. IEEE J. Sel. Top. Signal Process.11(8), 1240\u20131253 (2017).","journal-title":"IEEE J. Sel. Top. 
Signal Process."},{"issue":"8","key":"170_CR7","doi-asserted-by":"publisher","first-page":"1735","DOI":"10.1162\/neco.1997.9.8.1735","volume":"9","author":"S. Hochreiter","year":"1997","unstructured":"S. Hochreiter, J. Schmidhuber, Long short-term memory. Neural Comput.9(8), 1735\u20131780 (1997).","journal-title":"Neural Comput."},{"key":"170_CR8","doi-asserted-by":"crossref","unstructured":"H. Sak, A. Senior, F. Beaufays, in INTERSPEECH-2014. Long short-term memory recurrent neural network architectures for large scale acoustic modeling, (2014), pp. 338\u2013342. https:\/\/www.isca-speech.org\/archive\/interspeech_2014\/i14_0338.html.","DOI":"10.21437\/Interspeech.2014-80"},{"key":"170_CR9","doi-asserted-by":"publisher","unstructured":"K. Cho, van Merrienboer Bart, G. Caglar, B. Dzmitry, B. Fethi, S. Holger, B. Yoshua, in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Learning phrase representations using RNN encoder\u2013decoder for statistical machine translation, (2014), pp. 1724\u20131734. https:\/\/doi.org\/10.3115\/v1\/d14-1179.","DOI":"10.3115\/v1\/d14-1179"},{"key":"170_CR10","unstructured":"A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin, in Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA. Attention is All you Need, (2017), pp. 5998\u20136008. https:\/\/dblp.uni-trier.de\/rec\/bibtex\/conf\/nips\/VaswaniSPUJGKP17."},{"key":"170_CR11","doi-asserted-by":"publisher","unstructured":"W. Chan, I. Lane, in Proc. Interspeech 2016. On online attention-based speech recognition and joint Mandarin character-Pinyin training, (2016), pp. 3404\u20133408. https:\/\/doi.org\/10.21437\/interspeech.2016-334.","DOI":"10.21437\/interspeech.2016-334"},{"key":"170_CR12","unstructured":"N. Jaitly, Q. V. Le, O. Vinyals, I. Sutskever, D. Sussillo, S. 
Bengio, in Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, December 5-10, 2016, Barcelona, Spain. An Online Sequence-to-Sequence Model Using Partial Conditioning, (2016), pp. 5067\u20135075. https:\/\/dblp.uni-trier.de\/rec\/bibtex\/conf\/nips\/JaitlyLVSSB16."},{"key":"170_CR13","doi-asserted-by":"publisher","unstructured":"N. Moritz, T. Hori, J. L. Roux, in ICASSP 2019 - 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Triggered attention for end-to-end speech recognition, (2019), pp. 5666\u20135670. https:\/\/doi.org\/10.1109\/icassp.2019.8683510.","DOI":"10.1109\/icassp.2019.8683510"},{"key":"170_CR14","doi-asserted-by":"publisher","unstructured":"Y. Luo, C. Chiu, N. Jaitly, I. Sutskever, in 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Learning online alignments with continuous rewards policy gradient, (2017), pp. 2801\u20132805. https:\/\/doi.org\/10.1109\/icassp.2017.7952667.","DOI":"10.1109\/icassp.2017.7952667"},{"key":"170_CR15","unstructured":"C. Chiu, D. Lawson, Y. Luo, G. Tucker, K. Swersky, I. Sutskever, N. Jaitly, An online sequence-to-sequence model for noisy speech recognition. arXiv preprint (2017). arXiv:1706.06428."},{"key":"170_CR16","unstructured":"C. Raffel, T. Luong, P. J. Liu, R. J. Weiss, D. Eck, Online and linear-time attention by enforcing monotonic alignments. arXiv preprint (2017). arXiv:1704.00784."},{"key":"170_CR17","unstructured":"C. Chiu, C. Raffel, Monotonic chunkwise attention. arXiv preprint (2017). arXiv:1712.05382."},{"key":"170_CR18","doi-asserted-by":"publisher","unstructured":"L. Lu, L. Kong, C. Dyer, N. A. Smith, S. Renals, in Proc. Interspeech 2016. Segmental recurrent neural networks for end-to-end speech recognition, (2016), pp. 385\u2013389. 
https:\/\/doi.org\/10.21437\/interspeech.2016-40.","DOI":"10.21437\/interspeech.2016-40"},{"key":"170_CR19","doi-asserted-by":"publisher","unstructured":"E. Beck, M. Hannemann, P. D\u00f6tsch, R. Schl\u00fcter, H. Ney, in Proc. Interspeech 2018. Segmental encoder-decoder models for large vocabulary automatic speech recognition, (2018), pp. 766\u2013770. https:\/\/doi.org\/10.21437\/interspeech.2018-1212.","DOI":"10.21437\/interspeech.2018-1212"},{"key":"170_CR20","unstructured":"W. Zaremba, I. Sutskever, Reinforcement learning neural turing machines-revised. arXiv preprint (2015). arXiv:1505.00521."},{"key":"170_CR21","first-page":"2204","volume-title":"Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2","author":"V. Mnih","year":"2014","unstructured":"V. Mnih, N. Heess, A. Graves, K. Kavukcuoglu, in Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2. Recurrent Models of Visual Attention (MIT Press, Cambridge, 2014), pp. 2204\u20132212. https:\/\/dl.acm.org\/doi\/10.5555\/2969033.2969073."},{"key":"170_CR22","doi-asserted-by":"publisher","unstructured":"S. Mathe, A. Pirinen, C. Sminchisescu, in 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Reinforcement learning for visual object detection, (2016), pp. 2894\u20132902. https:\/\/doi.org\/10.1109\/cvpr.2016.316.","DOI":"10.1109\/cvpr.2016.316"},{"key":"170_CR23","unstructured":"D. Zhang, H. Maei, X. Wang, Y. -F. Wang, Deep reinforcement learning for visual object tracking in videos. arXiv preprint (2017). arXiv:1701.08936."},{"key":"170_CR24","unstructured":"C. Wang, Y. Wang, P. -S. Huang, A. Mohamed, D. Zhou, L. Deng, in Proceedings of the 34th International Conference on Machine Learning - Volume 70. Sequence modeling via segmentations, (2017), pp. 3674\u20133683."},{"key":"170_CR25","doi-asserted-by":"publisher","unstructured":"L. Yu, J. Buys, P. 
Blunsom, in Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP). Online segment to segment neural transduction, (2016), pp. 1307\u20131316. https:\/\/doi.org\/10.18653\/v1\/d16-1138.","DOI":"10.18653\/v1\/d16-1138"},{"key":"170_CR26","unstructured":"L. Kong, C. Dyer, N. A. Smith, Segmental recurrent neural networks. arXiv preprint (2015). arXiv:1511.06018."},{"key":"170_CR27","doi-asserted-by":"publisher","unstructured":"D. Lawson, C. Chiu, G. Tucker, C. Raffel, K. Swersky, N. Jaitly, in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Learning hard alignments with variational inference, (2018), pp. 5799\u20135803. https:\/\/doi.org\/10.1109\/icassp.2018.8461977.","DOI":"10.1109\/icassp.2018.8461977"},{"key":"170_CR28","first-page":"3528","volume-title":"Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2","author":"J. Schulman","year":"2015","unstructured":"J. Schulman, N. Heess, T. Weber, P. Abbeel, in Proceedings of the 28th International Conference on Neural Information Processing Systems - Volume 2. Gradient Estimation Using Stochastic Computation Graphs (MIT Press, Cambridge, 2015), pp. 3528\u20133536. https:\/\/dl.acm.org\/doi\/10.5555\/2969442.2969633."},{"key":"170_CR29","doi-asserted-by":"publisher","unstructured":"Y. -H. Wang, C. -T. Chung, H. -Y. Lee, in Proc. Interspeech 2017. Gate activation signal analysis for gated recurrent neural networks and its correlation with phoneme boundaries, (2017), pp. 3822\u20133826. https:\/\/doi.org\/10.21437\/interspeech.2017-877.","DOI":"10.21437\/interspeech.2017-877"},{"key":"170_CR30","doi-asserted-by":"publisher","first-page":"602","DOI":"10.1016\/j.neunet.2005.06.042","volume":"18","author":"A. Graves","year":"2005","unstructured":"A. Graves, J. Schmidhuber, Framewise phoneme classification with bidirectional LSTM and other neural network architectures. Neural. Netw. Off. J. Int. 
Neural. Netw. Soc.18:, 602\u201310 (2005).","journal-title":"Neural. Netw. Off. J. Int. Neural. Netw. Soc."},{"issue":"3","key":"170_CR31","doi-asserted-by":"publisher","first-page":"241","DOI":"10.1080\/09540099108946587","volume":"3","author":"R. J. Williams","year":"1991","unstructured":"R. J. Williams, J. Peng, Function optimization using connectionist reinforcement learning algorithms. Connect. Sci.3(3), 241\u2013268 (1991).","journal-title":"Connect. Sci."},{"key":"170_CR32","doi-asserted-by":"crossref","unstructured":"J. S. Garofolo, L. F. Lamel, W. M. Fisher, J. G. Fiscus, D. S. Pallett, DARPA TIMIT acoustic-phonetic continuous speech corpus CD-ROM. NIST speech disc 1-1.1. NASA STI\/Recon technical report n. 93 (1993).","DOI":"10.6028\/NIST.IR.4930"},{"key":"170_CR33","doi-asserted-by":"publisher","unstructured":"D. B. Paul, J. M. Baker, in Proceedings of the Workshop on Speech and Natural Language. The design for the Wall Street Journal-based CSR corpus, (1992), pp. 357\u2013362. https:\/\/doi.org\/10.6028\/nist.ir.4930.","DOI":"10.6028\/nist.ir.4930"},{"key":"170_CR34","unstructured":"J. Chung, \u00c7. G\u00fcl\u00e7ehre, K. Cho, Y. Bengio, Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint (2014). arXiv:1412.3555."},{"key":"170_CR35","unstructured":"M. D. Zeiler, Adadelta: an adaptive learning rate method. arXiv preprint (2012). arXiv:1212.5701."},{"key":"170_CR36","first-page":"1929","volume":"15","author":"N. Srivastava","year":"2014","unstructured":"N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, R. Salakhutdinov, Dropout: a simple way to prevent neural networks from overfitting. J. Mach. Learn. Res.15:, 1929\u20131958 (2014).","journal-title":"J. Mach. Learn. Res."},{"key":"170_CR37","unstructured":"W. Zaremba, I. Sutskever, O. Vinyals, Recurrent neural network regularization. arXiv preprint (2014). 
arXiv:1409.2329."},{"key":"170_CR38","first-page":"2348","volume-title":"Proceedings of the 24th International Conference on Neural Information Processing Systems","author":"A. Graves","year":"2011","unstructured":"A. Graves, in Proceedings of the 24th International Conference on Neural Information Processing Systems. Practical Variational Inference for Neural Networks (Curran Associates Inc.Red Hook, 2011), pp. 2348\u20132356. https:\/\/dl.acm.org\/doi\/10.5555\/2986459.2986721."},{"key":"170_CR39","unstructured":"Theano Development Team, Theano: A Python framework for fast computation of mathematical expressions. arXiv preprint (2016). arXiv:1605.02688."},{"key":"170_CR40","doi-asserted-by":"publisher","unstructured":"J. Hou, S. Zhang, L. -R. Dai, in Proc. Interspeech 2017. Gaussian prediction based attention for online end-to-end speech recognition, (2017), pp. 3692\u20133696. https:\/\/doi.org\/10.21437\/interspeech.2017-751.","DOI":"10.21437\/interspeech.2017-751"},{"key":"170_CR41","doi-asserted-by":"publisher","unstructured":"A. Graves, A. Mohamed, G. Hinton, in 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Speech recognition with deep recurrent neural networks, (2013), pp. 6645\u20136649. https:\/\/doi.org\/10.1109\/icassp.2013.6638947.","DOI":"10.1109\/icassp.2013.6638947"},{"key":"170_CR42","doi-asserted-by":"publisher","unstructured":"G. Huang, Z. Liu, L. v. d. Maaten, K. Q. Weinberger, in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Densely connected convolutional networks, (2017), pp. 2261\u20132269. https:\/\/doi.org\/10.1109\/cvpr.2017.243.","DOI":"10.1109\/cvpr.2017.243"},{"key":"170_CR43","unstructured":"C. Y. Li, N. T. Vu, Densely connected convolutional networks for speech recognition. arXiv preprint (2018). arXiv:1808.03570."},{"key":"170_CR44","doi-asserted-by":"crossref","unstructured":"Z. Ding, R. Xia, J. Yu, X. Li, J. 
Yang, Densely connected bidirectional LSTM with applications to sentence classification. arXiv preprint arXiv:1802.00889 (2018).","DOI":"10.1007\/978-3-319-99501-4_24"},{"key":"170_CR45","first-page":"448","volume-title":"Proceedings of the 32nd International Conference on Machine Learning","author":"S. Ioffe","year":"2015","unstructured":"S. Ioffe, C. Szegedy, in Proceedings of the 32nd International Conference on Machine Learning. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift (PMLR, Lille, 2015), pp. 448\u2013456. http:\/\/proceedings.mlr.press\/v37\/ioffe15.html."},{"key":"170_CR46","unstructured":"C. Chiu, T. N. Sainath, Y. Wu, R. Prabhavalkar, P. Nguyen, Z. Chen, A. Kannan, R. J. Weiss, K. Rao, E. Gonina, N. Jaitly, B. Li, J. Chorowski, M. Bacchiani, in 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). State-of-the-Art Speech Recognition with Sequence-to-Sequence Models, (2018), pp. 4774\u20134778. https:\/\/ieeexplore.ieee.org\/abstract\/document\/8462105."},{"key":"170_CR47","unstructured":"R. Sennrich, B. Haddow, A. Birch, in ACL. Neural machine translation of rare words with subword units, (2016), pp. 1715\u20131725."},{"key":"170_CR48","unstructured":"A. Zeyer, K. Irie, R. Schl\u00fcter, H. Ney, Improved training of end-to-end attention models for speech recognition. arXiv preprint (2018). arXiv:1805.03294."},{"key":"170_CR49","unstructured":"D. P. Kingma, J. Ba, in 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings. Adam: A Method for Stochastic Optimization, (2015). https:\/\/dblp.org\/rec\/bibtex\/journals\/corr\/KingmaB14."},{"key":"170_CR50","unstructured":"G. Pereyra, G. Tucker, J. Chorowski, L. Kaiser, G. E. Hinton, Regularizing neural networks by penalizing confident output distributions. arXiv preprint (2017). arXiv:1701.06548."},{"key":"170_CR51","unstructured":"M. Abadi, A. 
Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Man\u00e9, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Vi\u00e9gas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, X. Zheng, TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems (2015). https:\/\/github.com\/tensorflow\/docs\/blob\/master\/site\/en\/about\/bib.md."},{"key":"170_CR52","unstructured":"C. Wang, D. Yogatama, A. Coates, T. X. Han, A. Y. Hannun, B. Xiao, in Workshop Extended Abstracts of the 4th International Conference on Learning Representations. Lookahead Convolution Layer for Unidirectional Recurrent Neural Networks, (2016). https:\/\/www.semanticscholar.org\/paper\/Lookahead-Convolution-Layer-for-Unidirectional-Wang-Yogatama\/a0d864d73189101a0bffc6656aa907f3b2193cfa."},{"key":"170_CR53","unstructured":"D. Povey, A. Ghoshal, G. Boulianne, L. Burget, O. Glembek, N. Goel, M. Hannemann, P. Motlicek, Y. Qian, P. Schwarz, J. Silovsky, G. Stemmer, K. Vesely, in IEEE 2011 Workshop on Automatic Speech Recognition and Understanding. The Kaldi Speech Recognition Toolkit (IEEE Signal Processing Society, 2011). http:\/\/kaldi-asr.org\/doc\/about.html."},{"key":"170_CR54","doi-asserted-by":"publisher","unstructured":"V. Panayotov, G. Chen, D. Povey, S. Khudanpur, in 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). Librispeech: an ASR corpus based on public domain audio books, (2015), pp. 5206\u20135210. 
https:\/\/doi.org\/10.1109\/icassp.2015.7178964.","DOI":"10.1109\/icassp.2015.7178964"}],"container-title":["EURASIP Journal on Audio, Speech, and Music Processing"],"original-title":[],"language":"en","link":[{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1186\/s13636-020-0170-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/article\/10.1186\/s13636-020-0170-z\/fulltext.html","content-type":"text\/html","content-version":"vor","intended-application":"text-mining"},{"URL":"http:\/\/link.springer.com\/content\/pdf\/10.1186\/s13636-020-0170-z.pdf","content-type":"application\/pdf","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2022,10,13]],"date-time":"2022-10-13T14:22:35Z","timestamp":1665670955000},"score":1,"resource":{"primary":{"URL":"https:\/\/asmp-eurasipjournals.springeropen.com\/articles\/10.1186\/s13636-020-0170-z"}},"subtitle":[],"short-title":[],"issued":{"date-parts":[[2020,1,30]]},"references-count":54,"journal-issue":{"issue":"1","published-print":{"date-parts":[[2020,12]]}},"alternative-id":["170"],"URL":"https:\/\/doi.org\/10.1186\/s13636-020-0170-z","relation":{},"ISSN":["1687-4722"],"issn-type":[{"type":"electronic","value":"1687-4722"}],"subject":[],"published":{"date-parts":[[2020,1,30]]},"assertion":[{"value":"7 April 2019","order":1,"name":"received","label":"Received","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"13 January 2020","order":2,"name":"accepted","label":"Accepted","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"30 January 2020","order":3,"name":"first_online","label":"First Online","group":{"name":"ArticleHistory","label":"Article History"}},{"value":"The authors declare that they have no competing interests.","order":1,"name":"Ethics","group":{"name":"EthicsHeading","label":"Competing interests"}}],"article-number":"3"}}