{"status":"ok","message-type":"work","message-version":"1.0.0","message":{"indexed":{"date-parts":[[2025,10,16]],"date-time":"2025-10-16T06:59:37Z","timestamp":1760597977628,"version":"3.41.0"},"reference-count":49,"publisher":"Association for Computing Machinery (ACM)","issue":"3","license":[{"start":{"date-parts":[[2020,8,17]],"date-time":"2020-08-17T00:00:00Z","timestamp":1597622400000},"content-version":"vor","delay-in-days":0,"URL":"https:\/\/www.acm.org\/publications\/policies\/copyright_policy#Background"}],"content-domain":{"domain":["dl.acm.org"],"crossmark-restriction":true},"short-container-title":["ACM Trans. Archit. Code Optim."],"published-print":{"date-parts":[[2020,9,30]]},"abstract":"<jats:p>\n            The increase in computational power of embedded devices and the latency demands of novel applications have brought a paradigm shift in how and where computation is performed. Although AI inference is slowly moving from the cloud to end-devices with limited resources, time-centric recurrent networks like Long Short-Term Memory remain too complex to be transferred to embedded devices without extreme simplifications that limit the performance of many notable applications. To solve this issue, the Reservoir Computing paradigm proposes sparse, untrained non-linear networks, the Reservoir, that can embed temporal relations without some of the hindrances of Recurrent Neural Network training, and with a lower memory occupation. Echo State Networks (ESN) and Liquid State Machines are the most notable examples. In this scenario, we propose\n            <jats:bold>EchoBay<\/jats:bold>\n            , a comprehensive C++ library for ESN design and training. EchoBay is architecture-agnostic to guarantee maximum performance on different devices (whether embedded or not), and it offers the possibility to optimize and tailor an ESN to a particular case study, minimizing the effort required on the user side. 
This can be done thanks to the Bayesian Optimization (BO) process, which efficiently and automatically searches for the hyper-parameters that maximize a fitness function. Additionally, we designed different optimization techniques that take into account the resource constraints of the device to minimize memory footprint and inference time. Our results in different scenarios show an average speed-up in training time of 119x compared to Grid and Random search of hyper-parameters, a 94% decrease in trained model size, and a 95% decrease in inference time, while maintaining comparable performance for the given task. The EchoBay library is Open Source and publicly available at https:\/\/github.com\/necst\/Echobay.\n          <\/jats:p>","DOI":"10.1145\/3404993","type":"journal-article","created":{"date-parts":[[2020,8,17]],"date-time":"2020-08-17T13:24:45Z","timestamp":1597670685000},"page":"1-24","update-policy":"https:\/\/doi.org\/10.1145\/crossmark-policy","source":"Crossref","is-referenced-by-count":12,"title":["EchoBay"],"prefix":"10.1145","volume":"17","author":[{"ORCID":"https:\/\/orcid.org\/0000-0001-8571-3971","authenticated-orcid":false,"given":"L.","family":"Cerina","sequence":"first","affiliation":[{"name":"Dipartimento di Elettronica Informazione e Bioingegneria, Politecnico di Milano, Milano, Italy"}]},{"given":"M. 
D.","family":"Santambrogio","sequence":"additional","affiliation":[{"name":"Dipartimento di Elettronica Informazione e Bioingegneria, Politecnico di Milano, Milano, Italy"}]},{"given":"G.","family":"Franco","sequence":"additional","affiliation":[{"name":"Department of Computer Science, University of Pisa, Pisa, Italy"}]},{"given":"C.","family":"Gallicchio","sequence":"additional","affiliation":[{"name":"Department of Computer Science, University of Pisa, Pisa, Italy"}]},{"given":"A.","family":"Micheli","sequence":"additional","affiliation":[{"name":"Department of Computer Science, University of Pisa, Pisa, Italy"}]}],"member":"320","published-online":{"date-parts":[[2020,8,17]]},"reference":[{"key":"e_1_2_1_1_1","doi-asserted-by":"publisher","DOI":"10.1155\/2016\/3917892"},
{"key":"e_1_2_1_2_1","volume-title":"Echo state networks: Analysis, training and predictive control. CoRR abs\/1902.01618","author":"Armenio Luca Bugliari","year":"2019","unstructured":"Luca Bugliari Armenio, Enrico Terzi, Marcello Farina, and Riccardo Scattolini. 2019. Echo state networks: Analysis, training and predictive control. CoRR abs\/1902.01618 (2019). arxiv:1902.01618 http:\/\/arxiv.org\/abs\/1902.01618."},
{"key":"e_1_2_1_3_1","doi-asserted-by":"publisher","DOI":"10.1109\/72.846741"},
{"key":"e_1_2_1_4_1","doi-asserted-by":"publisher","DOI":"10.1007\/s00521-013-1364-4"},
{"key":"e_1_2_1_5_1","article-title":"Random search for hyper-parameter optimization","author":"Bergstra James","year":"2012","unstructured":"James Bergstra and Yoshua Bengio. 2012. Random search for hyper-parameter optimization. Journal of Machine Learning Research 13 (Feb 2012), 281--305.","journal-title":"Journal of Machine Learning Research 13"},
{"key":"e_1_2_1_6_1","doi-asserted-by":"publisher","DOI":"10.1145\/3089801.3089804"},
{"key":"e_1_2_1_7_1","volume-title":"Proceedings of the 2019 European Symposium on Artificial Neural Networks (ESANN'19)","author":"Cerina Luca","year":"2019","unstructured":"Luca Cerina, Giuseppe Franco, and Marco Domenico Santambrogio. 2019. Lightweight autonomous bayesian optimization of Echo-State Networks. In Proceedings of the 2019 European Symposium on Artificial Neural Networks (ESANN'19). 637--642."},
{"key":"e_1_2_1_8_1","doi-asserted-by":"publisher","DOI":"10.1109\/MC.2018.2381119"},
{"key":"e_1_2_1_9_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2019.03.012"},
{"key":"e_1_2_1_10_1","volume-title":"Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555","author":"Chung Junyoung","year":"2014","unstructured":"Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555 (2014)."},
{"key":"e_1_2_1_11_1","doi-asserted-by":"publisher","DOI":"10.21105\/joss.00545"},
{"key":"e_1_2_1_12_1","doi-asserted-by":"publisher","DOI":"10.1109\/TNNLS.2015.2479117"},
{"key":"e_1_2_1_13_1","volume-title":"A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811","author":"Frazier Peter I.","year":"2018","unstructured":"Peter I. Frazier. 2018. A tutorial on Bayesian optimization. arXiv preprint arXiv:1807.02811 (2018)."},
{"key":"e_1_2_1_14_1","volume-title":"Proceedings of European Symposium on Artificial Neural Networks, ESANN","author":"Gallicchio Claudio","year":"2019","unstructured":"Claudio Gallicchio. 2019. Chasing the echo state property. Proceedings of European Symposium on Artificial Neural Networks, ESANN, 2019."},
{"key":"e_1_2_1_15_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2011.02.002"},
{"key":"e_1_2_1_16_1","volume-title":"Proceedings of European Symposium on Artificial Neural Networks, ESANN","author":"Gallicchio Claudio","year":"2016","unstructured":"Claudio Gallicchio and Alessio Micheli. 2016. Deep reservoir computing: A critical analysis. Proceedings of European Symposium on Artificial Neural Networks, ESANN."},
{"key":"e_1_2_1_17_1","doi-asserted-by":"publisher","DOI":"10.1609\/aaai.v34i04.5803"},
{"key":"e_1_2_1_18_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neucom.2016.12.089"},
{"key":"e_1_2_1_19_1","first-page":"2019","volume-title":"Proceedings of European Symposium on Artificial Neural Networks, ESANN","author":"Gallicchio Claudio","year":"2019","unstructured":"Claudio Gallicchio, Alessio Micheli, and Luca Pedrelli. 2019. Comparison between DeepESNs and gated RNNs on multivariate time-series prediction. Proceedings of European Symposium on Artificial Neural Networks, ESANN, 2019 (2019), 619--624."},
{"key":"e_1_2_1_20_1","volume-title":"Weigend","author":"Gershenfeld Neil A.","year":"1993","unstructured":"Neil A. Gershenfeld and Andreas S. Weigend. 1993. The Future of Time Series. Technical Report. Xerox Corporation, Palo Alto Research Center."},
{"key":"e_1_2_1_21_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.patcog.2017.10.013"},
{"key":"e_1_2_1_22_1","doi-asserted-by":"publisher","DOI":"10.1016\/0375-9601(75)90353-9"},
{"key":"e_1_2_1_23_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jfranklin.2016.05.004"},
{"key":"e_1_2_1_25_1","volume-title":"Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science 304, 5667","author":"Jaeger Herbert","year":"2004","unstructured":"Herbert Jaeger and Harald Haas. 2004. Harnessing nonlinearity: Predicting chaotic systems and saving energy in wireless communication. Science 304, 5667 (2004), 78--80."},
{"key":"e_1_2_1_26_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2007.04.016"},
{"key":"e_1_2_1_27_1","doi-asserted-by":"publisher","DOI":"10.1145\/363707.363723"},
{"key":"e_1_2_1_28_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2019.01.002"},
{"key":"e_1_2_1_29_1","volume-title":"Kremer","author":"Kolen John F.","year":"2001","unstructured":"John F. Kolen and Stefan C. Kremer. 2001. A Field Guide to Dynamical Recurrent Networks. John Wiley & Sons."},
{"volume-title":"Clustering of spectral images using Echo state networks. In 2013 IEEE INISTA","author":"Koprinkova-Hristova Petia","key":"e_1_2_1_30_1","unstructured":"Petia Koprinkova-Hristova, Donka Angelova, Denitsa Borisova, and Georgi Jelev. 2013. Clustering of spectral images using Echo state networks. In 2013 IEEE INISTA. IEEE, 1--5."},
{"key":"e_1_2_1_31_1","volume-title":"Fastgrnn: A fast, accurate, stable and tiny kilobyte sized gated recurrent neural network. In Advances in Neural Information Processing Systems. 9017--9028.","author":"Kusupati Aditya","year":"2018","unstructured":"Aditya Kusupati, Manish Singh, Kush Bhatia, Ashish Kumar, Prateek Jain, and Manik Varma. 2018. Fastgrnn: A fast, accurate, stable and tiny kilobyte sized gated recurrent neural network. In Advances in Neural Information Processing Systems. 9017--9028."},
{"key":"e_1_2_1_32_1","first-page":"011015","article-title":"High-speed photonic reservoir computing using a time-delay-based architecture: Million words per second classification","author":"Larger Laurent","year":"2017","unstructured":"Laurent Larger, Antonio Bayl\u00f3n-Fuentes, Romain Martinenghi, Vladimir S. Udaltsov, Yanne K. Chembo, and Maxime Jacquot. 2017. High-speed photonic reservoir computing using a time-delay-based architecture: Million words per second classification. Physical Review X 7, 1 (2017), 011015.","journal-title":"Physical Review"},
{"key":"e_1_2_1_33_1","doi-asserted-by":"publisher","DOI":"10.1109\/JIOT.2017.2683200"},
{"volume-title":"Neural Networks: Tricks of the Trade","author":"Luko\u0161evi\u010dius Mantas","key":"e_1_2_1_34_1","unstructured":"Mantas Luko\u0161evi\u010dius. 2012. A practical guide to applying echo state networks. In Neural Networks: Tricks of the Trade. Springer, 659--686."},
{"key":"e_1_2_1_35_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.cosrev.2009.03.005"},
{"key":"e_1_2_1_36_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.jcss.2004.04.001"},
{"key":"e_1_2_1_37_1","doi-asserted-by":"publisher","DOI":"10.1126\/science.267326"},
{"key":"e_1_2_1_38_1","volume-title":"Exploring sparsity in recurrent neural networks. CoRR abs\/1704.05119","author":"Narang Sharan","year":"2017","unstructured":"Sharan Narang, Gregory F. Diamos, Shubho Sengupta, and Erich Elsen. 2017. Exploring sparsity in recurrent neural networks. CoRR abs\/1704.05119 (2017). arxiv:1704.05119 http:\/\/arxiv.org\/abs\/1704.05119."},
{"key":"e_1_2_1_39_1","volume-title":"International Conference on Machine Learning. 1310--1318","author":"Pascanu Razvan","year":"2013","unstructured":"Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the difficulty of training recurrent neural networks. In International Conference on Machine Learning. 1310--1318."},
{"key":"e_1_2_1_40_1","doi-asserted-by":"publisher","DOI":"10.1109\/TNN.2010.2089641"},
{"key":"e_1_2_1_41_1","doi-asserted-by":"publisher","DOI":"10.1162\/NECO_a_00297"},
{"key":"e_1_2_1_42_1","volume-title":"Ryosho Nakane, Naoki Kanazawa, Seiji Takeda, Hidetoshi Numata, Daiju Nakano, and Akira Hirose.","author":"Tanaka Gouhei","year":"2019","unstructured":"Gouhei Tanaka, Toshiyuki Yamane, Jean Benoit H\u00e9roux, Ryosho Nakane, Naoki Kanazawa, Seiji Takeda, Hidetoshi Numata, Daiju Nakano, and Akira Hirose. 2019. Recent advances in physical reservoir computing: A review. Neural Networks (2019)."},
{"key":"e_1_2_1_43_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2019.02.001"},
{"key":"e_1_2_1_44_1","doi-asserted-by":"publisher","DOI":"10.1023\/A:1010972803901"},
{"volume-title":"Perspectives of Neural-symbolic Integration","author":"Ti\u0148o Peter","key":"e_1_2_1_45_1","unstructured":"Peter Ti\u0148o, Barbara Hammer, and Mikael Bod\u00e9n. 2007. Markovian bias of neural-based architectures with feedback connections. In Perspectives of Neural-symbolic Integration. Springer, 95--133."},
{"key":"e_1_2_1_46_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.neunet.2007.04.003"},
{"key":"e_1_2_1_47_1","doi-asserted-by":"publisher","DOI":"10.1016\/j.knosys.2015.06.003"},
{"key":"e_1_2_1_48_1","first-page":"1","article-title":"Memristor-based echo state network with online least mean square","volume":"99","author":"Wen Shiping","year":"2018","unstructured":"Shiping Wen, Rui Hu, Yin Yang, Tingwen Huang, Zhigang Zeng, and Yong-Duan Song. 2018. Memristor-based echo state network with online least mean square. IEEE Transactions on Systems, Man, and Cybernetics: Systems 99 (2018), 1--10.","journal-title":"IEEE Transactions on Systems, Man, and Cybernetics: Systems"},
{"key":"e_1_2_1_49_1","volume-title":"Proceedings of the 2005 IEEE International Joint Conference on Neural Networks","volume":"3","author":"Xu Dongming","year":"2005","unstructured":"Dongming Xu, Jing Lan, and Jos\u00e9 C. Principe. 2005. Direct adaptive control: An echo state network and genetic algorithm approach. In Proceedings of the 2005 IEEE International Joint Conference on Neural Networks, 2005, Vol. 3. IEEE, 1483--1486."},
{"key":"e_1_2_1_50_1","doi-asserted-by":"publisher","DOI":"10.5555\/2770422.2770455"}],"container-title":["ACM Transactions on Architecture and Code Optimization"],"original-title":[],"language":"en","link":[{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3404993","content-type":"unspecified","content-version":"vor","intended-application":"text-mining"},{"URL":"https:\/\/dl.acm.org\/doi\/pdf\/10.1145\/3404993","content-type":"unspecified","content-version":"vor","intended-application":"similarity-checking"}],"deposited":{"date-parts":[[2025,6,17]],"date-time":"2025-06-17T21:32:09Z","timestamp":1750195929000},"score":1,"resource":{"primary":{"URL":"https:\/\/dl.acm.org\/doi\/10.1145\/3404993"}},"subtitle":["Design and Optimization of Echo State Networks under Memory and Time Constraints"],"short-title":[],"issued":{"date-parts":[[2020,8,17]]},"references-count":49,"journal-issue":{"issue":"3","published-print":{"date-parts":[[2020,9,30]]}},"alternative-id":["10.1145\/3404993"],"URL":"https:\/\/doi.org\/10.1145\/3404993","relation":{},"ISSN":["1544-3566","1544-3973"],"issn-type":[{"type":"print","value":"1544-3566"},{"type":"electronic","value":"1544-3973"}],"subject":[],"published":{"date-parts":[[2020,8,17]]},"assertion":[{"value":"2019-10-01","order":0,"name":"received","label":"Received","group":{"name":"publication_history","label":"Publication History"}},{"value":"2020-06-01","order":1,"name":"accepted","label":"Accepted","group":{"name":"publication_history","label":"Publication History"}},{"value":"2020-08-17","order":2,"name":"published","label":"Published","group":{"name":"publication_history","label":"Publication History"}}]}}