EchoBay: Design and Optimization of Echo State Networks under Memory and Time Constraints
The increase in the computational power of embedded devices and the latency demands of novel applications have brought a paradigm shift in how and where computation is performed. Although AI inference is slowly moving from the cloud to end-devices with limited resources, time-centric recurrent networks such as Long Short-Term Memory remain too complex to be ported to embedded devices without extreme simplifications that limit the performance of many notable applications. To address this issue, the Reservoir Computing paradigm proposes sparse, untrained non-linear networks, the Reservoir, that can embed temporal relations without some of the hindrances of Recurrent Neural Network training, and with a lower memory occupation. Echo State Networks (ESN) and Liquid State Machines are the most notable examples. In this scenario, we propose EchoBay, a comprehensive C++ library for ESN design and training. EchoBay is architecture-agnostic to guarantee maximum performance on different devices (whether embedded or not), and it offers the possibility to optimize and tailor an ESN to a particular case study, minimizing the effort required on the user side. This is achieved through a Bayesian Optimization (BO) process, which efficiently and automatically searches for the hyper-parameters that maximize a fitness function. Additionally, we designed different optimization techniques that take into consideration the resource constraints of the device to minimize memory footprint and inference time. Our results in different scenarios show an average speed-up in training time of 119x compared to Grid and Random search of hyper-parameters, a 94% decrease in trained model size, and a 95% decrease in inference time, while maintaining comparable performance for the given task. The EchoBay library is open source and publicly available at https://github.com/necst/Echobay.
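For readers unfamiliar with the reservoir model the abstract refers to, the following is a minimal, self-contained C++ sketch of the standard leaky Echo State Network state update, x(t+1) = (1-a) x(t) + a tanh(W_in u(t+1) + W x(t)). It is illustrative only and does not reflect EchoBay's actual API; the names W_in, W, the leak rate a, and the toy sine input are standard ESN notation introduced here for the example, not taken from the paper.

```cpp
// Illustrative sketch of the standard leaky ESN state update (not the EchoBay API).
// W_in and W are random and stay untrained; only a linear readout is fitted
// (e.g. via ridge regression), which is what keeps reservoir training cheap
// compared to back-propagation through time.
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// Dense matrix-vector product (a real implementation would exploit sparsity).
Vec matvec(const Mat& M, const Vec& v) {
    Vec out(M.size(), 0.0);
    for (std::size_t i = 0; i < M.size(); ++i)
        for (std::size_t j = 0; j < v.size(); ++j)
            out[i] += M[i][j] * v[j];
    return out;
}

// One reservoir update step with leak rate `a`.
Vec esn_step(const Mat& W_in, const Mat& W, const Vec& x, const Vec& u, double a) {
    Vec pre = matvec(W_in, u);   // input contribution
    Vec rec = matvec(W, x);      // recurrent contribution
    Vec x_new(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        x_new[i] = (1.0 - a) * x[i] + a * std::tanh(pre[i] + rec[i]);
    return x_new;
}

int main() {
    std::mt19937 gen(42);
    std::uniform_real_distribution<double> dist(-0.5, 0.5);
    const std::size_t n_res = 50, n_in = 1;

    // Random, untrained weights; in practice W is kept sparse and rescaled so
    // that its spectral radius stays below 1 (echo state property).
    Mat W_in(n_res, Vec(n_in)), W(n_res, Vec(n_res));
    for (auto& row : W_in) for (auto& w : row) w = dist(gen);
    for (auto& row : W)    for (auto& w : row) w = dist(gen) * 0.1;

    Vec x(n_res, 0.0);
    for (int t = 0; t < 100; ++t)
        x = esn_step(W_in, W, x, Vec{std::sin(0.1 * t)}, 0.3);  // drive with a toy sine input
    return 0;
}
```

Hyper-parameters such as the reservoir size, leak rate, spectral radius, and input scaling are exactly the quantities that the BO process described in the abstract searches over, trading task fitness against the memory and timing constraints of the target device.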