Exploiting transient dynamics of a time-multiplexed reservoir to boost the system performance

Goldmann, Mirko; Mirasso, Claudio R.; Fischer, Ingo; Soriano, Miguel C.
International Joint Conference on Neural Networks (IJCNN) 2021, IEEE Computational Intelligence Society and International Neural Network Society, 2021.

Delay-based reservoir computing is an unconventional information-processing method that allows the implementation of recurrent neural networks on different kinds of hardware substrates. It facilitates machine learning based on the transient dynamics of a single nonlinear node through time-multiplexing. Here, we explore the interplay between the driving strength of the nonlinear node and the modulation rate of the time-multiplexing. We find two contrasting combinations of input gain and node separation, each yielding the best performance in a different prediction task. The combination of a weak input gain and a large node separation is superior for near-future prediction of a chaotic Mackey-Glass system, while a high input gain and a short node separation leads to the best performance for far-future prediction. Furthermore, for increasing input gains, we find that the node separation yielding the best performance decreases significantly below the characteristic time scale of the underlying delay system. This allows the realization of large networks with up to one thousand nodes, even at high processing rates. We investigate the relation between these parameters further by analyzing the average state entropy and computing the information processing capacity of a time-multiplexed reservoir for one input mask. This supports an in-depth understanding of the interplay between node separation and input gain.
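To make the setup concrete, below is a minimal sketch of a time-multiplexed (delay-based) reservoir applied to one-step-ahead Mackey-Glass prediction. It is not the paper's exact model: it assumes a tanh nonlinearity, a simple Euler integration, and a fixed random binary input mask, and the names n_nodes, theta (node separation), input_gain (driving strength), and eta (feedback strength) are illustrative parameter names introduced here.

```python
import numpy as np


def mackey_glass(n_steps, tau=17, beta=0.2, gamma=0.1, n=10, seed=0):
    """Euler-discretized Mackey-Glass series (dt = 1), for illustration only."""
    rng = np.random.default_rng(seed)
    x = np.full(n_steps + tau, 1.2) + 0.01 * rng.standard_normal(n_steps + tau)
    for t in range(tau, n_steps + tau - 1):
        x[t + 1] = x[t] + beta * x[t - tau] / (1.0 + x[t - tau] ** n) - gamma * x[t]
    return x[tau:]


def run_reservoir(u, n_nodes=100, theta=0.2, input_gain=0.5, eta=0.9, seed=1):
    """Time-multiplexed reservoir built from one tanh node with delayed feedback.

    Each scalar input u[k] is held over one delay period tau = n_nodes * theta
    and weighted by a fixed random binary mask; the node state sampled once per
    node separation theta yields one virtual node. This is a map-like Euler
    approximation, not the continuous-time model used in the paper.
    """
    rng = np.random.default_rng(seed)
    mask = rng.choice([-1.0, 1.0], size=n_nodes)  # fixed time-multiplexing mask
    steps = max(1, round(theta / 0.05))           # Euler sub-steps per virtual node
    dt = theta / steps
    buf = np.zeros(n_nodes * steps)               # circular delay line of length tau
    x, ptr = 0.0, 0
    states = np.zeros((len(u), n_nodes))
    for k, u_k in enumerate(u):
        for i in range(n_nodes):
            for _ in range(steps):
                drive = eta * buf[ptr] + input_gain * mask[i] * u_k
                x += dt * (-x + np.tanh(drive))   # leaky node with delayed feedback
                buf[ptr] = x
                ptr = (ptr + 1) % len(buf)
            states[k, i] = x                      # sample once per node separation
    return states


# One-step-ahead prediction with a ridge-regression readout.
series = mackey_glass(2500)
X, y = run_reservoir(series[:-1]), series[1:]
washout, split = 100, 2000
A = X[washout:split]
w = np.linalg.solve(A.T @ A + 1e-6 * np.eye(A.shape[1]), A.T @ y[washout:split])
nmse = np.mean((X[split:] @ w - y[split:]) ** 2) / np.var(y[split:])
print(f"test NMSE: {nmse:.4f}")
```

Sweeping input_gain and theta in this sketch mimics the kind of parameter exploration the abstract describes: small theta with a strong drive packs many virtual nodes into one delay period, while large theta with a weak drive gives each node more time to relax between samples.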

