Delay-based reservoir computing is an unconventional information processing method that allows the implementation of recurrent neural networks on different kinds of hardware substrates. It facilitates machine learning based on the transient dynamics of a single nonlinear node through time-multiplexing. Here, we explore the interplay between the driving strength of the nonlinear node and the modulation rate of the time-multiplexing. We find two contrasting combinations of input gain and node separation, each yielding the best performance in a different prediction task. A weak input gain and a large node separation are superior for near-future prediction of a chaotic Mackey-Glass system, while a high input gain and a short node separation lead to the best performance in far-future prediction. Furthermore, for increasing input gains, we find that the node separation yielding the best performance decreases significantly below the characteristic time scale of the underlying delay system. This allows the realization of large networks with up to one thousand nodes even at high processing rates. We investigate the relation between these parameters further by analyzing the average state entropy and computing the information processing capacity of a time-multiplexed reservoir for a single input mask. This supports an in-depth understanding of the interplay between node separation and input gain.
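To make the time-multiplexing scheme concrete, the following minimal Python sketch simulates a delay-based reservoir built from a single nonlinear node. The choice of a tanh nonlinearity, the single Euler step per virtual node, and all parameter values (number of virtual nodes N, node separation theta, input gain, mask distribution, ridge regularization) are illustrative assumptions for this sketch and are not the specific setup studied in the paper; the Mackey-Glass series is generated only as a toy input for one-step-ahead prediction.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy input: chaotic Mackey-Glass series (Euler integration, dt = 1.0).
    def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, p=10, dt=1.0):
        hist = 1.2 * np.ones(tau + n)
        for t in range(tau, tau + n - 1):
            hist[t + 1] = hist[t] + dt * (
                beta * hist[t - tau] / (1 + hist[t - tau] ** p) - gamma * hist[t]
            )
        return hist[tau:]

    u = mackey_glass(4000)
    u = (u - u.mean()) / u.std()
    target = np.roll(u, -1)            # one-step-ahead prediction target

    # Time-multiplexed delay reservoir with a single (illustrative) tanh node.
    N = 100                            # virtual nodes per delay (tau = N * theta)
    theta = 0.2                        # node separation, in units of the node time scale
    gain = 0.5                         # input gain (driving strength)
    mask = rng.uniform(-1, 1, N)       # fixed random input mask

    x_delay = np.zeros(N)              # node states one delay earlier
    states = np.zeros((len(u), N))
    for k, u_k in enumerate(u):
        x = x_delay[-1]
        for i in range(N):
            # One Euler step of  dx/dt = -x + tanh(x(t - tau) + gain * mask_i * u_k)
            drive = np.tanh(x_delay[i] + gain * mask[i] * u_k)
            x = x + theta * (-x + drive)
            states[k, i] = x
        x_delay = states[k].copy()

    # Linear readout trained by ridge regression.
    wash, train = 200, 3000
    X, Y = states[wash:train], target[wash:train]
    W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)
    pred = states[train:-1] @ W
    nrmse = np.sqrt(np.mean((pred - target[train:-1]) ** 2)) / np.std(target[train:-1])
    print(f"one-step-ahead NRMSE: {nrmse:.3f}")

In this sketch the node separation theta and the input gain appear exactly where the abstract places them: theta sets how long the node is driven per virtual node before its state is sampled, and the gain scales the masked input injected into the delayed feedback, so scanning these two parameters reproduces the kind of trade-off discussed above.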