Deep Neural Networks (DNNs) are a useful tool for a wide range of tasks such as image classification, object detection, image resizing or text generation. They are extremely powerful, but training them requires substantial computing power and large data sets. DNNs typically consist of multiple layers of neurons coupled in a feed-forward architecture, which can easily comprise hundreds of thousands of neurons. In such an architecture, information moves in only one direction: forward, from the input to the output.
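As a minimal illustration of this conventional picture (our own sketch, not code from the paper; the layer sizes and random weights are arbitrary placeholders), the following Python snippet passes an input forward through a small stack of layers:

```python
import numpy as np

# Minimal sketch of a conventional feed-forward network (placeholder
# sizes and random weights, not anything from the paper): information
# flows strictly forward, from the input through the hidden layers to
# the output.

rng = np.random.default_rng(1)

layer_sizes = [8, 16, 16, 4]        # input, two hidden layers, output
weights = [rng.normal(scale=0.3, size=(m, n))
           for n, m in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(x):
    # Each layer feeds only the next one; nothing flows backward.
    for W in weights:
        x = np.tanh(W @ x)
    return x

print(forward(rng.normal(size=layer_sizes[0])))
```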
An international team of researchers, including scientists from IFISC (UIB-CSIC), has developed an approach to emulate a full deep neural network using only a single artificial neuron with feedback-modulated delay loops. The large network of many interacting elements is replaced by a single element that represents the different network elements at successive points in time by interacting with its own delayed states. The paper has been published in Nature Communications and has been selected for the Nature Communications Editors' Highlights webpage, in the section on recent multidisciplinary research called "AI and machine learning".
This new approach, named Folded-in-Time Deep Neural Network (Fit-DNN) by the authors, can drastically reduce the required hardware and offers a new perspective on how to construct trainable complex systems, since only a single neuron and several delay lines need to be implemented. Delay systems inherently possess an infinite-dimensional phase space, so a single neuron with feedback is sufficient to fold the entire complexity of the network into time.
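To make the folding idea concrete, here is a hedged, discrete-time sketch of our own (not the authors' implementation): a single nonlinear node whose activity over each interval of N time steps encodes one layer of N virtual neurons, with delay loops and time-varying modulation signals standing in for the trained inter-layer weights:

```python
import numpy as np

# Illustrative sketch of the folded-in-time idea, not the authors' code.
# One nonlinear "neuron" is simulated in discrete time; each interval of
# N time steps encodes one layer of N virtual neurons. Delay loops of
# lengths 1 .. 2N-1 connect every virtual neuron of the previous layer
# to every virtual neuron of the current one, and time-varying modulation
# signals play the role of the trained layer-to-layer weights (random here).

rng = np.random.default_rng(0)

N = 4                     # virtual neurons per layer
L = 3                     # number of hidden layers
D = 2 * N - 1             # delay loops needed for full layer-to-layer coupling
T = (L + 1) * N           # total simulated time steps
f = np.tanh               # nonlinearity of the single physical neuron

modulation = rng.normal(scale=0.5, size=(D, T))  # stand-in for trained weights

x = np.zeros(T)
x[:N] = rng.normal(size=N)          # first interval: the input layer

for t in range(N, T):
    layer_start = (t // N) * N      # first time slot of the current layer
    s = 0.0
    for d in range(1, D + 1):
        src = t - d
        # Keep only connections into the previous layer's interval; in the
        # paper this masking is realized by the modulation signals themselves.
        if layer_start - N <= src < layer_start:
            s += modulation[d - 1, t] * x[src]
    x[t] = f(s)

# Reading the single neuron's time trace interval by interval recovers
# the layer structure of an ordinary feed-forward network.
print(x.reshape(L + 1, N))
```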
The Fit-DNN approach is particularly attractive for hardware implementations, where the neural network itself is built from physical substrates. In a typical DNN each neuron would need its own dedicated hardware element, but by capturing the behavior of the whole system with a single neuron, the need to implement a full network is removed. The authors exemplify this with an optoelectronic scheme: in this type of implementation, only one light emitter is required, plus some standard telecommunication components. Fit-DNNs thus make it possible to trade computational speed against the number of hardware components, since implementing a complete DNN in hardware would be very expensive. The approach does not, however, shorten the training process, which still requires a conventional computer.
The Fit-DNN approach thus provides an alternative view of neural networks: the entire topological complexity of a feed-forward multilayer neural network can be folded into the temporal domain by the delay-loop architecture.
Deep neural networks using a single neuron: folded-in-time architecture using feedback-modulated delay loops. Stelzer, Florian; Röhm, Andre; Vicente, Raul; Fischer, Ingo; and Yanchuk, Serhiy. Nature Communications 12, 5164 (2021). DOI: https://doi.org/10.1038/s41467-021-25427-4