Spiking neural network
Spiking neural networks (SNNs) fall into the third generation of neural network models, increasing the level of realism in a neural simulation.[1] In addition to neuronal and synaptic state, SNNs incorporate the concept of time into their operating model. The idea is that neurons in an SNN do not fire at every propagation cycle (as happens with typical multi-layer perceptron networks), but rather fire only when a membrane potential – an intrinsic quality of the neuron related to its membrane electrical charge – reaches a specific threshold value. When a neuron fires, it generates a signal that travels to other neurons, which in turn increase or decrease their potentials in response to this signal.
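A common minimal formalization of this threshold behaviour is the leaky integrate-and-fire model (introduced below); as a sketch, using conventional symbols rather than notation from any particular source:

$$ \tau_m \frac{dV}{dt} = -\left(V - V_\mathrm{rest}\right) + R\,I(t) $$

where $V$ is the membrane potential, $\tau_m$ the membrane time constant, $R$ the membrane resistance and $I(t)$ the input current; when $V$ reaches a threshold $V_\mathrm{th}$, the neuron emits a spike and $V$ is reset to a value $V_\mathrm{reset}$.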
In the context of spiking neural networks, the neuron's current activation level (modeled as some differential equation) is normally taken to be its state; incoming spikes push this value higher, after which the neuron either fires or the value decays back down over time. Various coding methods exist for interpreting the outgoing spike train as a real-valued number, relying either on the frequency of spikes or on the timing between spikes to encode information.
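As a concrete illustration, the following minimal sketch in plain Python (all names and parameter values are illustrative, not taken from any particular model in the literature) implements such a neuron: its state decays over time, incoming spikes push it higher, and it fires and resets on reaching the threshold:

```python
import random

# Illustrative parameters, chosen only for demonstration.
V_REST, V_THRESH, V_RESET = 0.0, 1.0, 0.0
TAU = 20.0   # membrane time constant (ms)
DT = 1.0     # simulation time step (ms)
W = 0.3      # potential increase per incoming spike

def simulate_lif(input_spikes, steps):
    """Return output spike times of a leaky integrate-and-fire neuron.

    input_spikes: set of time steps at which an input spike arrives.
    """
    v = V_REST
    out = []
    for t in range(steps):
        # Exponential decay of the state toward the resting potential.
        v += (V_REST - v) * (DT / TAU)
        # Each incoming spike pushes the membrane potential higher.
        if t in input_spikes:
            v += W
        # Fire when the threshold is reached, then reset.
        if v >= V_THRESH:
            out.append(t)
            v = V_RESET
    return out

# Drive the neuron with a random input spike train and print its output.
inputs = {t for t in range(200) if random.random() < 0.3}
print(simulate_lif(inputs, 200))
```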
Beginnings
The first scientific model of a spiking neuron was proposed by Alan Lloyd Hodgkin and Andrew Huxley in 1952. This model describes how action potentials are initiated and propagated. Spikes, however, are not generally transmitted directly between neurons: communication requires the exchange of chemical substances, called neurotransmitters, in the synaptic gap. The complexity and variability of biological neurons have led to a variety of simplified neuron models, such as the integrate-and-fire model (1907), the FitzHugh–Nagumo model (1961–1962) and the Hindmarsh–Rose model (1984).
From an information-theoretic point of view, the problem is to propose a model that explains how information is encoded and decoded in a series of pulse trains, i.e. action potentials. Thus, one of the fundamental questions of neuroscience is whether neurons communicate by a rate code or a temporal code.[2] Temporal coding suggests that a single spiking neuron can replace hundreds of hidden units in a sigmoidal neural network.[1]
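To make the distinction concrete, the sketch below (illustrative helper functions, not a standard library API) decodes the same spike train in both ways: a rate code reads out the number of spikes per time window, while one simple temporal code, time to first spike, reads out the latency of the earliest spike:

```python
def rate_decode(spike_times, window):
    """Rate code: mean firing rate (spikes per unit time) over the window."""
    return sum(1 for t in spike_times if t < window) / window

def latency_decode(spike_times):
    """Temporal code (time to first spike): an earlier first spike is
    taken to represent a stronger stimulus."""
    return min(spike_times) if spike_times else None

spikes = [3, 17, 31, 45, 59]      # spike times in ms, for illustration
print(rate_decode(spikes, 100))   # 0.05 spikes/ms -> rate-coded value
print(latency_decode(spikes))     # 3 ms           -> latency-coded value
```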
Applications
This kind of neural network can, in principle, be used for the same information-processing applications as traditional artificial neural networks.[3] For example, spiking neural networks have been used to model the central nervous system of a virtual insect seeking food without prior knowledge of the environment.[4] Because of their more realistic properties, they can also be used to study the operation of biological neural circuits: starting from a hypothesis about the topology and function of a biological neuronal circuit, electrophysiological recordings of the circuit can be compared with the output of a corresponding spiking artificial neural network simulated on a computer, to assess the plausibility of the starting hypothesis.
In practice, there is a major difference between the theoretical power of spiking neural networks and what has been demonstrated: they have proved useful in neuroscience, but not (yet) in engineering. Some large-scale neural network models that take advantage of the pulse coding found in spiking neural networks have been designed; these mostly rely on the principles of reservoir computing. However, real-world application of large-scale spiking neural networks has been limited, because the increased computational cost of simulating realistic neural models has not been justified by commensurate benefits in computational power. As a result, there has been little application of large-scale spiking neural networks to computational tasks of the order and complexity commonly addressed using rate-coded (second generation) neural networks. In addition, it can be difficult to adapt second generation neural network models to real-time spiking neural networks, especially if the network algorithms are defined in discrete time. It is relatively easy to construct a spiking neural network model and observe its dynamics; it is much harder to develop a model with stable behavior that computes a specific function.
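One simple way to bridge the two generations, given here as an illustrative sketch rather than a method described above (the function name and parameters are hypothetical), is to re-encode a rate-coded activation as a stochastic spike train whose density is proportional to the activation:

```python
import random

def poisson_encode(activation, steps, max_rate=0.5):
    """Convert a rate-coded activation in [0, 1] into a spike train:
    at each time step, emit a spike with probability activation * max_rate
    (a per-step Bernoulli approximation of a Poisson process)."""
    p = activation * max_rate
    return [1 if random.random() < p else 0 for _ in range(steps)]

# A strong activation yields a dense spike train, a weak one a sparse train.
print(sum(poisson_encode(0.9, 1000)))  # roughly 450 spikes
print(sum(poisson_encode(0.1, 1000)))  # roughly 50 spikes
```

A downstream spiking network can then consume such trains directly, at the cost of the sampling noise visible in the varying spike counts.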
Software
There is a diverse range of application software for simulating spiking neural networks. This software can be classified according to the purpose of the simulation:
- Software used primarily to simulate biological spiking neural networks in order to study their operation and characteristics. This group includes simulators such as GENESIS (the GEneral NEural SImulation System), developed in James Bower's laboratory at Caltech; NEURON, mainly developed by Michael Hines, John W. Moore and Ted Carnevale at Yale University and Duke University; Brian, developed by Romain Brette and Dan Goodman at the École Normale Supérieure (a minimal usage sketch for one such simulator appears after this list); and NEST, developed by the NEST Initiative. Software of this type usually supports the simulation of complex neural models with a high level of detail and accuracy; however, simulating large networks is usually very time-consuming.
- Software which addresses information-processing tasks in order to solve practical problems. This group includes computer programs such as SpikeNET, developed by Delorme and Thorpe in a collaboration between the Centre de Recherche Cerveau et Cognition and SpikeNet Technology. Software of this type usually runs very fast simulations but does not allow the simulation of very complex or biologically realistic neural models.
- Software which supports the efficient simulation of relatively complex neural models, so that it is also convenient for information-processing tasks. Such software can exploit the characteristics of biological neurons to perform computations while also permitting study of how those characteristics function. This group includes EDLUT, developed at the University of Granada. Software of this type must be efficient enough to run fast simulations, sometimes even in real time, while still supporting neural models that are detailed and biologically plausible.
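As an illustration of the first group above, the following minimal sketch uses the Python interface of the Brian simulator (written against the current brian2 package, which must be installed; all parameter values are illustrative) to build and run a small population of leaky integrate-and-fire neurons:

```python
from brian2 import *  # assumes the brian2 package is installed

tau = 10*ms
eqs = '''
dv/dt = (I - v) / tau : 1
I : 1 (constant)
'''

# Five leaky integrate-and-fire neurons with different constant drives.
G = NeuronGroup(5, eqs, threshold='v > 1', reset='v = 0', method='exact')
G.I = '1.1 + 0.2 * i'  # the neuron index i sets each neuron's drive

M = SpikeMonitor(G)
run(100*ms)
print(M.i[:])  # which neuron spiked
print(M.t[:])  # when it spiked
```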
Hardware
Neurogrid, built at Stanford University, is a board that can simulate spiking neural networks directly in hardware. SpiNNaker (Spiking Neural Network Architecture), designed at the University of Manchester, uses ARM processors as the building blocks of a massively parallel computing platform based on a six-layer thalamocortical model.[5]
Another implementation is IBM's TrueNorth processor. This processor contains 5.4 billion transistors but is designed to consume very little power, only 70 milliwatts; by comparison, a typical processor in a personal computer contains about 1.4 billion transistors and requires 35 watts or more. IBM refers to the design principle behind TrueNorth as neuromorphic computing. Its primary purpose is pattern recognition; while critics say the chip is not powerful enough, its supporters point out that this is only the first generation, and the capabilities of improved iterations will become clear.[6]
See also
- CoDi
- Cognitive architecture
- Cognitive map
- Computational neuroscience
- Neural coding
- Neural correlate
- Neural decoding
- Neuroethology
- Neuroinformatics
- Models of neural computation
- Motion perception
- Systems neuroscience
References
- ↑ Maass, Wolfgang (1997). "Networks of Spiking Neurons: The Third Generation of Neural Network Models". Neural Networks. doi:10.1016/S0893-6080(97)00011-7.
- ↑ Wulfram Gerstner (2001). "Spiking Neurons". In Wolfgang Maass; Christopher M. Bishop. Pulsed Neural Networks. MIT Press. ISBN 0-262-63221-7.
- ↑ Alnajjar, F.; Murase, K. "A simple Aplysia-like spiking neural network to generate adaptive behavior in autonomous robots". Adaptive Behavior. 14 (5): 306–324. doi:10.1177/1059712308093869.
- ↑ Zhang, X.; Xu, Z.; Henriquez, C.; Ferrari, S. (Dec 2013). "Spike-based indirect training of a spiking neural network-controlled virtual insect". IEEE Conference on Decision and Control (CDC): 6798–6805. doi:10.1109/CDC.2013.6760966.
- ↑ Jin, Xin; Furber, S. B.; Woods, J. V. (2008). "Efficient modelling of spiking neural networks on a scalable chip multiprocessor". 2008 IEEE International Joint Conference on Neural Networks (IEEE World Congress on Computational Intelligence). pp. 2812–2819. doi:10.1109/IJCNN.2008.4634194. ISBN 978-1-4244-1820-6.
- ↑ Markoff, John (August 8, 2014). "A New Chip Functions Like a Brain, IBM Says". The New York Times. p. B1.
External links
- Full text of the book Spiking Neuron Models. Single Neurons, Populations, Plasticity by Wulfram Gerstner and Werner M. Kistler (ISBN 0-521-89079-9)