The Sound of Small Brain Circuits: Plasticity and Synchronisation in the Neurogranular Sampler

This paper explores control strategies that use synaptic plasticity in a simulated network of artificial spiking neurons within the Neurogranular Sampler, a musical instrument which triggers grains of sampled sound when the neurons 'fire.'


Introduction

The Neurogranular Sampler is a software musical instrument which triggers grains of live sampled audio when any one of a network of artificial spiking neurons ‘fires’ [10,11]. The level of synchronisation in distributed systems is often controlled by the strength of interaction between the individual elements; if the elements are neurons in small brain circuits, the characteristic event is the ‘firing time’ of a particular neuron. In this paper we propose how we might ‘neuroengineer’ the collective firing behaviour of small networks of artificial neurons, and therefore also engineer the sound of the Neurogranular Sampler, by exploiting a counter-intuitive property of neuronal plasticity.

Plasticity

The term ‘plasticity’ in the neurosciences refers to the ability of neurons, or nerve cells, to adapt their connectivity according to the electrical activity of the other cells in the network. The ability of cells to make new physical synaptic connections via axonal growth and synaptic growth and decay (synaptogenesis) is known as structural plasticity. [1] In this scenario, the axons (long tubular structures) of neurons grow towards other active cells through an induced chemical gradient, a process which has a timescale of hours to days. A different category of plasticity is known as ‘Spike Timing Dependent Plasticity’ (STDP) and refers to the millisecond-timescale strengthening and weakening of connections between neurons as a result of the transmission and reception of causal spike signals. [2]

This ‘causal’ effect was initially proposed by Donald Hebb and is known generally in the literature as ‘Hebbian Learning.’ [3] Essentially, the idea is that connections between neurons become post-synaptically strengthened (i.e. in the direction of the motion of the spike signal) if the pre-synaptic neuron fires before the post-synaptic neuron. In the Spike Timing Dependent Plasticity scenario, this has been refined such that the change in the strength of a connection depends upon the relative timings of pre-synaptic inputs and post-synaptic spikes. [4]
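To make the timing dependence concrete, the sketch below implements the widely used exponential STDP window in Python. The parameter values (the potentiation and depression amplitudes and the 20 ms time constants) are illustrative assumptions rather than values from our instrument; the hebbian flag anticipates the anti-Hebbian variant discussed later in the paper.

```python
import numpy as np

# Illustrative STDP parameters (assumptions, not values from the paper)
A_PLUS = 0.01      # maximum potentiation
A_MINUS = 0.012    # maximum depression
TAU_PLUS = 20.0    # potentiation time constant (ms)
TAU_MINUS = 20.0   # depression time constant (ms)

def stdp_weight_change(delta_t, hebbian=True):
    """Weight change for a single pre/post spike pair.

    delta_t = t_post - t_pre in milliseconds.  Positive delta_t (the
    pre-synaptic neuron fires before the post-synaptic one) potentiates
    the synapse; negative delta_t depresses it.  hebbian=False flips
    the sign, giving an anti-Hebbian rule.
    """
    if delta_t >= 0:
        dw = A_PLUS * np.exp(-delta_t / TAU_PLUS)
    else:
        dw = -A_MINUS * np.exp(delta_t / TAU_MINUS)
    return dw if hebbian else -dw

# Example: a pre-synaptic spike 5 ms before a post-synaptic spike
print(stdp_weight_change(5.0))    # small positive change (potentiation)
print(stdp_weight_change(-5.0))   # small negative change (depression)
```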

This continuous strengthening and weakening of neuronal connections, resulting from the timing of neuronal stimulation and the relative timing of the resultant spiking behaviour, coupled with the effects of the differing transit times of spike signals along different axonal topologies, has led to the idea of ‘Polychronisation’ (firing not at the same time, but in clusters). This term, put forward by Izhikevich and others, describes the formation of groups of neurons which fire according to particular sensory and cortical inputs. [5] A neuron can be a member of any number of such groups, meaning that it is not simply the number of neurons which is involved in neuronal processing, but the combinatorial number of possible polychronous groups, which in the human brain is larger than the total number of elementary particles in the Universe.

The idea that patterns and sequences of neuronal firing might be associated with particular sensory inputs has been around for a long time (see for example [6]), but it is only relatively recently that this has begun to be understood at the level of the micro-dynamics of networks of cells. At this dynamical level, the interplay between model parameters associated with neuronal topologies, spike transit times (often called ‘delays’ in the literature), sensory input, synaptic plasticity and global ‘noisy’ inputs crucially affects the robustness and formation of polychronous groups and the associated spike timings.

These models of small brain circuits provide us with a very rich dynamical palette with which to experiment on the control of sound, by using the ‘spike’ signal of an individual neuron to trigger sonic events. Typically, in neuro-technological or neuro-engineering contexts, the spiking output of a network of artificial spiking neurons goes through an ‘encoding process’ and is sent to a ‘motor’ control, such as one which might control the movements of a robot. [7] One example of this encoding process is called ‘rate coding’, in which the frequency of the spikes generated in the output of the network is interpreted and used as a control parameter, and the resulting behaviour is fed back to the network. [8] In our work, directed towards sonic control, the spiking output is the motor output, and in a sense the artificial neurons in our system have a triple sensory, cortical and motor character. [9]
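For comparison with our approach, the following is a minimal sketch of rate coding: spikes falling inside a recent time window are counted and mapped to a normalised control value. The window length and the rate ceiling are illustrative assumptions, not parameters from the cited systems.

```python
import numpy as np

def rate_code(spike_times, t_now, window=100.0, max_rate=50.0):
    """Map recent spiking activity to a control value in [0, 1].

    spike_times : spike times in ms from one neuron or the network output
    t_now       : current time (ms)
    window      : length of the counting window (ms), an assumed value
    max_rate    : firing rate in Hz mapped to a control value of 1.0
    """
    spike_times = np.asarray(spike_times)
    recent = spike_times[(spike_times > t_now - window) & (spike_times <= t_now)]
    rate_hz = len(recent) / (window / 1000.0)   # spikes per second
    return min(rate_hz / max_rate, 1.0)

# Example: 4 spikes in the last 100 ms -> 40 Hz -> control value 0.8
print(rate_code([910, 935, 960, 985], t_now=1000.0))
```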

The Neurogranular Sampler

In the Neurogranular Sampler, [10] a spike from any one of a user-specified number of artificial neurons triggers a single ‘grain’ of sound, taken either from a ‘live’ microphone or from a pre-recorded sound file. Typically, these grains can be between 20 milliseconds and one second in duration. We can choose different kinds of neurons, which exhibit different kinds of spiking behaviour (Regular Spiking or Bursting, for example), and choose either a homogeneous group of neurons or a heterogeneous group (a selection of different types). If the group of neurons is chosen to be homogeneous and of the ‘Regular Spiking’ variety, we find that the network of spiking neurons rapidly enters a dynamical state in which the neurons fire together, almost in synchrony (see Fig. 1). In Fig. 1, the firing activity in a simulated network of 64 neurons (labelled on the ‘Neuron Index’ axis) is shown over a period of 2000 milliseconds, or two seconds. A dot on the diagram, or ‘Raster Plot’ as it is known in the neuroscience literature, indicates a firing ‘event’ from that particular neuron. Vertical lines on this diagram indicate synchronous firing behaviour, meaning that the instrument will exhibit pulse-like behaviour, the frequency of which can be controlled by a ‘stretching’ or compression of the audio signal. We can move away from this synchronous dynamical state in several ways; one way is to introduce heterogeneity into the system, i.e. by introducing different kinds of artificial neurons. This acts as a kind of structural disorder, making the synchronous state impossible. Perhaps surprisingly, another way of moving away from the synchronous state is to exploit a property of synaptic plasticity in small brain circuits recently discovered by Lubenov and Siapas. [11]
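The sketch below illustrates the kind of simulation that underlies Fig. 1: a pulse-coupled network of 64 Izhikevich regular-spiking neurons in which every firing event would hand a grain trigger to the audio engine. The coupling strength, background drive and the trigger_grain placeholder are illustrative assumptions and not the published implementation [10].

```python
import numpy as np

# Minimal sketch of a pulse-coupled network of Izhikevich regular-spiking
# neurons driving grain triggers (assumed parameters, not the published code).
N = 64                                  # number of neurons, as in Fig. 1
a, b, c, d = 0.02, 0.2, -65.0, 8.0      # regular-spiking parameters
W = 0.5 * np.ones((N, N))               # uniform excitatory coupling (assumed)
np.fill_diagonal(W, 0.0)

v = -65.0 + 5.0 * np.random.randn(N)    # membrane potentials (mV)
u = b * v                               # recovery variables
T = 2000                                # simulated time (ms), as in Fig. 1

spikes = []                             # (time, neuron index) pairs for a raster

def trigger_grain(neuron_index, t):
    """Placeholder: hand a 20 ms - 1 s grain to the audio engine."""
    spikes.append((t, neuron_index))

for t in range(T):                      # 1 ms time steps
    fired = v >= 30.0                   # neurons that spiked this step
    for idx in np.flatnonzero(fired):
        trigger_grain(idx, t)
    v[fired] = c                        # membrane reset after a spike
    u[fired] += d
    I = 5.0 + 2.0 * np.random.randn(N)  # noisy background drive (assumed)
    I += W @ fired.astype(float)        # pulse coupling from this step's spikes
    # two 0.5 ms sub-steps for numerical stability, as in Izhikevich's model
    v += 0.5 * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    v += 0.5 * (0.04 * v**2 + 5.0 * v + 140.0 - u + I)
    u += a * (b * v - u)

print(f"{len(spikes)} firing events (grain triggers) in {T} ms")
```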

Controlling Synchrony

Lubenov and Siapas showed that if the neurons in a network of artificial regular spiking Izhikevich neurons all start in a synchronous regular spiking state, the introduction of Hebbian Spike Timing Dependent Plasticity into the model network rapidly takes the network into a very uncorrelated state, in which the spiking patterns are almost indistinguishable from a random pattern. [11] As the neurons’ firing times are already initially synchronised, the adaptation of relative spike times due to the changing of the connection strengths introduced by the plasticity algorithm can only have the effect of taking the spikes out of synchrony! The network subsequently gradually re-aligns itself temporally and self-organises to a state at the ‘border between randomness and synchrony.’ [11]
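As a rough sketch of how such a plasticity rule acts on the network's weight matrix, the function below applies a nearest-spike STDP update whenever a set of neurons fires; it could be called once per time step inside a simulation loop like the one sketched above. The parameter values and the weight clipping are illustrative assumptions, and hebbian=False gives the sign-flipped, anti-Hebbian variant mentioned below.

```python
import numpy as np

def apply_stdp(W, last_spike, fired, t, a_plus=0.01, a_minus=0.012,
               tau=20.0, hebbian=True, w_max=1.0):
    """Nearest-spike STDP update when the neurons in `fired` spike at time t.

    W[i, j] is the synapse from pre-synaptic neuron j to post-synaptic
    neuron i; last_spike[k] is neuron k's most recent firing time
    (-inf if it has not yet fired).  Parameter values are illustrative
    assumptions; hebbian=False flips the sign of the rule.
    """
    sign = 1.0 if hebbian else -1.0
    for post in np.flatnonzero(fired):
        dt = t - last_spike                 # time since each neuron last fired
        # pre-before-post: strengthen incoming synapses to `post`
        W[post, :] += sign * a_plus * np.exp(-dt / tau)
        # post-before-pre: weaken outgoing synapses from `post`
        W[:, post] -= sign * a_minus * np.exp(-dt / tau)
    np.clip(W, 0.0, w_max, out=W)
    np.fill_diagonal(W, 0.0)                # no self-connections
    last_spike[fired] = t
    return W

# Tiny demo: neuron 2 fires 5 ms after neurons 0 and 1.
W = 0.5 * np.ones((3, 3)); np.fill_diagonal(W, 0.0)
last_spike = np.array([100.0, 100.0, -np.inf])
fired = np.array([False, False, True])
apply_stdp(W, last_spike, fired, t=105.0)
print(W)   # synapses 0->2 and 1->2 strengthened; 2->0 and 2->1 weakened
```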

We can exploit this in our Neurogranular Sampler instrument; in this way synaptic plasticity is being used as a control mechanism to ‘de-synchronise’ the network of neurons. It is also possible to use an Anti-Hebbian algorithm [11] in order to re-establish the initial regular spiking synchronised state, and we can follow the correlation in the network spiking behaviour using an ‘order parameter’, a quantity (and a phrase) borrowed from condensed matter physics.
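The text does not specify which order parameter we use; a common choice in this context is the Kuramoto order parameter computed from spike phases, sketched below under that assumption. A value near 1 indicates synchrony, while a value near 0 indicates uncorrelated firing.

```python
import numpy as np

def kuramoto_order_parameter(spike_trains, t):
    """Synchrony measure r in [0, 1] at time t from per-neuron spike times.

    Each neuron's phase is interpolated linearly between the two spikes
    that bracket t.  The Kuramoto definition used here is an assumption,
    not necessarily the measure used in the instrument.
    """
    phases = []
    for spikes in spike_trains:
        spikes = np.asarray(spikes)
        before = spikes[spikes <= t]
        after = spikes[spikes > t]
        if len(before) == 0 or len(after) == 0:
            continue                      # cannot assign a phase yet
        t_prev, t_next = before[-1], after[0]
        phases.append(2 * np.pi * (t - t_prev) / (t_next - t_prev))
    if not phases:
        return 0.0
    return abs(np.mean(np.exp(1j * np.array(phases))))

# Example: three neurons firing almost together -> r close to 1
trains = [[0, 100, 200], [2, 102, 202], [1, 99, 201]]
print(kuramoto_order_parameter(trains, t=150.0))
```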

 

John Matthias

Associate Professor in Sonic Arts

Plymouth University

John.matthias@plymouth.ac.uk

 

Tim Hodgson

Plymouth University

tim@tadn.net

 

Kevin McCracken

Plymouth University

Kevin.McCracken@plymouth.ac.uk

References and Notes: 
  1. R. Lamprecht and J. LeDoux, “Structural Plasticity and Memory,” in Nature Reviews Neuroscience 5 (2004): 45-54.
  2. S. Song, K. D. Miller and L. F. Abbott., “Competitive Hebbian Learning through Spike-Timing-Dependent Synaptic Plasticity,” in Nature Neuroscience 3 (2000): 921-926.
  3. D. Hebb, The Organization of Behavior (New York: Wiley & Sons, 1949).
  4. H. Markram, J. Lübke, M. Frotscher and B. Sakmann, "Regulation of Synaptic Efficacy by Coincidence of Postsynaptic APs and EPSPs," in Science 275, no. 5297 (1997): 213-215.
  5. E. Izhikevich, J. Gally and G. Edelman, "Spike-Timing Dynamics of Neuronal Groups," in Cerebral Cortex 14, no. 8 (2004): 933-944.
  6. M. Abeles, Corticonics: Neural Circuits of the Cerebral Cortex (Cambridge: Cambridge University Press, 1991).
  7. E. Nichols, L. McDaid and N. H. Siddique, “Case Study on a Self-Organizing Spiking Neural Network for Robot Navigation,” in International Journal of Neural Systems 20, no. 6 (2010): 501-508.
  8. P. Dayan and L. F. Abbott, Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems (Cambridge, MA: The MIT Press, 2001).
  9. J. Grant, J. Matthias, T. Hodgson and E. Miranda, “Hearing Thinking,” in EvoWorkshops 2009: Lecture Notes in Computer Science 5484, ed. M. Giacobini et al. (Berlin: Springer-Verlag, 2009).
  10. E. R. Miranda and J. R. Matthias, "Granular Sampling using a Pulse-Coupled Network of Spiking Neurons," in EvoWorkshops 2005: Lecture Notes in Computer Science 3449, ed. F. Rothlauf et al., 539-544 (Berlin: Springer-Verlag, 2005).
  11. E. V. Lubenov and A. G. Siapas, "Decoupling through Synchrony in Neuronal Circuits with Propagation Delays," in Neuron 58, no. 1 (2008): 118-131.