Romain Brasselet, Tuebingen, Germany
Metrical Information Theory and Metrical Analysis of Microneurography data
In order to study neurotransmission, we propose an extension of Shannon information, dubbed metrical information, that explicitly embeds the metrical relations between signals through a spike train metric. The notion of information is thus no longer purely topological but depends on the particular geometry of spike train responses. The metric is interpreted as a projection of the properties of the decoder onto the space of spike trains; it thus allows one to predict the optimal parameters with which the neurons that receive these signals transmit a maximal amount of information. We apply this method to microneurography recordings of fingertip mechanoreceptors using the Victor-Purpura spike train metric. We show that after a few tens of milliseconds, it is possible to reach a maximal metrical information.
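For readers unfamiliar with it, the Victor-Purpura distance has a compact dynamic-programming formulation; the sketch below is an illustrative reimplementation (names and conventions are ours, not the authors' code):

```python
# Illustrative dynamic-programming sketch of the Victor-Purpura distance.

def victor_purpura(a, b, q):
    """Minimal cost of transforming spike train a into spike train b.

    Allowed edits: delete or insert a spike (cost 1 each), or shift a
    spike by dt (cost q * |dt|). q sets the temporal sensitivity.
    """
    n, m = len(a), len(b)
    # D[i][j]: distance between the first i spikes of a and first j of b
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = float(i)              # delete all i spikes
    for j in range(1, m + 1):
        D[0][j] = float(j)              # insert all j spikes
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(
                D[i - 1][j] + 1.0,                               # delete a[i-1]
                D[i][j - 1] + 1.0,                               # insert b[j-1]
                D[i - 1][j - 1] + q * abs(a[i - 1] - b[j - 1]),  # shift
            )
    return D[n][m]
```

Setting the cost parameter q to zero reduces the distance to a pure spike-count (rate) comparison, while large q makes any shift more expensive than deleting and re-inserting, yielding a coincidence-like comparison.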
We then design a new spike train distance inspired by the processing of signals in real neurons. It is based on a convolution of spike trains with decaying exponentials, followed by a sigmoidal filtering of the convolution, so as to obtain a time-dependent quantity akin to the probability of firing. We show that, with appropriate parameters, this distance makes the space of population spike trains isometric to the stimulus space. This metrical organization can be seen as a basis for the ability of the central nervous system to generalize to previously unknown stimulations. In addition, it suggests a new view of the neural code as invariant under transformations that preserve the geometry of the spiking-signal space.
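The distance described here can be sketched in a few lines; the kernel time constant, sigmoid threshold and slope below (tau, theta, beta) are hypothetical placeholders for the parameters the abstract leaves unspecified:

```python
import numpy as np

def filtered_trace(spikes, t, tau=0.01, theta=0.5, beta=10.0):
    """Convolve a spike train with exp(-t/tau), then filter sigmoidally
    to obtain a time-dependent firing-probability-like trace."""
    u = np.zeros_like(t)
    for s in spikes:
        m = t >= s
        u[m] += np.exp(-(t[m] - s) / tau)                  # decaying exponential
    return 1.0 / (1.0 + np.exp(-beta * (u - theta)))       # sigmoidal filtering

def spike_distance(spikes_a, spikes_b, t, **kw):
    """L2 distance between the filtered traces on the time grid t."""
    dt = t[1] - t[0]
    diff = filtered_trace(spikes_a, t, **kw) - filtered_trace(spikes_b, t, **kw)
    return np.sqrt(np.sum(diff ** 2) * dt)
```

The sigmoid saturates the summed kernel, so the resulting traces behave like bounded firing probabilities rather than raw filtered spike counts.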
Daniel Chicharro, Barcelona, Spain
What spike train distances can tell us about the neural code
A particular class of measures designed to study the neural code are time scale parametric spike train distances [1, 2]. These spike train distances have in common that they depend on some parameter that determines the temporal scale in the spike trains up to which the distance is sensitive. In the two extremes of the parameter range, the distances reflect the assumption of a rate code and a coincidence code, respectively.
Time scale parametric spike train distances are often applied to study neural stimulus discrimination. A discrimination performance is obtained by combining a measure of spike train similarity with a classifier that assigns each spike train to the stimulus most likely to have elicited it. The discriminative precision is determined as the time scale for which the discrimination performance is maximized. The relevance of temporal coding is evaluated by comparing the optimal discrimination performance with the one achieved when assuming a rate code.
We here consider the specificity and the interpretation of the measures quantifying the discrimination performance, the discriminative precision, and the relevance of temporal coding. In particular, we show that the quantification of the discrimination performance is more robust than the other two. Furthermore, we evaluate the information these quantities provide about the neural code, and discuss the interpretation of the discriminative precision in terms of the time scales relevant for encoding.
According to our results, the time scale parametric nature of these measures is mainly an advantage in the sense that it allows maximizing the mutual information across a whole set of measures with different sensitivities. This is in contrast to the view that their main advantage is the possibility to calculate the discriminative precision and examine temporal coding. Therefore, to find the maximal mutual information, more elaborate classifiers could be employed. Similarly, for spike trains which contain more than one relevant time scale, time scale adaptive spike train distances could be more appropriate to obtain the maximal mutual information.
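The role of the time scale parameter can be illustrated with the van Rossum distance [2], evaluated across a range of time constants; this minimal discretized sketch (our own, with arbitrary example spike times) shows the rate-to-coincidence continuum that such analyses optimize over:

```python
import numpy as np

def van_rossum(a, b, tau, t_max=1.0, dt=0.001):
    """Discretized van Rossum distance with time constant tau."""
    t = np.arange(0.0, t_max, dt)
    def trace(spikes):
        f = np.zeros_like(t)
        for s in spikes:
            m = t >= s
            f[m] += np.exp(-(t[m] - s) / tau)    # exponential filtering
        return f
    return np.sqrt(np.sum((trace(a) - trace(b)) ** 2) * dt / tau)

# Scanning tau moves the measure from coincidence-like (small tau) to
# rate-like (large tau): two slightly jittered trains look very different
# at small tau and nearly identical at large tau.
taus = [0.001, 0.01, 0.1, 1.0]
profile = [van_rossum([0.10, 0.50], [0.11, 0.52], tau) for tau in taus]
```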
1. Victor JD and Purpura KP. J Neurophysiol 76:1310-1326 (1996).
2. van Rossum MCW. Neural Comput 13:751-763 (2001).
3. Victor JD. Curr Opin Neurobiol 15:585-592 (2005).
4. Kreuz T, Chicharro D, Greschner M, Andrzejak RG. J Neurosci Methods 195:92-106 (2011).
Daniel Gardner, New York, NY, USA
Enabling wider adoption of contemporary spike-train analyses
Analyses of neural coding require multiple methods, because neural systems use many kinds of representations, and many analytic methods require specific types or amounts of data. To make contemporary but not widely known methods available to the broad neuroscience community, we developed and released, open source via neuroanalysis.org, the downloadable STAToolkit suite of information-theoretic algorithms, together with guidance for algorithmic development, use, and applicability to neural systems (Goldberg et al., Neuroinformatics 7:165-178, 2009).
STAToolkit, with over 2,000 downloads, contains multiple information and entropy estimation methods adapted from colleagues and the literature. Examples and demonstrations inform and guide neurophysiologists in selecting STAToolkit methods.
Recently, a new and very affordable technology has become available: inexpensive massively parallel drop-in cards adapted from graphics processing units (GPUs), providing supercomputer-level performance of 500 GFLOPS to 1 TFLOPS. Although this novel architecture is compatible with the standards and goals of high-performance scientific computing and highly appropriate to the analytic needs of high-impact neurophysiology, it is not yet in general use, because software that properly leverages this hardware for multielectrode analyses is not yet available. A parallel workshop explores these advanced techniques.
We are beginning development of such software, to be delivered open source as our new Neurophysiology Extended Analysis Tool. NEAT will adapt code from STAToolkit and other essential analytics so that they leverage this new GPU-parallel technology, enabling a wide range of multielectrode labs to perform high-throughput informative analyses at very low cost.
STAToolkit and NEAT are existing and in-development components of neuroanalysis.org, itself one component of an integrated set of resources designed to aid wider utilization of neuroinformatic tools and capabilities by the computational and systems neuroscience communities. Complementing these, our neurodatabase.org provides access to a range of actual datasets from multiple neurophysiological preparations, along with a virtual oscilloscope for examination of datasets and extensive descriptive metadata aiding application of analytic techniques.
Initial phases supported by Human Brain Project/Neuroinformatics MH068012 from NIMH, NINDS, NIA, NIBIB and NSF and MH057153 from NIMH and NINDS.
Sonja Gruen, RIKEN, Japan
The Challenge of Assembly Detection in Massively Parallel Spike Trains
The assembly hypothesis (Hebb, 1949) implies that entities of thought or perception are represented by the coordinated activity of (large) neuronal groups. However, whether or not the dynamic formation of cell assemblies constitutes a fundamental principle of cortical information processing remains a controversial issue of current research. While initially mainly technical problems limited the experimental search for support of the assembly hypothesis, the recent advent of multi-electrode array recordings reveals fundamental shortcomings of the available analysis tools.
Although larger samplings of simultaneous recordings from cortical tissue are expected to ease the observation of assembly activity, they also imply an increase in the number of parameters to be estimated. It is usually infeasible to simply extend existing methods (e.g. Unitary Event analysis, Grün et al, 2002a,b) to massively parallel data, due to a combinatorial explosion and a lack of reliable statistics if individual spike patterns are considered. Because of limitations in the length of experimental data, in particular with respect to stationarity, not all parameters of the full system can be estimated. New concepts are therefore needed, and we have developed new routes that allow the analysis of massively parallel (a hundred or more) spike trains for correlated activities. A meaningful interpretation of the resulting multivariate data, however, presents a serious challenge. In particular, the estimation of higher-order correlations that characterize the cooperative dynamics of groups of neurons is impeded by the combinatorial explosion of the parameter space. In this talk I will review the approaches to correlation analysis of massively parallel spike trains we have developed in recent years, and discuss their underlying assumptions and applicability.
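One of the scalable approaches listed below, the complexity distribution (Louis, Borgelt, Grün 2010), is simple to sketch: bin all parallel trains and histogram, per time bin, the number of neurons that fired. Bin size and the toy usage are our assumptions, not the published implementation:

```python
import numpy as np

def complexity_distribution(spike_trains, t_max, bin_size):
    """Histogram, over time bins, of how many neurons fired per bin."""
    n_bins = int(np.ceil(t_max / bin_size))
    counts = np.zeros(n_bins, dtype=int)
    for train in spike_trains:
        fired = np.zeros(n_bins, dtype=bool)
        idx = np.minimum((np.asarray(train) / bin_size).astype(int), n_bins - 1)
        fired[idx] = True                # at most one count per neuron per bin
        counts += fired
    return np.bincount(counts, minlength=len(spike_trains) + 1)
```

An excess of high complexities relative to the prediction for independent neurons hints at assembly activity without enumerating individual spike patterns.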
Berger, Borgelt, Louis, Morrison, Grün (2010). Efficient Identification of Assembly Neurons within Massively Parallel Spike Trains. Computational Intelligence and Neuroscience Vol. 2010, Article ID 439648, 18 pages.
Berger, Warren, Normann, Arieli, Grün. Spatially organized spike correlation in cat visual cortex. Neurocomputing 70, 2112-2116 (2007)
Grün, Abeles, Diesmann (2008) Impact of higher-order correlations on coincidence distributions of massively parallel data. Lecture Notes in Computer Science, 5286, 96-114
Louis, Borgelt, Grün (2010) Complexity distribution as a measure for assembly size and temporal precision. Neural Networks, 23: 705-712
Schrader, Grün, Diesmann, Gerstein (2008) Detecting synfire chain activity using massively parallel spike train recording. Journal of Neurophysiology 100(4), 2165-2176
Shimazaki, Amari, Brown, Grün (2009) State-space analysis on time-varying correlations in parallel spike sequences. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 3501-3504
Staude, Rotter, Grün (2010) CuBIC: cumulant based inference of higher-order correlations in massively parallel spike trains, J Comput Neurosci, 29 (1-2): 327-350, DOI: 10.1007/s10827-009-0195-x
Staude, Grün, Rotter (2010) Higher-order correlations in non-stationary parallel spike trains: statistical modeling and inference. Front. Comput. Neurosci. 4:16. doi:10.3389/fncom.2010.00016
Richard Naud, Lausanne, Switzerland
Improved similarity measures for small sets of spike trains
Multiple measures have been developed to quantify the similarity between two spike trains. These measures have been used for the quantification of the mismatch between neuron models and experiments as well as for the classification of neuronal responses in neuroprosthetic devices or electrophysiological experiments. Frequently, only a few spike trains are available in each class. We show that existing spike train similarity measures are strongly biased if the number of samples per class is small. We propose improved bias-corrected measures and show that these can be successfully used for fitting neuron models to experimental data and for objective comparisons of spike trains. Our approach enables us to derive bias-corrected versions of the gamma coincidence factor of Kistler, the Victor-Purpura metric and the van Rossum distance. We demonstrate that when similarity measures are used for fitting mathematical models, all previous methods systematically underestimate the noise unless bias-correction is performed. Finally, we show a striking implication of the small-sample bias by re-evaluating the results of the Single-Neuron Prediction Challenge.
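For orientation, the uncorrected gamma coincidence factor of Kistler can be sketched as follows; the bias-corrected variants proposed here are not reproduced, and the normalization convention below is one of several in use:

```python
def gamma_factor(data, model, delta, duration):
    """Uncorrected gamma coincidence factor for two sorted spike trains.

    A model spike counts as coincident if it lies within +/- delta of a
    data spike; the chance level assumes a Poisson train at the data rate.
    """
    n_coinc = 0
    j = 0
    for s in model:                          # one sweep over both trains
        while j < len(data) and data[j] < s - delta:
            j += 1
        if j < len(data) and abs(data[j] - s) <= delta:
            n_coinc += 1
    rate = len(data) / duration
    expected = 2.0 * delta * rate * len(model)   # chance coincidences
    norm = 1.0 - 2.0 * delta * rate
    return (n_coinc - expected) / (norm * 0.5 * (len(data) + len(model)))
```

With only a few spike trains per class, the empirical value of such a measure deviates systematically from its expectation, which is the small-sample bias this abstract addresses.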
Ralph G. Andrzejak, Barcelona, Spain
Detecting directional couplings from spike trains
A reliable detection of the direction of interactions between neurons from spike trains appears essential for an advanced understanding of neuronal information processing. For continuous time series so-called nonlinear interdependence measures can be used to detect directional couplings between the underlying dynamical systems. However, these measures cannot readily be applied to point processes such as spike trains. On the other hand, spike train distances can be used to quantify the dissimilarity between spike trains. These, in turn however, do not allow extracting the direction of the interaction.
Here we propose a combination of nonlinear interdependence measures and spike train distances which allows detecting directional couplings from spike trains. Using different examples of deterministic point processes (such as uni-directionally coupled Hindmarsh-Rose neurons) we show that our approach detects directional couplings with a high sensitivity. By means of a simple permutation test we demonstrate that our approach is also specific. Furthermore, we show that this specificity is not affected by asymmetries in the structure of the two systems.
We expect this novel approach to be of wide applicability for the study of neuronal spike trains.
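The combination described above can be sketched with a simple rank statistic on precomputed spike train distance matrices; the function name, normalization and choice of k are our assumptions, not the authors' measure:

```python
import numpy as np

def mean_cross_rank(DX, DY, k=3):
    """Mean DY-rank of the k nearest DX-neighbors of each segment.

    DX and DY are pairwise distance matrices (e.g. from a spike train
    distance) over simultaneously observed segments of two neurons.
    Values near (k+1)/2 indicate that closeness in X implies closeness
    in Y; the chance level is about n/2. Comparing the two directions
    (rank of X-neighbors under DY versus Y-neighbors under DX) is what
    probes the asymmetry, and hence the direction, of the coupling.
    """
    n = len(DX)
    total = 0.0
    for i in range(n):
        nn_x = np.argsort(DX[i])[1:k + 1]        # nearest X-neighbors, self excluded
        ranks_y = np.argsort(np.argsort(DY[i]))  # rank of every segment under DY[i]
        total += ranks_y[nn_x].mean()
    return total / n
```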
Andrew Bogaard, Boston, MA, USA
Current relevance of the cross correlation and secondary analyses
The cross correlation is a widely recognized operation in the field of signal processing, and is often applied in neuroscience to analyze the relationships between neuronal spike trains. Despite the frequency of its use, studies continue to describe different methods for the calculation of the cross correlation and its variants, some of which are more effective or efficient than others. Moreover, the theoretical relevance of the cross correlation is changing as the development of secondary analyses continues. For example, the cross correlogram of the spike trains of two distinct cells recorded simultaneously is useful in determining functional connectivity, modulation by shared network rhythms, and cell identification. The autocorrelation of a single spike train has been used to extract burst frequency [1, 2, 5], to infer phase precession, and to reveal behavioral correlates that are more subtle than the straightforward rate coding shown in tuning functions of encoded variables such as head direction.
We review an efficient algorithm for the calculation of the cross correlation for arbitrarily small bin sizes and long recording durations, which also extends to spikes that occur during multiple discontinuous epochs. We also review conditions for normalization and thereby propose guidelines for publishing cross correlation analyses. Finally, we review current secondary analyses and interpretations of the cross correlation and autocorrelation by showing the results of different metrics for simulated spike trains and data.
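A minimal version of such an algorithm, using binary search on the sorted reference train so the cost stays near O((Na+Nb) log Nb), might look as follows (bin size, window and normalization are choices that should be reported, as argued above):

```python
import numpy as np

def cross_correlogram(train_a, train_b, window=0.1, bin_size=0.005):
    """Histogram of time lags (b - a) within +/- window, from spike times."""
    train_b = np.sort(np.asarray(train_b, dtype=float))
    edges = np.arange(-window, window + bin_size, bin_size)
    counts = np.zeros(len(edges) - 1, dtype=int)
    for s in train_a:
        # only b-spikes inside the window can contribute lags
        lo = np.searchsorted(train_b, s - window, side='left')
        hi = np.searchsorted(train_b, s + window, side='right')
        counts += np.histogram(train_b[lo:hi] - s, bins=edges)[0]
    return edges, counts
```

Restricting the outer loop to spikes inside valid recording epochs extends the same scheme to discontinuous data.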
1. Brandon MP, Bogaard AR, Libby CP, Connerney MA, Gupta K, Hasselmo ME. (2011) Reduction of theta rhythm dissociates grid cell spatial periodicity from directional tuning. Science; 332(6029):595-9.
2. Jeewajee A, Barry C, O'Keefe J, Burgess N (2008) Grid cells and theta as oscillatory interference: electrophysiological data from freely moving rats. Hippocampus 18(12):1175-85.
3. Mizuseki, K., Sirota, A., Pastalkova, E., and Buzsaki, G. (2009). Theta oscillations provide temporal windows for local circuit computation in the entorhinal-hippocampal loop. Neuron 64, 267-280.
4. O'Keefe J, Recce ML (1993) Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 3(3):317-30.
5. Royer S, Sirota A, Patel J, Buzsáki G. (2010) Distinct representations and theta dynamics in dorsal and ventral hippocampus. J Neurosci.30(5):1777-87.
6. Welday AC, Shlifer IG, Bloom ML, Blair HT, Zhang K (2010) Cosine directional tuning of theta cell burst frequencies in anterior thalamus: Evidence for an oscillatory path integration circuit in the rat brain, Neuroscience Meeting Planner, Society for Neuroscience 2010. 203.20/KKK63.
7. Wierzynski, Casimir M.; Lubenov, Evgueniy V.; Gu, Ming; Siapas, Athanassios G. (2009) State-Dependent Spike-Timing Relationships between Hippocampal and Prefrontal Circuits during Sleep Neuron 10.1016.
Matthias H. Hennig, Edinburgh, UK
Variability in large neural populations: insights from high density multielectrode array recordings
A natural connection exists between the dynamical state of a neural network and its sensitivity to inputs and capacity for information processing or storage. For instance, a prevailing hypothesis is that neural populations of a given cell type are relatively homogeneous in their properties, and that a dynamic balance between excitation and inhibition is maintained as a stable regime [e.g. 1, 2]. So far, however, an experimental investigation of such ideas has been limited by the small number of neurons that could be recorded simultaneously. Here, we present an analysis of the spiking activity of thousands of neurons recorded with the 4,096-channel APS array [3, 4]. Activity was recorded from two preparations: cortical cell cultures grown on the array, and the intact developing neonatal mouse retina at different developmental stages. The analysis of spike train metrics such as the Fano and Allan factors reveals a substantial degree of variability between individual neurons in both preparations. Neurons with high average firing rates consistently show strong signs of firing rate modulation, whereas neurons with low average rates behave more Poisson-like. We further show that the Allan factor can be used to distinguish between regular and bursting activity, and to estimate the frequency and duration of such bursts. Finally, we present an analysis of culture recordings with second-order maximum entropy models. These models suggest that the functional connectedness of neurons changes in a distance-dependent manner, a property that is not detectable by analysing pairwise correlations directly. In summary, high-density MEA recordings have allowed us, for the first time, to directly analyse the neural dynamics of complete neural circuits in vitro.
The substantial variability of neurons found in these preparations may be the result of general principles underlying the organisation of neural circuits, since it is reminiscent of the neural diversity found in other intact systems such as the primary visual cortex.
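For reference, the two spike train metrics used above can be sketched as follows (window length T and the toy data in the usage are our choices):

```python
import numpy as np

def window_counts(spikes, t_max, T):
    """Spike counts N_i in contiguous windows of length T."""
    edges = np.arange(0.0, t_max + T, T)
    return np.histogram(spikes, bins=edges)[0]

def fano_factor(spikes, t_max, T):
    n = window_counts(spikes, t_max, T)
    return n.var() / n.mean()                            # Var(N) / E[N]

def allan_factor(spikes, t_max, T):
    n = window_counts(spikes, t_max, T)
    return np.mean(np.diff(n) ** 2) / (2.0 * n.mean())   # E[(N_{i+1}-N_i)^2] / (2 E[N])
```

A Poisson process gives values near 1 for both; regular firing pushes them toward 0 and burstiness well above 1, which is the basis for separating regular from bursting activity.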
 M. Tsodyks and T. Sejnowski, Network 6, 111 (1995)
 Y. Shu, A. Hasenstaub and D. A. McCormick, Nature 423, 288 (2003)
 K. Imfeld, S. Neukom, A. Maccione, Y. Bornat, S. Martinoia, P.A. Farine, M. Koudelka-Hep, L. Berdondini, IEEE Trans Biomed Eng 55, 2064 (2008)
 L. Berdondini, K. Imfeld, A. Maccione, M. Tedesco, S. Neukom, M. Koudelka-Hep, S. Martinoia, Lab Chip 9, 2644 (2009)
 E. Schneidman, W. Bialek, Nature 440,1007 (2006)
 C. Monier, F. Chavane, P. Baudot, L.J. Graham, Y. Frégnac, Neuron 37,663 (2003)
Mikail Rubinov, Sydney, Australia
Maximized directed information transfer in critical neuronal networks
Critical dynamics in complex systems emerge at the transition from random to ordered dynamics and are characterized by power-law distributions of spatial and temporal properties of system events. The occurrence of critical dynamics in neuronal networks is increasingly supported by multielectrode array recordings of spontaneous activity in organotypic cortical slice cultures. System events in these neuronal networks are typically defined as activations of neuronal ensembles, or “neuronal avalanches”. Interestingly, studies associate critical neuronal network avalanche dynamics with optimized information transfer [1, 2]. However, studies have not previously examined the directed nature of information transfer in these networks.
Here, we present three novel transfer entropy-based measures of directed information transfer between neuronal avalanches. Our measures compute the amount of predictive information present in the avalanche properties (avalanche size, avalanche duration and inter-avalanche period) of a source region about the avalanche properties of a destination region, and are suitable for detecting information transfer at multiple spatial scales, from individual neurons to neuronal ensembles. We apply these measures to compute directed information transfer in large, sparse, modular networks of leaky integrate-and-fire neurons with spike timing-dependent synaptic plasticity and axonal conduction delays. We characterize dynamics in our networks by distributions of neuronal avalanches and assess these distributions for power-law scaling. We compute directed information transfer between the two halves of each network, and normalize this transfer by the null-model transfer, generated by randomly rotating avalanche times for each avalanche vector, thereby destroying any relationships present between groups of vectors.
Dynamics in our networks change from subcritical to critical to supercritical as the modular topology of the networks is progressively randomized. All three measures peak at criticality in all examined networks and hence show that directed neuronal information transfer is maximized at criticality in our model. These findings pave the way for the application of our measures to empirical multielectrode recording data.
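The underlying transfer entropy (Schreiber 2000) can be sketched for two discrete symbol sequences, e.g. coarsely binned avalanche-size sequences of a source and a destination region; the measures presented here embed several avalanche properties and are more elaborate than this order-1 plug-in estimate:

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Plug-in transfer entropy x -> y in bits, with order-1 histories."""
    triples = Counter(zip(y[1:], y[:-1], x[:-1]))   # (y_{t+1}, y_t, x_t)
    pairs_yx = Counter(zip(y[:-1], x[:-1]))
    pairs_yy = Counter(zip(y[1:], y[:-1]))
    singles = Counter(y[:-1])
    n = len(x) - 1
    te = 0.0
    for (y1, y0, x0), c in triples.items():
        p_joint = c / n
        p_full = c / pairs_yx[(y0, x0)]              # p(y_{t+1} | y_t, x_t)
        p_self = pairs_yy[(y1, y0)] / singles[y0]    # p(y_{t+1} | y_t)
        te += p_joint * np.log2(p_full / p_self)
    return te
```

The measure is positive exactly when knowing the source symbol improves prediction of the destination's next symbol beyond the destination's own past.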
1. Beggs JM, Plenz D (2003) Neuronal avalanches in neocortical circuits. J Neurosci 23(35): 11167-11177.
2. Shew WL, Yang H, Yu S, Roy R, Plenz D (2011) Information capacity and transmission are maximized in balanced cortical networks with neuronal avalanches. J Neurosci 31:55-63.
3. Schreiber T (2000) Measuring Information Transfer. Phys Rev Lett 85:461–464.
Juan Carlos Vasquez, Sophia-Antipolis, France
Analysis of Spike-train Statistics with Gibbs Distributions: Theory, Implementation and Applications
We propose a generalization of the existing maximum entropy models used for spike train statistics analysis. We provide a simple method to estimate Gibbs distributions, generalizing existing approaches based on the Ising model or one-step Markov chains to arbitrary parametric potentials. Our method makes it possible to take memory effects in the dynamics into account. It directly provides the Kullback-Leibler divergence between the empirical statistics and the statistical model. It does not assume a specific form of the Gibbs potential and does not require the assumption of detailed balance. Furthermore, it enables the comparison of different statistical models and offers control of the finite-size sampling effects inherent to empirical statistics by using large-deviations results. A numerical validation of the method is proposed, and applications to biological data from multi-electrode recordings of retinal ganglion cells are presented.
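As a memoryless special case of the model comparison described here, one can estimate the Kullback-Leibler divergence between the empirical distribution of binary spike words and the simplest Gibbs model, independent neurons; this sketch is ours and omits the memory effects that are the point of the proposed method:

```python
import numpy as np
from collections import Counter

def kl_empirical_vs_independent(words):
    """KL divergence (bits) between the empirical distribution of binary
    spike words and the independent-neuron (Bernoulli) Gibbs model."""
    words = np.asarray(words)
    n = len(words)
    emp = Counter(map(tuple, words))           # empirical word probabilities
    rates = words.mean(axis=0)                 # per-neuron firing probabilities
    kl = 0.0
    for w, c in emp.items():
        p = c / n
        q = np.prod([r if b else 1.0 - r for b, r in zip(w, rates)])
        kl += p * np.log2(p / q)
    return kl
```

A large divergence signals correlation structure beyond firing rates, which would motivate richer potentials (pairwise terms, memory) in the Gibbs hierarchy.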