# CNS*2013

# New approaches to spike train analysis and neuronal coding

### Call for contributed talks (closed)

Our workshop has been scheduled for July 17, 2013.

There are a few (2-4) slots available for shorter contributed talks (15+5 min).

Please send applications (including title and abstract) to the organizers:

Conor Houghton (conor.houghton@bristol.ac.uk) and

Thomas Kreuz (thomas.kreuz@cnr.it)

Deadline is May 13, 2013 (which is also the workshop deadline for CNS).

*** The call is now closed. ***

### General information

Twenty-second Annual Computational Neuroscience Meeting CNS 2013

July 13, 2013: Tutorials

July 14-16, 2013: Main meeting

July 17/18, 2013: Workshops

July 17, 2013: This workshop

### Scope and objective

Spike trains are central to signaling and computation in the brain. They are frequently the data collected in neuroscientific experiments, and recent advances in electrophysiological techniques mean that they are now being recorded across large neuronal populations and from synaptically connected neurons. As a consequence, describing and analyzing spike trains and quantifying their properties is a common challenge, and one that is important in our efforts to understand how the brain codes, integrates, and processes information. Nonetheless, spike train analysis remains difficult: even immediate questions, such as the degree to which spike trains carry a temporal or a rate code, are not only difficult to answer, they are difficult to ask in an unambiguous way.

The purpose of this workshop is to discuss how different approaches, such as measures of spike train (dis)similarity and methods from information theory, can be used to define quantitative properties of neuronal signaling. Such properties could be used to analyze the large quantities of experimental data now available in a way that would help specify and address questions about neuronal coding and processing. Contributions include experimental and theoretical studies, data analysis, and modeling.

### Workshop format

The workshop will include presentations by seven invited speakers (35 + 5 min each). In addition, three slots have been assigned for contributed talks (15 + 5 min). Ample time for discussion will be provided at the end.

### Contributed speakers (15 + 5 min)

Viola Priesemann, Taşkın Deniz, and Adrià Tauste Campo

### Schedule (abstracts can be found below)

| Time | Speaker | Title |
| --- | --- | --- |
| 09:00-09:10 | Conor Houghton | Introduction |
| 09:10-09:50 | Jonathan Victor | High-order image statistics: Regularities in natural scenes, perception, and neural coding |
| 09:50-10:30 | Friederice Pirschel | Small is beautiful – encoding of tactile stimuli in the leech CNS |
| 10:30-10:50 | | Coffee break |
| 10:50-11:30 | Florian Mormann | Measuring spike-field coherence and spike train synchrony |
| 11:30-12:10 | Rodrigo Quian Quiroga | Extracting information in time patterns and correlations with wavelets |
| 12:10-12:30 | Viola Priesemann | Learning more by sampling less: Subsampling advances model selection |
| 12:30-14:00 | | Lunch break |
| 14:00-14:40 | Il Memming Park | Kernel methods for spike trains |
| 14:40-15:10 | Conor Houghton | Averaging spike trains |
| 15:10-15:30 | Taşkın Deniz | Kernel Density Estimation via Diffusion: Applications to Spike Train Analysis |
| 15:40-16:00 | | Coffee break |
| 16:00-16:40 | Ralph G. Andrzejak | Detecting directional couplings between spiking signals and time-continuous signals |
| 16:40-17:00 | Adrià Tauste Campo | Estimation of directed information between simultaneous spike trains in decision making |
| 17:00- | Conor Houghton | Discussion |

### Workshop organizers

Conor Houghton

Department of Computer Science

University of Bristol

Bristol, UK

+353-1-896-3542 (w)

+353-1-896-2282 (f)

+353-87-9923877 (m/c)

conor.houghton@bristol.ac.uk

http://www.maths.tcd.ie/~houghton/

Thomas Kreuz

Institute for Complex Systems (ISC)

National Research Council (CNR)

Florence, Italy

+39-055-522-6630 (w)

+39-055-522-6683 (f)

+39-349-0748506 (m/c)

thomas.kreuz@cnr.it

### Abstracts

### Ralph Andrzejak, Barcelona, Spain

Detecting directional couplings between spiking signals and time-continuous signals

Simultaneous recordings comprising spiking times of neurons and time-continuous variables such as the electroencephalogram or the local field potential are becoming increasingly available. The characterization of causal interactions from such hybrid electrophysiological signals can be key to an advanced understanding of the underlying neuronal dynamics. We therefore recently introduced a unified approach to characterize directional couplings between point processes, between flows, as well as between point processes and flows [1]. For this purpose we showed and exploited the generality of the asymmetric state similarity conditioning principle. We used Hindmarsh-Rose neuron models and Lorenz oscillators to illustrate the high sensitivity and specificity of our approach. We here review this novel methodology and discuss possible applications to recordings from epilepsy patients.

Reference

[1] R.G. Andrzejak and T. Kreuz

Characterizing unidirectional couplings between point processes and flows

EPL (Europhysics Letters) 96, 50012 (2011).

### Florian Mormann, Bonn, Germany

Measuring spike-field coherence and spike train synchrony

Extracellular microelectrode recordings of brain activity comprise continuous oscillatory background activity (local field potential, LFP) as well as discrete series of action potentials (spikes), both contained in the same recorded signal. In order to characterize phase relationships between discrete spike trains and ongoing oscillatory field potentials, it is necessary to minimize the influence of the individual spikes on the spectral properties and instantaneous phase of the LFP signal. We propose a methodology to compare various spike removal techniques and assess their robustness with respect to type 1 and type 2 errors [1].

Another important task in neuroscience is to estimate the degree of synchrony or reliability between two or more spike trains. In recent years, many different methods have been proposed that typically compare the timing of spikes on a certain pre-defined time scale. We here propose a parameter-free and timescale-independent measure of spike train synchrony, termed the SPIKE-distance [2], that allows us to track changes in instantaneous clustering, i.e., time-localized patterns of (dis)similarity among multiple spike trains. Additional new features include selective and triggered temporal averaging as well as the instantaneous comparison of spike train groups. The SPIKE-distance can also be defined such that the instantaneous values of dissimilarity rely on past information only, so that time-resolved spike train synchrony can be estimated in real time.
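To convey the flavor of such parameter-free, time-resolved measures, the sketch below implements the simpler ISI-distance from the same family (Kreuz et al., J Neurosci Methods 2007) rather than the SPIKE-distance itself; the sampling grid and example trains are illustrative choices of ours.

```python
from bisect import bisect_right

def current_isi(spikes, t):
    """Length of the interspike interval containing time t.
    `spikes` is a sorted list covering the interval of interest."""
    i = bisect_right(spikes, t)
    i = min(max(i, 1), len(spikes) - 1)
    return spikes[i] - spikes[i - 1]

def isi_profile(x, y, times):
    """Time-resolved ISI dissimilarity in [0, 1): 0 means identical
    instantaneous firing rates, values near 1 very different ones."""
    profile = []
    for t in times:
        xi, yi = current_isi(x, t), current_isi(y, t)
        profile.append(abs(xi - yi) / max(xi, yi))
    return profile

times = [0.1 * k for k in range(1, 10)]    # sample points in (0, 1)
a = [0.0, 0.2, 0.4, 0.6, 0.8, 1.0]         # regular train, ISI = 0.2
b = [0.0, 0.5, 1.0]                        # regular train, ISI = 0.5
same = isi_profile(a, a, times)            # identically zero
diff = isi_profile(a, b, times)            # constant |0.2 - 0.5| / 0.5 = 0.6
```

Averaging the profile over time yields a single dissimilarity value; no bandwidth or bin size is ever chosen.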

References

1. Zanos TP, Mineault PJ, Pack CC. Removal of spurious correlations between spikes and local field potentials. J Neurophysiol 105:474-86 (2010)

2. Kreuz T, Chicharro D, Houghton C, Andrzejak RG, Mormann F. Monitoring spike train synchrony. J Neurophysiol 109:1457-72 (2013)

### Il Memming Park, Austin, Texas, USA

Kernel methods for spike trains

The space of spike trains lacks natural algebraic structure, which makes it difficult to directly apply conventional machine learning techniques. We propose to embed spike trains in a Hilbert space using positive definite spike train kernels. This allows us to use kernel methods, a wide range of powerful machine learning tools including regression, classification, dimensionality reduction, filtering, and hypothesis testing. We design new spike train kernels that capture prior notions of similarity between spike trains and yet support provably nonparametric inference. These spike train kernels can distinguish arbitrary statistical differences (by defining a divergence) and allow recovery of arbitrary functional relationships for regression and filtering. We show several applications in detail to demonstrate how spike train kernels can be used to answer neuroscience questions.
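As a concrete illustration of the embedding idea, the sketch below implements one simple positive definite spike train kernel, the memoryless cross-intensity (mCI) kernel of Paiva and colleagues, together with the distance it induces in the Hilbert space. This is a minimal sketch with example trains of ours, not the new kernels the talk introduces.

```python
import math

def mci_kernel(x, y, tau=0.1):
    """Memoryless cross-intensity kernel: the inner product of two spike
    trains after smoothing each spike with a Laplacian kernel of width tau."""
    return sum(math.exp(-abs(xi - yj) / tau) for xi in x for yj in y)

def embedding_distance(x, y, tau=0.1):
    """Distance ||phi(x) - phi(y)|| induced by the Hilbert-space embedding."""
    d2 = mci_kernel(x, x, tau) + mci_kernel(y, y, tau) - 2.0 * mci_kernel(x, y, tau)
    return math.sqrt(max(d2, 0.0))   # clamp tiny negative rounding errors

a = [0.1, 0.3, 0.5]
b = [0.15, 0.35, 0.55]   # a slightly jittered copy of a
c = [0.8, 0.9]           # a very different train
d_ab = embedding_distance(a, b)   # small: a and b are similar
d_ac = embedding_distance(a, c)   # larger: a and c differ strongly
```

Once spike trains live in a Hilbert space like this, any kernel machine (SVMs, kernel regression, kernel PCA) can operate on them directly.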

Reference

Il Memming Park, Sohan Seth, António R. C. Paiva, Lin Li, José C. Príncipe.

Kernel methods on spike train space for neuroscience: a tutorial.

IEEE Signal Processing Magazine (in press, July 2013) [arXiv:1302.5964]

### Friederice Pirschel, Oldenburg, Germany

Small is beautiful – encoding of tactile stimuli in the leech CNS

We investigate how different neurons of the leech encode information about the properties of tactile stimuli. The small nervous system of the leech is well suited for neuroscientific questions about information processing in a neuronal network: its neurons are individually characterized and easily accessible. We use a semi-intact preparation to investigate a surprisingly precise behavior [1], the local bend response, in which the animal reacts to touch stimuli applied to its skin by bending its body locally away.

Sensory information is processed in the underlying network: spike trains of mechanosensory cells are integrated into graded signals by a layer of interneurons, which transfer the information to the motor neurons [2]. At each location on the skin, 4 to 6 mechanosensory cells respond to touch stimuli. With the aim of describing this minimal population code, we record intracellularly from two to three cells at the same time. Simultaneously, the skin is stimulated with a tactile stimulus that varies in location, intensity, and duration.

For data analysis, stimulus reconstruction is used to estimate these stimulus properties based on spike count, response latency and interspike intervals of the mechanosensory cell responses. Since action potentials of all mechanosensory cells lead to graded signals in the postsynaptic interneurons, stimulus reconstruction at this network level relies on integral and relative amplitude of interneuron EPSPs.

We can show for all cell types that several response features are influenced by the stimulus properties used in this study. To reliably estimate stimuli in a multi-dimensional feature space, responses of at least pairs of cells need to be combined.

References

1. Thomson EE, Kristan WB: Encoding and decoding touch location in the leech CNS. J Neurosci 2006, 26: 8009-8016.

2. Kristan WB, Calabrese RL, Friesen WO: Neuronal control of leech behavior. Progress in Neurobiology 2005, 76: 279-327.

### Jose Principe, Gainesville, FL, USA

Learning Multi-scale Neural Metrics via Entropy Minimization

In order to judiciously compare neural responses between repeated trials or stimuli, a well-suited distance metric is necessary. For multi-electrode recordings, a neural response is a spatio-temporal pattern, but the dimensions of space and time should not all be treated equally. In order both to understand which input dimensions are more discriminative and to improve classification performance, we propose a metric learning approach that can be used across scales. Here, multi-scale metrics or kernels are learned as weighted combinations of different metrics or kernels over all of the neural response's dimensions. Preliminary results are presented for a cortical recording from a rat during a tactile stimulation experiment. Metrics on both local field potential and spiking data are explored; the learned weighting reveals the important dimensions of the response, and the learned metrics improve classification performance.

### Rodrigo Quian Quiroga, Leicester, UK

Extracting information in time patterns and correlations with wavelets

We propose a method that combines the wavelet transform and information theory to extract information from time patterns in spike trains. We also show that the method can be used to denoise spike trains – i.e., to improve the visualization of the time patterns containing relevant information – and to quantify the amount of information in the correlated firing across neurons. We quantify performance using simulated datasets and show that the method outperforms principal component analysis. Finally, we demonstrate its utility on two real datasets: i) data from the monkey auditory cortex and ii) data from the rat barrel cortex.
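To give the flavor of wavelet denoising of a spike train, here is a toy sketch of ours using the simplest wavelet (the Haar transform) with hard thresholding of small coefficients; the binning, threshold, and example counts are assumptions for illustration, not the authors' method.

```python
def haar_forward(x):
    """Full Haar decomposition of a signal whose length is a power of two.
    Returns [overall mean, coarsest details, ..., finest details]."""
    coeffs, a = [], list(x)
    while len(a) > 1:
        d = [(a[i] - a[i + 1]) / 2 for i in range(0, len(a), 2)]
        a = [(a[i] + a[i + 1]) / 2 for i in range(0, len(a), 2)]
        coeffs = d + coeffs
    return a + coeffs

def haar_inverse(c):
    """Invert haar_forward exactly."""
    a, k = [c[0]], 1
    while k < len(c):
        a = [v for mean, d in zip(a, c[k:2 * k]) for v in (mean + d, mean - d)]
        k *= 2
    return a

counts = [0, 1, 0, 1, 5, 6, 5, 4]   # binned spike counts: a burst at the end
coeffs = haar_forward(counts)
# zero out small detail coefficients: keeps the burst, smooths fine jitter
denoised = haar_inverse([v if abs(v) >= 1.0 else 0.0 for v in coeffs])
roundtrip = haar_inverse(coeffs)    # exact reconstruction of the input
```

In practice the thresholding rule would be chosen by an information criterion rather than a fixed cutoff, which is where the information-theoretic part of the method comes in.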

### Jonathan Victor, New York, NY, USA

High-order image statistics: Regularities in natural scenes, perception, and neural coding

Barlow’s principle of efficient coding is a powerful framework for understanding the design principles of early sensory processing, especially in vision. Whether a similar notion applies to cortical processing is less clear, as there is no “bottleneck” comparable to the optic nerve, and much redundancy has already been removed by the front end. Here, we present convergent psychophysical and physiological evidence that this is the case: previously, we have shown that the high-order statistics (HOS’s) of natural scenes have specific kinds of regularities [1], and that this specificity corresponds to the kinds of HOS’s that had previously been shown to be visually salient [2]; here we identify the neural locus for this specificity.

We made multi-tetrode recordings in anesthetized macaque primary and secondary visual cortices, while presenting 320-ms snapshots of binary images with one of seven kinds of HOS: standard random binary checkerboards, two kinds of third-order structure, and four kinds of fourth-order structure. We determined whether responses of individual neurons distinguished between different kinds of HOS’s, and if so, at what latency. In V1 granular (input) layers, 25% of cells could distinguish any given HOS type from one of the others. The latency distribution was bimodal: one peak at 80 to 130 ms, and a later peak at 150 to 250 ms. Outside of the granular layers, a larger fraction of cells showed HOS sensitivity: about 45% of supragranular cells, and 55% of infragranular cells. In supragranular layers, latencies were similar to the early granular mode (80 to 110 ms), but had an earlier peak. In infragranular layers, latencies corresponded to the later granular mode. In V2, 70% of cells in the granular layer were sensitive to HOS’s, and this fraction increased further in the extragranular layers (>80%). V2 latencies were generally longer than V1 latencies, and more dispersed. In both V1 and V2, the pattern of neuronal selectivity was similar to human psychophysics: the HOS types that were visually salient were the ones that most often led to distinguishable neuronal responses.

Thus, neural selectivity for HOS’s appears first in V1 supragranular layers and later in its input layers, and then becomes more prominent in V2. The pattern of laminar dynamics suggests that the overall feedback architecture of the cortical microcircuit plays a critical role in these computations.

References:

1. Tkačik, G., Prentice, J., Victor, J.D., and Balasubramanian, V. (2010) Local statistics in natural scenes predict the saliency of synthetic textures. Proc. Natl. Acad. Sci. USA 107, 18149-18154.

2. Victor, J.D., and Conte, M.M. (1991) Spatial organization of nonlinear interactions in form perception. Vision Research 31, 1457-1488.

Collaborator:

Yunguo Yu

### Taşkın Deniz, Freiburg, Germany

Kernel Density Estimation via Diffusion: Applications to Spike Train Analysis

Kernel smoothing is a powerful methodology for gaining insight into data. It has wide applications in many different fields, ranging from economics to neuroscience. Its most important basic application in neuroscience is the estimation of time-dependent firing rates from neuronal spike trains. Traditionally, this is achieved with the PSTH (peri-stimulus time histogram) or, alternatively, by smoothing with a fixed kernel. The PSTH relies on the availability of multiple trials for averaging out trial-to-trial fluctuations. However, one can obtain a plausible estimate from a single trial as well, using kernel smoothing methods, where the bandwidth of the kernel is a parameter to be determined, in analogy to the bin size of the histogram. The form of the kernel is rather unimportant, provided it is smooth, unimodal, and normalized. Its bandwidth, in contrast, determines how smooth the resulting rate estimate will be (Nawrot et al., 1999). A suboptimal kernel may result in over- or under-smoothing, where the optimal kernel is defined by a minimal deviation from the true rate profile. For strongly changing Poisson rates, however, there may be no globally optimal kernel. As a remedy, one can optimize the estimate by locally adaptive bandwidth selection. To this end, Shimazaki and Shinomoto (2009) suggested a high-dimensional optimization of the MISE (mean integrated squared error) as a method of local bandwidth estimation. This method, although effective, is computationally very costly. Instead, we suggest applying a new method, kernel density estimation via diffusion (Botev et al., 2010). The diffusion method offers a fast, completely data-driven algorithm for local bandwidth selection, avoiding boundary bias and the normal reference rule. In addition to the local diffusion term in the Fokker-Planck equation, a local drift term helps estimate peaks and troughs of the rate profile.

In this talk, I will first explain the main idea of the diffusion method à la Botev et al. and compare it to its best-performing alternatives to give an account of its advantages and disadvantages. Second, I will present selected applications of the method in spike train analysis. Finally, I will discuss our assumptions about the underlying statistics and possible extensions.
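For reference, the fixed-bandwidth baseline described above is easy to state in code. The sketch below is that baseline (a Gaussian kernel with a single global bandwidth), not the adaptive diffusion estimator; the example spike times are ours.

```python
import math

def rate_estimate(spikes, times, bandwidth):
    """Fixed-bandwidth Gaussian kernel estimate of the instantaneous
    firing rate (in spikes per unit time) from a single spike train."""
    norm = 1.0 / (bandwidth * math.sqrt(2.0 * math.pi))
    return [norm * sum(math.exp(-0.5 * ((t - s) / bandwidth) ** 2)
                       for s in spikes)
            for t in times]

spikes = [5.0, 5.2, 6.0]                    # three spikes in a 10 s window
dt = 0.01
grid = [dt * k for k in range(0, 1001)]     # evaluation grid, 0 .. 10 s
rate = rate_estimate(spikes, grid, bandwidth=0.1)
total = sum(r * dt for r in rate)           # integrates to ~ the spike count
```

The diffusion method replaces the single `bandwidth` with a locally adapted one, so sharp rate transients and quiet periods are no longer smoothed by the same amount.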

References

1. Nawrot M, Aertsen A, Rotter S (1999) Single-trial estimation of neuronal firing rates - From single neuron spike trains to population activity. Journal of Neuroscience Methods 94(1): 81-92

2. Botev ZI, Grotowski JF, Kroese DP (2010) Kernel density estimation via diffusion. Annals of Statistics 38(5): 2916-2957

3. Shimazaki H, Shinomoto S (2009) Kernel bandwidth optimization in spike rate estimation. Journal of Computational Neuroscience 29(1-2): 171-182

4. Jones MC, Marron JS, Sheather SJ (1996) A Brief Survey of Bandwidth Selection for Density Estimation. Journal of the American Statistical Association 91(433): 401-407

### Viola Priesemann, Frankfurt, Germany

Learning more by sampling less: Subsampling advances model selection

When studying real world complex networks, one rarely has full access to all their components. For example, the human brain consists of 10^11 densely connected neurons, but only a few hundred can be recorded in parallel (subsampling). However, under subsampling the observed network activity can differ tremendously from the real network activity, and consequently inferences about network properties remain limited.

We studied how subsampling affects the observed multi-site activity in neural network models and in intracranial recordings (monkey, human). For systematic subsampling, we varied the number of and distance between sampled units in the model and between electrodes in the neural tissue analogously.

We found that the observed activity depended on all of the following factors: the network topology, the interaction rules between the units, and the number of and distance between sampled units. Hence, each model showed specific subsampling effects. Most interestingly, only one model matched the neural activity. Systematic subsampling thus offers a promising new approach to model selection.
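The core subsampling effect can be illustrated with a toy simulation of ours (not one of the network models from the study): simulate activity on a model network, then observe it through a small sampled subset of units, as an electrode array would.

```python
import random

def simulate(n_units, steps, p_drive, p_spread, seed=1):
    """Toy network activity: each active unit activates one random unit in
    the next step with probability p_spread; external drive activates each
    unit with probability p_drive. Returns the set of active units per step."""
    rng = random.Random(seed)
    active, history = set(), []
    for _ in range(steps):
        nxt = {rng.randrange(n_units) for _ in active if rng.random() < p_spread}
        nxt |= {u for u in range(n_units) if rng.random() < p_drive}
        history.append(nxt)
        active = nxt
    return history

def observed(history, sampled):
    """Activity as seen on a subsampled set of units (e.g. a few electrodes)."""
    return [len(step & sampled) for step in history]

history = simulate(n_units=1000, steps=200, p_drive=0.01, p_spread=0.9)
full = observed(history, set(range(1000)))   # full sampling
sub = observed(history, set(range(50)))      # only 5% of the units sampled
```

Comparing the statistics of `sub` with those of `full` (e.g. avalanche size distributions rather than raw counts) shows how strongly inferences can depend on the sampling, which is exactly the effect exploited for model selection.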

References

1. Priesemann V, Munk MHJ, Wibral M: Subsampling effects in neuronal avalanche distributions recorded in vivo. BMC Neurosci 2009, 10:40.

2. Priesemann V, Valderrama M, Wibral M, Le Van Quyen M: Neuronal avalanches differ from wakefulness to deep sleep: Evidence from intracranial depth recordings in humans. PLoS Comput Biol, in press.

3. Gerhard F, Pipa G, Lima B, Neuenschwander S, Gerstner W: Extraction of Network Topology From Multi-Electrode Recordings: Is there a Small-World Effect? Front Comput Neurosci 2011, 5.

### Adrià Tauste Campo, Barcelona, Spain

Estimation of directed information between simultaneous spike trains in decision making

We infer causal dependencies in simultaneously recorded spike trains and study how these dependencies correlate with stimuli and behavior in a decision-making task. To do so, we estimate the directed information between pairs of recordings with a Bayesian procedure based on a universal data compression algorithm. The underlying algorithm provides a sequential probability assignment, with complexity linear in time, for data generated by processes with arbitrarily long memory and without any prior knowledge of the true distribution [1]. The resulting distribution is then plugged into a conveniently defined directed information estimator [2]. The method applies to jointly stationary and ergodic finite-alphabet processes that incorporate large-scale dependencies, and it is thus well suited to analyzing spike-train recordings. Moreover, the estimator is proven to be consistent and shown to achieve lower bias and variance than other known estimators without further corrections [3].

We discuss the application of the estimator to pairs of mutually correlated processes and evaluate the impact of the delay, memory, and directionality of the interaction on the final estimate. Finally, we apply the method to spike trains recorded simultaneously in two somatosensory areas and three pre-motor and motor areas during a vibrotactile discrimination task [4]. Specifically, we quantify the significance of causal dependencies between the five brain areas with respect to the final decision and characterize the space-time scale (neuron pairs, task period, and delay) at which this significance is manifested.
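The quantity being estimated can be illustrated with a much simpler fixed-memory plug-in estimator, sketched below. This is an assumption-laden toy of ours (closer to transfer entropy than to the universal context-tree-weighting estimator of the talk), with illustrative variable names.

```python
import random
from collections import Counter
from math import log2

def cond_entropy(samples):
    """Empirical H(target | context) in bits from (context, target) pairs."""
    joint = Counter(samples)
    ctx = Counter(c for c, _ in samples)
    n = len(samples)
    return -sum(cnt / n * log2(cnt / ctx[c]) for (c, _t), cnt in joint.items())

def directed_info_rate(x, y, k=1):
    """Plug-in estimate (bits/symbol) of how much the past of x helps
    predict y beyond y's own past, with a fixed memory of k symbols."""
    h_y = cond_entropy([(tuple(y[t - k:t]), y[t]) for t in range(k, len(y))])
    h_yx = cond_entropy([((tuple(y[t - k:t]), tuple(x[t - k:t])), y[t])
                         for t in range(k, len(y))])
    return max(h_y - h_yx, 0.0)

rng = random.Random(0)
x = [rng.randrange(2) for _ in range(4000)]  # a random binary "spike" sequence
y = [0] + x[:-1]                             # y copies x with a one-step delay
forward = directed_info_rate(x, y)           # close to 1 bit/symbol
reverse = directed_info_rate(y, x)           # close to 0: no influence back
```

The universal estimator replaces the fixed memory `k` with a context-tree-weighted probability assignment, which is what makes it consistent for arbitrarily long dependencies.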

References

1. F. Willems, Y. Shtarkov, and T. Tjalkens, “The context-tree weighting method: Basic properties,” IEEE Transactions on Information Theory, vol. 41, no. 3, pp. 653–664, 1995.

2. J. Jiao, H. Permuter, L. Zhao, Y. Kim, and T. Weissman, “Universal estimation of directed information,” To appear in IEEE Transactions on Information Theory, 2013.

3. Y. Gao, I. Kontoyiannis, and E. Bienenstock, “Estimating the entropy of binary time series: Methodology, some theory and a simulation study,” Entropy, vol. 10, no. 2, pp. 71–99, 2008.

4. R. Romo and E. Salinas, “Flutter discrimination: neural codes, perception, memory and decision making,” Nature Reviews Neuroscience, vol. 4, no. 3, pp. 203–218, 2003.

Collaborators:

Marina Martínez-García, Verónica Nácher, Gustavo Deco, and Ranulfo Romo