Neuronal Dynamics (12)

Neuronal Populations

The online version of this chapter:


Chapter 12 Neuronal Populations https://neuronaldynamics.epfl.ch/online/Ch12.html


The aim of this chapter is to provide the foundation of the notions of 'neuronal population' and 'population activity'.

Columnar organization

We present in this section a short introduction to the structural organization and functional characterization of cortex.

Receptive fields

Simple cells in visual cortex are sensitive to the orientation of a light bar.

In this and the following chapters, we exploit the fact that neighboring neurons in visual cortex have similar receptive fields.

Example: Cortical Maps

Neighboring neurons have similar receptive fields, but the exact characteristics of the receptive fields change slightly as one moves parallel to the cortical surface.

How many populations?

Inside a column, neurons are organized in different layers, and each layer contains one or several types of neurons. Moving from superficial to deep layers, the distinction between neuron classes becomes less clear-cut. Moreover, the number of populations that a theoretician takes into account depends on the level of 'coarse-graining' that they are ready to accept, as well as on the amount of information that is available from experiments.

Distributed assemblies

The mathematical notion of a population does not require that neurons form a local group in order to qualify as a homogeneous population.

Donald Hebb introduced the notion of neuronal assemblies, i.e., groups of cells which get activated together so as to represent a mental concept. An assembly can be a group of neurons which are distributed across one or several areas. However, such an assignment of a neuron to a population is not fixed, but can depend on the stimulus.

Identical Neurons: A Mathematical Abstraction

In a population of \(N\) neurons, the population activity is \[ A(t)=\lim_{\Delta t \to 0}\frac{1}{\Delta t}\frac{n_{act}(t;t+\Delta t)}{N}=\frac{1}{N}\sum_{j=1}^{N} \sum_{f}^{} \delta(t-t_j^{(f)}), \tag{12.1} \]
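As a concrete illustration, the finite-\(\Delta t\) version of (12.1) can be evaluated on simulated spike trains. Everything below (rates, window, the helper name `population_activity`) is a hypothetical setup for illustration, not code from the book:

```python
import numpy as np

# Hypothetical spike data: one array of spike times per neuron.
rng = np.random.default_rng(0)
N = 100                      # number of neurons
T = 1.0                      # observation window (s)
rate = 10.0                  # firing rate per neuron (Hz)
spike_trains = [np.sort(rng.uniform(0.0, T, rng.poisson(rate * T)))
                for _ in range(N)]

def population_activity(spike_trains, t, dt, N):
    """Empirical A(t): spikes in [t, t+dt) across the population,
    divided by dt and by the number of neurons N (Eq. 12.1 with finite dt)."""
    n_act = sum(np.count_nonzero((s >= t) & (s < t + dt)) for s in spike_trains)
    return n_act / (dt * N)

A = population_activity(spike_trains, t=0.5, dt=0.05, N=N)
# For independent Poisson neurons, A fluctuates around the single-neuron rate.
```

Averaged over the whole window, the empirical activity equals the total spike count divided by \(TN\), i.e. the mean single-neuron rate.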

For the sake of notational simplicity, we do not distinguish the observed activity from its expectation value and denote in the following the expected activity by \(A(t)\).

Homogeneous networks

By homogeneous we mean that

- all neurons \(1\leqslant i\leqslant N\) are identical;
- all neurons receive the same external input \(I^{ext}_i(t)=I^{ext}(t)\);
- the interaction strength \(w_{ij}\) for the connection between any pair \(j,i\) of pre- and postsynaptic neurons is 'statistically uniform'.

Homogeneous population of integrate-and-fire neurons

We assume that a neuron is coupled to all others as well as to itself with coupling strength \(w_{ij}=w_0\). The input current \(I_i\) takes care of both the external drive and synaptic coupling \[ I_i=\sum_{j=1}^{N} \sum_{f}^{} w_{ij} \alpha(t-t_j^{(f)})+I^{ext}(t). \tag{12.3} \] Here we have assumed that each input spike generates a postsynaptic current with some generic time course \(\alpha(t-t_j^{(f)})\).

Using (12.1), we find a total input current, \[ I(t)=w_0 N \int_{0}^{\infty} \alpha(s)A(t-s) \mathrm{d}s+I^{ext}(t), \tag{12.4} \] which is independent of the neuronal index \(i\). Thus, the input current at time \(t\) depends on the past population activity and is the same for all neurons.
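A discretized version of (12.4) can be sketched as follows; the exponential kernel \(\alpha\), the time step, and all parameter values are assumptions for illustration:

```python
import numpy as np

# Sketch of Eq. (12.4): I(t) = w0*N * ∫ α(s) A(t-s) ds + I_ext(t),
# discretized on a time grid with an assumed exponential PSC kernel.
dt = 0.1e-3                               # time step (s)
tau_s = 5e-3                              # synaptic time constant (assumption)
t_grid = np.arange(0.0, 0.1, dt)
alpha = np.exp(-t_grid / tau_s) / tau_s   # normalized kernel: ∫ α(s) ds = 1

w0, N = 0.1 / 1000, 1000                  # w0 = J0/N with J0 = 0.1 (assumption)
A = np.full_like(t_grid, 10.0)            # constant population activity, 10 Hz
I_ext = 0.5                               # constant external drive (arb. units)

# Discrete convolution over the past activity, truncated to the grid length.
I_rec = w0 * N * np.convolve(alpha, A)[:len(t_grid)] * dt
I_total = I_rec + I_ext
# For constant A, the recurrent term saturates at w0*N*A0 = 0.1*10 = 1.0.
```

After a few synaptic time constants the recurrent contribution settles at \(w_0 N A_0\), so the total current is the same for every neuron, as stated in the text.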

Connectivity Schemes

In the following we discuss some schemes with a special focus on the scaling behavior induced by each choice of coupling scheme. Here, scaling behavior refers to a change in the number \(N\) of neurons that participate in the population.

Full connectivity

With all-to-all connectivity, all connections have the same strength. An appropriate scaling law is \[ w_{ij}=\frac{J_0}{N}. \tag{12.6} \]

A slightly more intricate all-to-all coupling scheme is the following: weights \(w_{ij}\) are drawn from a Gaussian distribution with mean \(J_0/N\) and standard deviation \(\sigma/\sqrt{N}\). The fluctuations of the membrane potential are then of order \(\sigma\) even in the limit of large \(N\).
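The point of this scaling can be checked numerically: the summed weight onto one neuron keeps mean \(\approx J_0\) and standard deviation \(\approx \sigma\) regardless of \(N\). A minimal sketch, with all parameter values assumed:

```python
import numpy as np

rng = np.random.default_rng(1)
J0, sigma = 1.0, 0.5

def summed_weight_stats(N, trials=1000):
    """Mean and std of the total weight onto one postsynaptic neuron,
    with N presynaptic weights drawn from N(J0/N, (sigma/sqrt(N))**2)."""
    totals = rng.normal(J0 / N, sigma / np.sqrt(N), size=(trials, N)).sum(axis=1)
    return totals.mean(), totals.std()

m_small, s_small = summed_weight_stats(100)
m_big, s_big = summed_weight_stats(4000)
# Both means stay near J0 = 1.0 and both stds near sigma = 0.5,
# so input fluctuations survive in the large-N limit.
```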

Random coupling: Fixed coupling probability

Experimentally the probability \(p\) that a neuron inside a cortical column makes a functional connection to another neuron in the same column is in the range of 10%, but varies across layers.

The number of presynaptic input links \(C_j\) to a postsynaptic neuron \(j\) has a mean value of \(\langle C_j \rangle=pN\), but fluctuates between one neuron and the next with variance \(p(1-p)N\).

Alternatively, we can take one model neuron \(j=1,2,3,\cdots N\) after the other and choose randomly \(C=pN\) presynaptic partners for it.

It is useful to scale the strength of the connections as \[ w_{ij}=\frac{J_0}{C}=\frac{J_0}{pN}, \tag{12.7} \]
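A quick numerical check of the in-degree statistics quoted above and of the scaling (12.7), under assumed values of \(N\), \(p\) and \(J_0\):

```python
import numpy as np

# Random connectivity with fixed probability p: each possible connection
# exists independently with probability p, so the in-degree C_j has mean p*N
# and variance p*(1-p)*N, and w_ij = J0/(p*N) keeps the summed weight near J0.
rng = np.random.default_rng(2)
N, p, J0 = 1000, 0.1, 1.0

adj = rng.random((N, N)) < p           # adj[i, j]: connection from j to i
in_degree = adj.sum(axis=1)            # C_j, number of presynaptic inputs

mean_C = in_degree.mean()              # ≈ p*N = 100
var_C = in_degree.var()                # ≈ p*(1-p)*N = 90
total_w = (J0 / (p * N)) * in_degree   # summed input weight per neuron, ≈ J0
```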

Random coupling: Fixed number of presynaptic partners

We pick one model neuron \(j=1,2,3,\cdots N\) after the other and choose randomly its \(C\) presynaptic partners. Whenever the network size \(N\) is much bigger than \(C\), the inputs to a given neuron can be thought of as random samples from the current network activity. No scaling of the connections with the population size \(N\) is necessary.

Balanced excitation and inhibition

If the total amounts of excitation and inhibition cancel each other, excitation and inhibition are said to be 'balanced'. The resulting network is called a balanced network or a population with balanced excitation and inhibition.

We can scale synaptic weights so as to control specifically the amount of fluctuations of the input current around zero. An appropriate choice is \[ w_{ij}=\frac{J_0}{\sqrt{C}}=\frac{J_0}{\sqrt{pN}}. \tag{12.8} \]
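The effect of the \(1/\sqrt{C}\) scaling (12.8) can be sketched with Poisson input spike counts: the mean input cancels while its fluctuations remain of order \(J_0\sqrt{\nu\,\Delta t}\), independent of \(C\). All parameter values and the half-excitatory/half-inhibitory split are assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
J0, nu, dt = 1.0, 10.0, 0.1        # weight scale, rate (Hz), window (s)

def input_samples(C, trials=1000):
    """Net input charge in one window from C balanced inputs of weight
    ±J0/sqrt(C): C//2 excitatory and C//2 inhibitory Poisson spike counts."""
    w = J0 / np.sqrt(C)
    exc = rng.poisson(nu * dt, size=(trials, C // 2)).sum(axis=1)
    inh = rng.poisson(nu * dt, size=(trials, C // 2)).sum(axis=1)
    return w * (exc - inh)

small = input_samples(100)
big = input_samples(10000)
# Means are near zero for both; standard deviations stay near
# J0*sqrt(nu*dt) = 1.0, independent of C.
```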

Interacting Populations

We assume that neurons are homogeneous within each pool. The activity of neurons in pool \(n\) is \[ A_n(t)=\frac{1}{N_n}\sum_{j \in \Gamma_n}^{} \sum_{f}^{} \delta(t-t_j^{(f)}), \tag{12.9} \] where \(N_n\) is the number of neurons in pool \(n\) and \(\Gamma_n\) denotes the set of neurons that belong to pool \(n\). Each neuron \(i\) in pool \(n\) receives input from all neurons \(j\) in pool \(m\) with strength \(w_{ij}=J_{nm}/N_m\). The time course \(\alpha_{ij}(s)\) caused by a spike of a presynaptic neuron \(j\) may depend on the synapse type. The input current to a neuron \(i\) in group \(\Gamma_n\) is generated by the spikes of all neurons in the network, \[ I_{i,n}=\sum_{j}^{} \sum_{f}^{} w_{ij}\alpha_{ij}(t-t_j^{(f)})=\sum_{m}^{} J_{nm}\int_{0}^{\infty} \alpha_{nm}(s)\sum_{j\in \Gamma_m}^{} \sum_{f}^{} \frac{\delta(t-t_j^{(f)}-s)}{N_m} \mathrm{d}s, \tag{12.10} \]

where \(\alpha_{nm}(t-t_j^{(f)})\) denotes the time course of a postsynaptic current caused by spike firing at time \(t_j^{(f)}\) of the presynaptic neuron \(j\) which is part of population \(m\). So \[ I_n=\sum_{m}^{} J_{nm}\int_{0}^{\infty} \alpha_{nm}(s)A_m(t-s) \mathrm{d}s. \tag{12.11} \] We have dropped the index \(i\) since the input current is the same for all neurons in pool \(n\).

Distance dependent connectivity

For models of distance-dependent connectivity it is necessary to assign to each model neuron \(i\) a location \(x(i)\) on the two-dimensional cortical sheet.

Two different algorithmic procedures can be used to assign distance-dependent connectivity. The first one assumes full connectivity with a strength \(w_{ij}\) which falls off with distance

\[ w_{ij}=w(\lvert x(i)-x(j) \rvert ), \tag{12.12} \]

One may assume finite support so that \(w\) vanishes for distances \(\lvert x(i)-x(j) \rvert >d\).

The second alternative is to give all connections the same weight, but to assume that the probability \(P\) of forming a connection depends on the distance \[ \operatorname{Pr}(w_{ij}=1)=P(\lvert x(i)-x(j) \rvert ), \tag{12.13} \]
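A minimal sketch of the second procedure (12.13), assuming random positions on a unit sheet and a Gaussian probability profile (both assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(4)
N, L, d0 = 400, 1.0, 0.1             # neurons, sheet side length, length scale

x = rng.uniform(0.0, L, size=(N, 2))                 # positions x(i) on the sheet
dist = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
P = np.exp(-dist**2 / (2 * d0**2))                   # P(|x(i) - x(j)|), Gaussian
adj = rng.random((N, N)) < P                         # unit-weight connections
np.fill_diagonal(adj, False)                         # no self-connections

off_diag = ~np.eye(N, dtype=bool)
mean_d_conn = dist[adj].mean()       # typical distance of connected pairs
mean_d_all = dist[off_diag].mean()   # typical distance of all pairs
# Connected pairs are, on average, much closer than arbitrary pairs.
```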

Spatial Continuum Limit

For neurons organized in a spatially extended multidimensional network, a description by discrete pools does not seem appropriate. However, a transition from discrete pools to a continuous population is possible.

We consider a population of neurons that extends along a one-dimensional axis and discretize space in segments of size \(d\). The number of neurons in the interval \([nd,(n+1)d]\) is \(N_n=\rho d\) where \(\rho\) is the spatial density. Neurons in that interval form the group \(\Gamma_n\).

We replace our notation \[ A_m(t) \longrightarrow A(md,t) =A(y,t). \tag{12.14} \]

We have \(J_{nm}=\rho d w(nd,md)\). Use (12.11) and find \[ I(nd,t)=\rho \sum_{m}^{} d w(nd,md) \int_{0}^{\infty} \alpha(s)A(md,t-s) \mathrm{d}s, \tag{12.15} \] where \(\alpha(s)\) describes the time course of the postsynaptic current caused by spike firing in one of the presynaptic neurons. For \(d \to 0\), we arrive at \[ I(x,t)=\rho \int_{}^{} w(x,y)\int_{0}^{\infty} \alpha(s)A(y,t-s) \mathrm{d}s \mathrm{d}y, \tag{12.16} \]

To rephrase (12.16) in words, the input to neurons at location \(x\) depends on the spatial distribution of the population activity convolved with the spatial coupling filter \(w(x,y)\) and the temporal filter \(\alpha(s)\). The population activity \(A(y,t-s)\Delta s\) is the number of spikes in a short interval \(\Delta s\) summed across neurons in the neighborhood around \(y\) normalized by the number of neurons in that neighborhood.

From Microscopic to Macroscopic

We now make the transition from the properties of single spiking neurons to the population activity in a homogeneous group of neurons.

Stationary activity and asynchronous firing

We define asynchronous firing of a neuronal population as a macroscopic firing state with constant activity \(A(t)=A_0\). We will see that the only relevant single-neuron property is its gain function, i.e. its mean firing rate as a function of input.

If the filter is kept fixed while the population size is increased, the population activity in the stationary state of asynchronous firing approaches the constant value \(A_0\).

Stationary Activity as Single-Neuron Firing Rate

In a finite population, the empirical activity fluctuates and we can predict the expectation value \[ \langle A_0\rangle =\nu_i. \tag{12.18} \] The mean firing rate is given by the gain function \[ \nu_i=g_{\sigma}(I_0), \tag{12.19} \] where the subscript \(\sigma\) is intended to remind the reader that the shape of the gain function depends on the level of noise.

Activity of a fully connected network

We know \[ \langle A_0\rangle =g_{\sigma}(I). \tag{12.21} \] The gain function in the absence of any noise (fluctuation amplitude \(\sigma=0\)) will be denoted by \(g_0\).

We can impose a normalization \(\int_{0}^{\infty} \alpha(s) \mathrm{d}s=1\) and set \(\int_{0}^{\infty} \alpha(s)A(t-s) \mathrm{d}s=A_0\).

Therefore, the assumption of stationary activity \(A_0\) combined with the assumption of constant external input \(I^{ext}(t)=I_0^{ext}\) yields a constant total driving current \[ I_0=w_0 NA_0+I_0^{ext}. \tag{12.23} \]

Together with (12.21) we arrive at an implicit equation for the population activity \(A_0\), \[ A_0=g_0(J_0A_0+I_0^{ext}), \tag{12.24} \] where \(g_0\) is the noise-free gain function of single neurons and \(J_0=w_0N\).
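Equation (12.24) can be solved by fixed-point iteration whenever the map is a contraction. The sigmoidal gain function `g0` below is a hypothetical choice for illustration, not the gain function of any particular neuron model:

```python
import numpy as np

def g0(I):
    """Hypothetical noise-free gain function: sigmoidal, saturating at 100 Hz."""
    return 100.0 / (1.0 + np.exp(-(I - 1.0)))

def solve_A0(J0, I0_ext, A0=0.0, iters=200):
    """Fixed-point iteration of A0 = g0(J0*A0 + I0_ext), Eq. (12.24)."""
    for _ in range(iters):
        A0 = g0(J0 * A0 + I0_ext)
    return A0

A0 = solve_A0(J0=0.002, I0_ext=0.5, A0=10.0)
# At convergence, A0 satisfies the self-consistency condition by construction.
```

Here \(J_0 g_0'\) is well below one everywhere, so the plain iteration converges; for stronger coupling a damped update or a root finder would be needed.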

Example: Leaky integrate-and-fire model with diffusive noise

We consider a large and fully connected network of identical leaky integrate-and-fire neurons with homogeneous coupling \(w_{ij}=J_0/N\) and normalized postsynaptic currents \(\int_{0}^{\infty} \alpha(s) \mathrm{d}s=1\). In the state of asynchronous firing, the total input current driving a typical neuron of the network is then

\[ I_0=I^{ext}+J_0A_0. \tag{12.25} \] In addition, each neuron receives individual diffusive noise of variance \(\sigma^{2}\) that could represent spike arrival from other populations. The single-neuron gain function in the presence of diffusive noise has been stated in (8.54). We use the formula of the gain function to calculate the population activity \[ A_0=g_{\sigma}(I_0)=\left\{ \tau_m \sqrt{\pi}\int_{\frac{u_r-RI_0}{\sigma}}^{\frac{\theta-RI_0}{\sigma}} \exp (u^{2})[1+\text{erf}(u)] \mathrm{d}u\right\}^{-1}, \tag{12.26} \]

(Siegert-formula)
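A sketch of how (12.26) can be evaluated numerically with simple trapezoidal quadrature; the parameter values are illustrative assumptions, and a production implementation would need more care with the large values of \(\exp(u^2)\) far below threshold:

```python
import numpy as np
from math import erf, sqrt, pi

# Illustrative LIF parameters (assumptions): time constant, resistance,
# threshold and reset in matching arbitrary units.
tau_m, R, theta, u_r = 0.020, 1.0, 1.0, 0.0

def siegert_rate(I0, sigma, n=4000):
    """Gain function (12.26), evaluated by trapezoidal quadrature."""
    lo = (u_r - R * I0) / sigma
    hi = (theta - R * I0) / sigma
    u = np.linspace(lo, hi, n)
    f = np.exp(u**2) * (1.0 + np.array([erf(float(v)) for v in u]))
    du = u[1] - u[0]
    integral = (f.sum() - 0.5 * (f[0] + f[-1])) * du
    return 1.0 / (tau_m * sqrt(pi) * integral)

rate = siegert_rate(I0=1.1, sigma=0.2)
# The rate is positive and increases with the mean drive I0.
```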

Activity of a randomly connected network

In this subsection, we discuss how to mathematically treat the additional noise arising from the network.

If all neurons fire at a rate \(\nu\) then the mean input current to neuron \(i\) generated by its \(C_{pre}\) presynaptic partners is \[ \langle I_0\rangle =C_{pre}qw\nu+I_0^{ext}, \tag{12.27} \] where \(q=\int_{0}^{\infty} \alpha(s) \mathrm{d}s\) denotes the integral over the postsynaptic current and can be interpreted as the total electric charge delivered by a single input spike.

The input current is not constant but fluctuates with a variance \(\sigma_{I}^{2}\) given by \[ \sigma_{I}^{2}=C_{pre} w^{2} q_2 \nu, \tag{12.28} \] where \(q_2=\int_{0}^{\infty} \alpha^{2}(s) \mathrm{d}s\).

Brunel network: excitatory and inhibitory populations

We assume that excitatory and inhibitory neurons have the same parameters \(\theta, \tau_m, R\) and \(u_r\). All neurons are driven by a common external current \(I^{ext}\). Each neuron in the population receives \(C_{E}\) synapses from excitatory neurons with weight \(w_{E}>0\) and \(C_{I}\) synapses from inhibitory neurons with weight \(w_{I}<0\).

If an input spike arrives at the synapses of neuron \(i\) from a presynaptic neuron \(j\), its membrane potential changes by an amount \(\Delta u_{E}=w_{E}qR/\tau_m\) if \(j\) is excitatory and \(\Delta u_{I}=\Delta u_{E} w_{I}/w_{E}\) if \(j\) is inhibitory. We set \[ \gamma=\frac{C_{I}}{C_{E}}, \quad g=-\frac{w_{I}}{w_{E}}=-\frac{\Delta u_{I}}{\Delta u_{E}}. \tag{12.30} \]

Since excitatory and inhibitory neurons receive the same number of input connections in our model, we assume that they fire with a common firing rate \(\nu\). The total input current generated by the external current and by the lateral couplings is \[ I_0=I_0^{ext}+q\sum_{j}^{} \nu_j w_j=I_0^{ext}+q\nu w_{E}C_{E}[1-\gamma g]. \tag{12.31} \]

We measure the noise strength by the variance \(\sigma_{u}^{2}\) of the membrane potential (as opposed to the variance \(\sigma_{I}^{2}\) of the input). From Chapter 8, we set \(\sigma_{u}^{2}=\frac{1}{2}\sigma^{2}\) where \[ \sigma^{2}=\tau_m\sum_{j}^{} \nu_j (\Delta u_j)^{2}=\tau_m\nu(\Delta u_{E})^{2} C_{E}[1+\gamma g^{2}]. \tag{12.32} \] The stationary firing rate \(A_0\) of the population with mean input \(I_0\) and noise amplitude \(\sigma\) is copied from (12.26) and repeated here for convenience

\[ A_0=\nu=g_{\sigma}(I_0)=\frac{1}{\tau_m}\left\{\sqrt{\pi}\int_{\frac{u_r-RI_0}{\sigma}}^{\frac{\theta-RI_0}{\sigma}} \exp (u^{2})[1+\text{erf}(u)] \mathrm{d}u\right\}^{-1}, \tag{12.33} \]

Numerical solutions of (12.31)-(12.33) have been obtained by Amit and Brunel.
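For concreteness, (12.31) and (12.32) can be evaluated for an assumed parameter set (all values below are illustrative, not taken from the text):

```python
# Mean input (12.31) and membrane-potential variance (12.32) of the Brunel
# network. All parameter values are assumptions for illustration.
tau_m = 0.020          # membrane time constant (s)
q = 1.0                # total charge per input spike (normalized PSC)
C_E, gamma, g = 400, 0.25, 5.0    # C_I = gamma*C_E, relative inhibition g
w_E = 0.1              # excitatory weight (arbitrary units)
du_E = 0.1             # EPSP amplitude (mV, assumption)
nu, I_ext = 10.0, 0.0  # common rate (Hz) and external drive

I_0 = I_ext + q * nu * w_E * C_E * (1 - gamma * g)          # Eq. (12.31)
sigma2 = tau_m * nu * du_E**2 * C_E * (1 + gamma * g**2)    # Eq. (12.32)
sigma_u2 = 0.5 * sigma2
# With g = 5 and gamma = 0.25, the factor 1 - gamma*g = -0.25 makes the mean
# recurrent drive inhibitory, while 1 + gamma*g**2 = 7.25 keeps the noise large.
```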

Example: Inhibition dominated network

Suppose the mean feedback is dominated by inhibition. The effective coupling is \(J^{eff}=\tau_m C_{E}\Delta u_{E}(1-\gamma g)\). In this case (12.31) is to be replaced by \[ h_0=\tau_m \nu \Delta u_{E}C_{E}[1-\gamma g]+\tau_m \nu_{ext}\Delta u_{ext} C_{ext}, \tag{12.34} \] with \(C_{ext}\) the number of connections that a neuron receives from neurons outside the population, \(\Delta u_{ext}\) their typical coupling strength characterized by the amplitude of the voltage jump, and \(\nu_{ext}\) their spike arrival rate. Due to the extra stochasticity in the input, the variance \(\sigma_u^{2}\) of the membrane voltage is larger: \[ \sigma_u^{2}=\frac{1}{2}\sigma^{2}=\frac{1}{2}\tau_m \nu(\Delta u_{E})^{2}C_{E}[1+\gamma g^{2}]+\frac{1}{2}\tau_m \nu_{ext}(\Delta u_{ext})^{2} C_{ext}. \tag{12.35} \]

Equations (12.33)-(12.35) can be solved numerically.

Example: Vogels-Abbott network

Excitatory and inhibitory model neurons have the same parameters and are connected with the same probability \(p\) within and across the two sub-populations. The two differences from the Brunel network are:

- the random connectivity in the Vogels-Abbott network does not preserve the number of presynaptic partners per neuron, so that some neurons receive more and others fewer than \(pN\) connections;
- neurons in the Vogels-Abbott network communicate with each other via conductance-based synapses.

A spike fired at time \(t_j^{(f)}\) causes a change in conductance \[ \tau_g \frac{\mathrm{d}g}{\mathrm{d}t}=-g+\tau_g \Delta g \sum_{f}^{} \delta(t-t_j^{(f)}). \tag{12.36} \] Thus, a synaptic input causes for \(t>t_j^{(f)}\) a contribution to the conductance \(g(t)=\Delta g \exp [-(t-t_j^{(f)})/\tau_g]\).
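The conductance dynamics (12.36) can be integrated with a simple forward-Euler sketch; time step, time constant, jump size, and spike times are all assumptions for illustration:

```python
import numpy as np

# Exponential conductance kicked by presynaptic spikes, Eq. (12.36):
# between spikes, tau_g * dg/dt = -g; each spike adds a jump of size dg.
dt, tau_g, dg = 0.1e-3, 5e-3, 1.0     # step (s), time constant (s), jump
t = np.arange(0.0, 0.05, dt)
spike_times = [0.010, 0.012, 0.030]   # assumed presynaptic spike times (s)

g = np.zeros_like(t)
spike_idx = {int(round(ts / dt)) for ts in spike_times}
for k in range(1, len(t)):
    g[k] = g[k-1] - dt * g[k-1] / tau_g   # forward-Euler leak toward zero
    if k in spike_idx:
        g[k] += dg                        # conductance jump per spike

# Between spikes, g decays as dg * exp(-(t - t_f)/tau_g), as stated above.
```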

The dominant effect of conductance based input is a decrease of the effective membrane time constant. The mean input current \(I_0\) and the fluctuations \(\sigma\) of the membrane voltage also enter into the time constant \(\tau_{eff}\).

The Siegert formula holds only for short time constants for the conductances (\(\tau_{E}\to 0\) and \(\tau_{I}\to 0\)).


http://example.com/2022/10/05/Neuronal-Dynamics-12/
Author: John Doe
Posted on: October 5, 2022