Markovian Spiking Neural Network

Surrogate for the leaky integrate-and-fire (LIF) model

See original code.

Surrogate for the FitzHugh-Nagumo (FHN) model

We refer to the system \[ \begin{cases} \frac{\mathrm{d}V}{\mathrm{d}t} &= -\frac{1}{\tau}\left((V-V_R)(V-V_{th})(V-V_M)+w \right) + I_{ext}(t) \\ \frac{\mathrm{d}w}{\mathrm{d}t} &= b(V-c)-\gamma w \end{cases} \tag{1} \]

As we will see, the parameters in the evolution of \(w\) do not matter in the Markovian surrogate. We set \(V_R=-70\) mV (the resting potential), \(V_{th}=-50\) mV (the firing threshold), and \(V_M=40\) mV (the magnitude of the peak of the spike).

The state space is discretized as follows. The \(V\) space is discretized into \(V_M-V_I-1\) states, \([V_I+1,V_M-1]\) (I found it difficult to deal with the boundary, i.e. the stable fixed points). Here, \(V_I\) is the magnitude of hyperpolarization after a spike. It is not arbitrary and must be negative enough (which is not ideal; I think it is inherited from a deficit of the original FitzHugh-Nagumo model). The \(w\) space is discretized into \(2\) states, \(0\) and \(w_M\), where \(w_M\) is determined by \(V_I\): \[ w_{M} = -(V_{I}-V_{R})(V_{I}-V_{th})(V_{I}-V_{M})>0 \]

So we have \(V_{I}<V_{R}<V_{th}<V_{M}\). Now we explain how the membrane potential evolves.
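For example, with an assumed value \(V_{I}=-90\) mV (illustrative only; these notes do not fix \(V_I\) numerically), \[ w_{M} = -(-90+70)(-90+50)(-90-40) = -(-20)(-40)(-130) = 104000 > 0. \]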

  • Each neuron receives independent Poisson kicks from an external source with rate \(\lambda^{E}\) or \(\lambda^{I}\).
  • Each neuron receives synaptic input from within the population. As in the original Markovian model, denote by \(H^{E}\) and \(H^{I}\) the numbers of kicks 'in play'.
  • The membrane potential evolves as follows:
    • If \(V_{I}< V < V_{th}\), then \(w=0\) and \(V\) evolves under the cubic term, external Poisson kicks and synaptic input from other neurons.
    • If \(V_{th}\leq V < V_{M}\), then \(V\) evolves solely under the cubic term.
    • If \(V=V_{M}\), then \(w\) is set to \(w_{M}\) at rate \(\frac{1}{\tau_{w}}\); we regard the neuron as firing.
    • If \(V_{I}<V\leq V_{M}\) and \(w=w_{M}\), then the neuron is in the repolarization/hyperpolarization phase and \(V\) evolves solely under the cubic term; we regard the neuron as being in a refractory period.
    • If \(V=V_{I}\), then \(w\) is set to \(0\) at rate \(\frac{1}{\tau_{w}}\).

The evolution of \(V\) is questionable and needs to be discussed.
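To make the rules above concrete, here is a minimal single-neuron sketch of one way the surrogate could be simulated. It is not the reference implementation: the network input \(H^{E}\), \(H^{I}\) is omitted, the cubic term is turned into a jump rate simply by taking the absolute value of the RHS of \((1)\) (one of the options discussed later), and the values of \(V_I\), \(\tau\), \(\tau_w\) and \(\lambda^{E}\) are assumptions made only so the sketch runs.

```python
import numpy as np

rng = np.random.default_rng(0)

# V_R, V_th, V_M follow the notes; every other value here is assumed.
V_R, V_th, V_M = -70, -50, 40
V_I = -90                       # assumed hyperpolarization level (V_I < V_R)
w_M = -(V_I - V_R) * (V_I - V_th) * (V_I - V_M)
tau, tau_w = 20.0, 5.0          # assumed time constants
lam_E = 2.0                     # assumed external excitatory kick rate

def drift(V, w):
    """Right-hand side of (1), without external/synaptic input."""
    return -((V - V_R) * (V - V_th) * (V - V_M) + w) / tau

def step(V, w):
    """One Gillespie-style transition of a single neuron (network input omitted)."""
    rates, moves = [], []
    d = drift(V, w)
    if d != 0:
        # Simplest rate choice: |RHS of (1)| as the jump rate (option 3 below);
        # jumps that would leave [V_I + 1, V_M - 1] are suppressed.
        direction = 1 if d > 0 else -1
        if V_I + 1 <= V + direction <= V_M - 1:
            rates.append(abs(d)); moves.append(("V", direction))
    if w == 0 and V <= V_th:                     # kicks act up to and including V_th
        rates.append(lam_E); moves.append(("V", +1))
    if w == 0 and V == V_M - 1:                  # spike: V_M itself lies outside the state space
        rates.append(1.0 / tau_w); moves.append(("w", w_M))
    if w == w_M and V == V_I + 1:                # end of the refractory period
        rates.append(1.0 / tau_w); moves.append(("w", 0))
    total = sum(rates)
    dt = rng.exponential(1.0 / total)
    kind, val = moves[rng.choice(len(moves), p=np.array(rates) / total)]
    if kind == "V":
        V += val
    else:
        w = val
    return V, w, dt

V, w, t = V_R, 0, 0.0
for _ in range(20000):
    V, w, dt = step(V, w)
    t += dt
```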

Issue: questions about the original code

  • Does the code really match the paper? Is it correct to model the independent exponential clock of each synaptic input in play as a term in the rate matrix?

Issue: linear non-superimposability

The construction of the transition matrix involves calculating the rate associated with the leaky/cubic term. Separating out the leaky term is straightforward in the leaky integrate-and-fire model, but it is more complicated when the leaky term is not linear.

In the leaky integrate-and-fire model, the leaky term is \[ \frac{\mathrm{d}V}{\mathrm{d}t} = -\frac{1}{\tau}V \]

Notice that we deliberately set the resting potential to \(0\). In the case \(V>0\),

\[ V(t)=V(0)\exp \left( -\frac{t}{\tau} \right) \]

Set \(V(t)=V-1, V(0)=V\), then \[ t = \tau \ln \left( 1+\frac{1}{V-1} \right) \]

So the rate is given by \[ \frac{1}{t} = \frac{1}{\tau\ln \left( 1+\frac{1}{V-1} \right) } \tag{2} \]

Similarly, in the case \(V<0\), the rate of \(V\) to \(V+1\) is given by \[ \frac{1}{t} = \frac{1}{\tau \ln \left( 1-\frac{1}{V+1} \right) } \tag{3} \]
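A small helper evaluating \((2)\) and \((3)\) (a sketch for illustration; the function name and the explicit exclusion of the singular values are my own):

```python
import math

def lif_rate(V, tau):
    """Rate of the unit jump toward 0 for dV/dt = -V/tau, per (2) and (3).

    Assumes integer V with |V| >= 2; V in {-1, 0, 1} are the singular /
    fixed-point values that have to be handled separately (see below).
    """
    if V > 1:        # rate of V -> V - 1, equation (2)
        return 1.0 / (tau * math.log(1.0 + 1.0 / (V - 1)))
    if V < -1:       # rate of V -> V + 1, equation (3)
        return 1.0 / (tau * math.log(1.0 - 1.0 / (V + 1)))
    raise ValueError("V in {-1, 0, 1} must be handled separately")

print(lif_rate(10, tau=20.0))   # roughly 10/tau = 0.5 for moderately large V
```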

However, such rates are not linearly additive. Since the 'leaky' term in \((1)\) is cubic, how to calculate the rate it represents is a question.

To illustrate the linear non-superimposability, consider a quadratic 'leaky' term \[ \frac{\mathrm{d}V}{\mathrm{d}t} = - V(V+1) \]

and \(V>0\). Treating \(V(V+1)\) as a whole, the rate of \(V\) to \(V-1\) is given by \[ \frac{1}{2\ln V-\ln (V-1)-\ln (V+1)} \tag{4} \]

Separating \(V(V+1)\) into \(V^{2}\) and \(V\), the rate of \(V\) to \(V-1\) is given by \[ \frac{1}{\ln V -\ln (V-1)} + V(V-1) \tag{5} \]

Obviously, \((4)\) and \((5)\) are not equal, though they are both quite close to \(V(V+1)\).
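A quick numeric comparison of \((4)\) and \((5)\) (illustration only):

```python
import math

def rate_whole(V):
    """Equation (4): treat V(V+1) as a single drift term."""
    return 1.0 / (2 * math.log(V) - math.log(V - 1) - math.log(V + 1))

def rate_split(V):
    """Equation (5): add the rates obtained from V**2 and V separately."""
    return 1.0 / (math.log(V) - math.log(V - 1)) + V * (V - 1)

for V in (2, 5, 10):
    print(V, rate_whole(V), rate_split(V))
# The two columns differ, so the rates do not superimpose linearly.
```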

Several options to calculate the rate of the cubic term in \((1)\) are:

  • Consider \((V-V_R)(V-V_{th})(V-V_M)\) as a whole, treating \(w\) separately.
  • Separate the \(O(V^{3})\), \(O(V^{2})\), \(O(V)\), \(O(1)\) and \(w\) terms.
  • Directly use the absolute value of the RHS of \((1)\) as the rate.

Issue: boundary & stable fixed points

We must be very careful about the boundary and the fixed points, i.e., \(V=V_{I}\), \(V=V_{R}\), \(V=V_{th}\), \(V=V_{M}\), though \(V=V_{I}\) and \(V=V_{M}\) are impossible due to our choice of state space.

In \((2)\), \(V\) cannot be \(0\) or \(1\). In \((3)\), \(V\) cannot be \(-1\) or \(0\). This is relatively simple. However, if we use the first option above to calculate the rate of the cubic term, then the rate of \(V\) to \(V-1\) is given by \[ \frac{1}{\tau \left( p \ln (1+\frac{1}{V-V_{R}-1})+q\ln (1+\frac{1}{V-V_{th}-1}) + r\ln (1+\frac{1}{V-V_{M}-1}) \right) } \tag{6} \]

where \((p,q,r)\) is the solution to \[ \begin{pmatrix} 1 & 1 & 1 \\ V_{th}+V_{M} & V_{R} + V_{M} & V_{R} + V_{th} \\ V_{th}V_{M} & V_{R}V_{M} & V_{R}V_{th} \end{pmatrix} \begin{pmatrix} p \\ q \\ r \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix} \] i.e., the coefficients of the partial-fraction decomposition \(\frac{1}{(V-V_R)(V-V_{th})(V-V_M)} = \frac{p}{V-V_R}+\frac{q}{V-V_{th}}+\frac{r}{V-V_M}\).
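A small numerical check of this decomposition with the parameter values above (the use of `numpy` here is just for illustration):

```python
import numpy as np

V_R, V_th, V_M = -70.0, -50.0, 40.0

# Linear system for the partial-fraction coefficients p, q, r.
A = np.array([
    [1.0, 1.0, 1.0],
    [V_th + V_M, V_R + V_M, V_R + V_th],
    [V_th * V_M, V_R * V_M, V_R * V_th],
])
p, q, r = np.linalg.solve(A, np.array([0.0, 0.0, 1.0]))

# Check the decomposition at a point away from the roots.
V = -60.0
lhs = 1.0 / ((V - V_R) * (V - V_th) * (V - V_M))
rhs = p / (V - V_R) + q / (V - V_th) + r / (V - V_M)
print(p, q, r)
print(np.isclose(lhs, rhs))  # True
```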

Similarly, the rate of \(V\) to \(V+1\) is given by \[ \frac{1}{\tau \left( p \ln (1-\frac{1}{V-V_{R}+1})+q\ln (1-\frac{1}{V-V_{th}+1}) + r\ln (1-\frac{1}{V-V_{M}+1}) \right) } \tag{7} \]

In \((6)\), \(V\) cannot be \(V_{R}, V_{R}+1, V_{th}, V_{th}+1, V_{M}\). In \((7)\), \(V\) cannot be \(V_{R}-1, V_{R}, V_{th}-1, V_{th}, V_{M}-1, V_{M}\). It's laborious to tackle all the cases.

Current approach to dealing with the boundary: the state space of \(V\) is narrowed to \([V_{I}+1, V_{M}-1]\), and the neuron evolves under the cubic term, Poisson kicks and synaptic inputs around \(V_{R}\). A code sketch of this handling follows the list below.

  • Around \(V=V_{R}\), the cubic term does not contribute if \(V=V_{R}, V_{R}+1\) when \(V\) is going to \(V-1\), and \(V=V_{R}, V_{R}-1\) when \(V\) is going to \(V+1\).
  • Around \(V=V_{th}\), the cubic term does not contribute if \(V=V_{th}, V_{th}+1\) when \(V\) is going to \(V-1\), and \(V=V_{th}, V_{th}-1\) when \(V\) is going to \(V+1\). Moreover, since \(V=V_{th}\) is a fixed point when \(w=0\), the neuron still receives Poisson kicks and synaptic inputs at \(V=V_{th}\), and evolves solely under the cubic term when \(V\geqslant V_{th}+1\).
  • Around \(V=V_{M}\), we think the neuron fires if \(V=V_{M}-1\), so there is no \(V\) to \(V+1\) issue in \((7)\).
  • Around \(V=V_{I}\), there is actually no issue, since \(V=V_{I}\) is not a singular point of \((6)\). Recall that we separate the cubic term from \(w=w_{M}\), and the rate contributed by \(w_{M}\) is \(w_M/\tau\).
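Here is a minimal sketch of the downward rate \((6)\), with the \(w\) term separated and the boundary handling above. The closed-form \(p,q,r\), the zero-contribution treatment at the singular states, and the value of \(\tau\) are my reading/assumptions, not the notes' implementation; \(V\) is assumed to take integer values in \([V_I+1, V_M-1]\).

```python
import math

V_R, V_th, V_M = -70, -50, 40
tau = 20.0   # assumed time constant

# Closed-form partial-fraction coefficients (equivalently, the solution of the
# linear system above).
p = 1.0 / ((V_R - V_th) * (V_R - V_M))
q = 1.0 / ((V_th - V_R) * (V_th - V_M))
r = 1.0 / ((V_M - V_R) * (V_M - V_th))

def cubic_rate_down(V, w):
    """Rate of V -> V - 1: the cubic term via (6), plus the separated w term."""
    rate = w / tau                        # w is 0 or w_M and contributes w/tau downward
    if V in (V_R, V_R + 1, V_th, V_th + 1):
        return rate                       # the cubic term does not contribute at these states
    denom = (p * math.log(1 + 1 / (V - V_R - 1))
             + q * math.log(1 + 1 / (V - V_th - 1))
             + r * math.log(1 + 1 / (V - V_M - 1)))
    if denom <= 0:
        return rate                       # the cubic drift points upward here, so it adds no downward jump
    return rate + 1.0 / (tau * denom)
```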

Issue: parameters to make neuron fire

It is harder (than in the LIF surrogate) to find a set of parameters that makes the neuron fire.