A Result of Metastability for an Infinite System of Spiking Neurons

Morgan André

In 2018, Ferrari et al. wrote a paper called “Phase Transition for Infinite Systems of Spiking Neurons” in which they introduced a continuous-time stochastic model of interacting neurons. This model consists of a countable number of neurons, each of them having an integer-valued membrane potential whose value determines the rate at which the neuron spikes. The model also has a parameter γ, corresponding to the rate of the leak times of the neurons, that is, the times at which the membrane potential of a given neuron is spontaneously reset to its resting value (which is 0 by convention). As its title says, that article proved that the model presents a phase transition phenomenon with respect to γ. Here we prove that this model also exhibits a metastable behavior. By this we mean that if γ is small enough, then the renormalized time of extinction of a finite version of this system converges toward an exponential random variable of mean 1 as the number of neurons goes to infinity.
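As a concrete illustration, here is a minimal Gillespie-style simulation sketch of a finite version of such a system. It assumes a hard-threshold spike rate (a neuron with positive potential spikes at rate 1) and nearest-neighbour interactions on a finite line; these choices, the initial condition (all potentials equal to 1) and all parameter values are illustrative assumptions, not taken from the abstract.

```python
import random

def simulate_extinction_time(n_neurons, gamma, rng):
    """Gillespie-style simulation of a finite version of the system.

    Illustrative assumptions: spike rate 1 for any neuron with positive
    potential, nearest-neighbour interactions on a line of n_neurons
    neurons, and all membrane potentials starting at 1.
    """
    potential = [1] * n_neurons
    t = 0.0
    while True:
        active = [i for i, x in enumerate(potential) if x > 0]
        if not active:
            return t                      # extinction: no neuron can ever spike again
        spike_rate = len(active)          # each active neuron spikes at rate 1
        leak_rate = gamma * n_neurons     # each neuron leaks at rate gamma
        total_rate = spike_rate + leak_rate
        t += rng.expovariate(total_rate)  # waiting time until the next event
        if rng.random() < spike_rate / total_rate:
            i = rng.choice(active)        # a spike: neuron i resets to 0 ...
            potential[i] = 0
            for j in (i - 1, i + 1):      # ... and its neighbours gain one unit
                if 0 <= j < n_neurons:
                    potential[j] += 1
        else:
            potential[rng.randrange(n_neurons)] = 0   # a leak resets some neuron to 0

rng = random.Random(1)
samples = [simulate_extinction_time(20, 1.0, rng) for _ in range(100)]
mean_time = sum(samples) / len(samples)
normalized = [s / mean_time for s in samples]
print(f"mean extinction time: {mean_time:.2f}")
```

For small γ and a large number of neurons, the normalized extinction times should look approximately Exp(1)-distributed, which is the metastability statement above; note that runs become much longer as γ decreases.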

Information theory applications in neuroscience

Vinícius Lima Cordeiro, Rodrigo Felipe de Oliveira Pena, Cesar Augusto Celis Ceballos, Renan Oliveira Shimoura and Antonio Carlos Roque

Neurons respond to external stimuli by emitting sequences of action potentials (spike trains). In this sense, the spike train is the neuronal response to an input stimulus. Action potentials are “all-or-none” phenomena, which means that a spike train can be represented by a sequence of zeros and ones. In the context of information theory, one can then ask: how much information about a given stimulus does the spike train convey? Or rather, which aspects of the stimulus are encoded by the neuronal response? In this article, an introduction to information theory is presented, covering historical aspects, fundamental concepts of the theory, and applications to neuroscience. The connection to neuroscience is made through demonstrations and discussions of different information-theoretic methods. Examples are given through computer simulations of two neuron models, the Poisson neuron and the integrate-and-fire neuron, and of a cellular automaton network model. In the latter case, it is shown how information theory measures can be used to retrieve the connectivity matrix of a network.
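As a small companion example (not taken from the article), the sketch below estimates the mutual information between a binary stimulus and the spike count of a Poisson neuron, using the naive plug-in estimator; the stimulus probabilities, firing rates and counting window are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: a binary stimulus drives a Poisson neuron whose
# spike count in a fixed window has mean 2 (stimulus off) or 8 (stimulus on).
n_trials = 100_000
stimulus = rng.integers(0, 2, size=n_trials)   # S in {0, 1}, each with probability 1/2
rates = np.where(stimulus == 0, 2.0, 8.0)
counts = rng.poisson(rates)                     # R = spike count per trial

def mutual_information(x, y):
    """Plug-in estimate of I(X; Y) in bits from paired discrete samples."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    np.add.at(joint, (x, y), 1)                 # joint histogram of (X, Y)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / (px @ py)[nz])))

print(f"I(S; R) ~ {mutual_information(stimulus, counts):.3f} bits (at most 1 bit)")
```

With well-separated firing rates the estimate approaches the 1-bit entropy of the stimulus; with overlapping rates it drops toward 0.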

Asymmetrical voltage response in resonant neurons shaped by nonlinearities

R. F. O. Pena, V. Lima, R. O. Shimoura, C. C. Ceballos, H. G. Rotstein and A. C. Roque

The conventional impedance profile of a neuron can identify the presence of resonance and other properties of the neuronal response to oscillatory inputs, such as nonlinear response amplifications, but it cannot distinguish other nonlinear properties such as asymmetries in the shape of the voltage response envelope. Experimental observations have shown that the response of neurons to oscillatory inputs preferentially enhances either the upper or the lower part of the voltage envelope in different frequency bands. These asymmetric voltage responses arise in a neuron model when it is driven by oscillatory currents of variable frequency and sufficiently high amplitude. We show how the nonlinearities associated with different ionic currents, or present in the model as captured by its voltage equation, lead to an asymmetrical response, and how high-amplitude oscillatory currents emphasize this response. We propose a geometrical explanation of the phenomenon in which asymmetries result not only from nonlinearities in the activation curves of the ionic currents but also from nonlinearities captured by the nullclines in the phase-plane diagram and from the system’s time-scale separation. In addition, we identify an unexpected frequency-dependent pattern that develops in the gating variables of these currents and is a product of strong nonlinearities in the system, as we show by controlling such behavior through manipulation of the activation curve parameters. The results reported in this paper shed light on the ionic mechanisms by which brain-embedded neurons process oscillatory information.
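One common way to make the envelope asymmetry quantitative (our notation here, not necessarily the article’s) is to complement the standard impedance amplitude with separate upper and lower envelope profiles:

```latex
% Assumed notation: a sinusoidal input I(t) = A_in sin(2 pi f t) is applied
% around the resting potential V_rest, and V_max(f), V_min(f) denote the
% steady-state extrema of the voltage response at frequency f.
\[
  Z(f) = \frac{V_{\max}(f) - V_{\min}(f)}{2 A_{\mathrm{in}}}, \qquad
  Z^{+}(f) = \frac{V_{\max}(f) - V_{\mathrm{rest}}}{A_{\mathrm{in}}}, \qquad
  Z^{-}(f) = \frac{V_{\mathrm{rest}} - V_{\min}(f)}{A_{\mathrm{in}}}.
\]
```

Since Z(f) is the average of Z⁺(f) and Z⁻(f), the conventional profile alone cannot distinguish a symmetric response from an asymmetric one with the same total envelope width, which is why the separate upper and lower profiles are needed.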

Transcranial magnetic stimulation: a brief review on the principles and applications

Renan Hiroshi Matsuda, Gabriela Pazin Tardelli, Carlos Otávio Guimarães, Victor Hugo Souza and Oswaldo Baffa Filho

Transcranial magnetic stimulation is a noninvasive method of stimulating the human cortex. Known as TMS, the technique was introduced by Barker et al. in 1985. Its operation is based on Faraday’s law: an intense, rapidly varying magnetic field induces an electric field at the surface of the brain, depolarizing neurons in the cerebral cortex. Due to its versatility, TMS is currently used in both research and clinical applications. Among the clinical applications, TMS is used as a diagnostic tool and also as a therapeutic technique for some neurodegenerative diseases and psychiatric disorders, such as depression, Parkinson’s disease and tinnitus. As a diagnostic tool, motor mapping delineates the area of cortical representation of a target muscle, and can be applied in studies of cerebral physiology and in the evaluation of damage to the motor cortex and the corticospinal tract. This review aims to introduce the physics, the basic elements, the biological principles and the main applications of transcranial magnetic stimulation.
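For reference, the induction principle invoked above can be written, in differential and integral form, as:

```latex
% Faraday's law of induction, differential and integral (flux) form.
\[
  \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t},
  \qquad
  \mathcal{E} = -\frac{d\Phi_B}{dt}.
\]
```

A brief, high-amplitude current pulse in the stimulation coil produces a rapidly changing magnetic field B, and the induced electric field E in the underlying tissue drives the currents that depolarize cortical neurons.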

Asymptotically Deterministic Time of Extinction for a Stochastic System of Spiking Neurons

Morgan André

We consider a countably infinite system of spiking neurons. In this model each neuron has a membrane potential which takes values in the non-negative integers. Each neuron is also associated with two point processes. The first one is a Poisson process of some parameter γ, representing the leak times, that is, the times at which the membrane potential of the neuron is spontaneously reset to 0. The second point process, which represents the spiking times, has a non-constant rate which depends on the membrane potential of the neuron at time t. This model was previously proven to present a phase transition with respect to the parameter γ. It was also proven that the renormalized time of extinction of a finite version of the system converges in law toward an exponential random variable when the number of neurons goes to infinity, which indicates a metastable behavior. Here we prove a result which is in some sense symmetrical to this last one: we prove that when γ > 1 (the super-critical regime) the renormalized time of extinction converges in probability to 1.
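Writing σ_N for the time of extinction of the finite system with N neurons, and assuming (as one natural reading of the two abstracts, which do not spell out the normalization) that renormalization means division by the expected extinction time, the two regimes can be summarized as:

```latex
% Assumed normalization by the mean extinction time; sigma_N is the
% extinction time of the finite system with N neurons.
\[
  \frac{\sigma_N}{\mathbb{E}[\sigma_N]} \xrightarrow[N \to \infty]{d} \mathrm{Exp}(1)
  \quad \text{for } \gamma \text{ small enough},
  \qquad
  \frac{\sigma_N}{\mathbb{E}[\sigma_N]} \xrightarrow[N \to \infty]{\mathbb{P}} 1
  \quad \text{for } \gamma > 1.
\]
```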
