Iara Frondana, Rodrigo R.S. Carvalho, Florencia Leonardi
Epilepsy has been a central topic in computational neuroscience, and in silico models have proven to be excellent tools for integrating and evaluating findings from animal and clinical settings. Among the different languages and tools for computational model development, NEURON stands out as one of the most widely used and mature neurosimulators. However, despite the vast number of models developed with NEURON, a fragmentation problem is evident: the great majority of models of the same cell type or cell property are not interoperable because of differences in their parameters.
Cecilia Romaro, Fernando Araujo Najman, Morgan André
In this paper we present a numerical study of a mathematical model of spiking neurons introduced by Ferrari et al. in an article entitled "Phase Transition for Infinite Systems of Spiking Neurons". In this model we have a countable number of neurons linked together in a network, each of them having a membrane potential taking values in the integers, and each of them spiking over time at a rate which depends on the membrane potential through some rate function ϕ. Besides being affected by spikes, each neuron can also be affected by leakage. At each of these leak times, which occur for a given neuron at a fixed rate γ, the membrane potential of the neuron concerned is spontaneously reset to 0.
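The dynamics described above can be sketched with a small Gillespie-style simulation of a finite version of the model. The hard-threshold rate function ϕ(u) = 1 if u > 0 (else 0), the ring-shaped nearest-neighbour interaction, and the initial condition are illustrative assumptions, not taken from the abstract:

```python
import random

def simulate(n_neurons=20, gamma=0.5, t_max=50.0, seed=1):
    """Gillespie-style simulation of a finite ring version of the model.

    Assumptions (not from the abstract): hard-threshold rate function
    phi(u) = 1 if u > 0 else 0, nearest-neighbour interaction on a ring,
    every neuron starting with membrane potential 1.
    """
    rng = random.Random(seed)
    u = [1] * n_neurons          # integer membrane potentials
    t, spikes = 0.0, 0
    while t < t_max:
        # With the hard threshold, only "active" neurons (u > 0) can spike,
        # and a leak on a neuron already at 0 changes nothing, so inactive
        # neurons can be ignored without changing the law of the process.
        active = [i for i in range(n_neurons) if u[i] > 0]
        total_rate = len(active) * (1.0 + gamma)
        if total_rate == 0:
            break                # extinction: all potentials are at rest
        t += rng.expovariate(total_rate)
        i = rng.choice(active)
        if rng.random() < 1.0 / (1.0 + gamma):
            u[i] = 0                           # spike: reset the spiker ...
            u[(i - 1) % n_neurons] += 1        # ... and increment the two
            u[(i + 1) % n_neurons] += 1        # ring neighbours
            spikes += 1
        else:
            u[i] = 0                           # leak: spontaneous reset to 0
    return t, spikes
```

Because leaks and spikes are independent exponential clocks, each event is a spike with probability 1/(1 + γ) and a leak otherwise, which is what the branching inside the loop implements.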
Antonio Galves, Eva Löcherbach, Christophe Pouzat, Errico Presutti
In this paper we present a simple microscopic stochastic model describing short-term plasticity within a large homogeneous network of interacting neurons. Each neuron is represented by its membrane potential and by the residual calcium concentration within the cell at a given time. Neurons spike at a rate depending on their membrane potential. When spiking, the residual calcium concentration of the spiking neuron increases by one unit. Moreover, an additional amount of potential is given to all other neurons in the system. This amount depends linearly on the current residual calcium concentration within the cell of the spiking neuron. In between successive spikes, the potentials and the residual calcium concentrations of each neuron decrease at a constant rate.
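A minimal Euler-discretization sketch of these dynamics follows. The rate function ϕ(u) = max(u, 0), the reset of the spiking neuron's potential to 0, the flooring at 0, and all parameter values are illustrative assumptions, not taken from the paper:

```python
import random

def simulate_gl(n=5, t_max=5.0, dt=1e-3, alpha=1.0, beta=0.5, w=0.2, seed=0):
    """Euler discretization of the membrane-potential / residual-calcium
    dynamics sketched in the abstract above.

    Assumptions (not from the paper): rate function phi(u) = max(u, 0),
    the spiker's potential resets to 0, and both quantities are floored
    at 0 during the constant-rate decrease.
    """
    rng = random.Random(seed)
    u = [1.0] * n        # membrane potentials
    c = [0.0] * n        # residual calcium concentrations
    spikes = []
    t = 0.0
    while t < t_max:
        for i in range(n):
            if rng.random() < max(u[i], 0.0) * dt:  # spike w.p. phi(u_i) dt
                c[i] += 1.0                         # calcium jumps by one unit
                gain = w * c[i]                     # linear in residual calcium
                for j in range(n):
                    if j != i:
                        u[j] += gain                # potential given to all others
                u[i] = 0.0                          # assumed reset of the spiker
                spikes.append((t, i))
        # between spikes, both quantities decrease at a constant rate
        u = [max(x - alpha * dt, 0.0) for x in u]
        c = [max(x - beta * dt, 0.0) for x in c]
        t += dt
    return spikes
```

The returned list of (time, neuron) pairs is a discrete-time approximation; shrinking `dt` brings the spike probabilities per step closer to the continuous-time rates.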
In 2018, Ferrari et al. wrote a paper called “Phase Transition for Infinite Systems of Spiking Neurons” in which they introduced a continuous-time stochastic model of interacting neurons. This model consists of a countable number of neurons, each of them having an integer-valued membrane potential whose value determines the rate at which the neuron spikes. The model also has a parameter 𝛾, corresponding to the rate of the leak times of the neurons, that is, the times at which the membrane potential of a given neuron is spontaneously reset to its resting value (which is 0 by convention). As its title says, it was proven in that previous article that this model presents a phase transition phenomenon with respect to 𝛾. Here we prove that this model also exhibits metastable behavior. By this we mean that if 𝛾 is small enough, then the renormalized time of extinction of a finite version of this system converges to an exponential random variable of mean 1 as the number of neurons goes to infinity.
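The metastability statement can be probed numerically: estimate the extinction time of a finite version of the system over many runs and check whether, after rescaling by the mean, the distribution looks exponential (coefficient of variation close to 1). The sketch below makes several illustrative assumptions not found in the abstract: a hard-threshold rate function, a ring interaction, and a value of 𝛾 chosen large enough that runs terminate quickly, whereas the theorem itself concerns the small-𝛾 regime:

```python
import random
import statistics

def extinction_time(n=10, gamma=1.5, max_events=10**5, rng=None):
    """Extinction time of a finite version of the model: the first time
    at which every membrane potential is 0.

    Assumptions (not from the abstract): rate function phi(u) = 1_{u > 0},
    ring-shaped interaction, all potentials starting at 1. The event cap
    guarantees termination even for parameter values far from extinction.
    """
    rng = rng or random.Random(0)
    u = [1] * n
    t = 0.0
    for _ in range(max_events):
        active = [i for i in range(n) if u[i] > 0]
        if not active:
            return t                   # extinct: every potential at rest
        t += rng.expovariate(len(active) * (1.0 + gamma))
        i = rng.choice(active)
        if rng.random() < 1.0 / (1.0 + gamma):
            u[i] = 0                               # spike: reset the spiker ...
            u[(i - 1) % n] += 1                    # ... and excite the two
            u[(i + 1) % n] += 1                    # ring neighbours
        else:
            u[i] = 0                               # leak: reset to 0
    return t                           # cap reached without extinction

rng = random.Random(42)
times = [extinction_time(rng=rng) for _ in range(300)]
mean = statistics.mean(times)
cv = statistics.stdev(times) / mean    # close to 1 for an exponential law
```

For an exponential random variable the coefficient of variation equals 1, so `cv` gives a crude one-number diagnostic of the convergence claimed in the abstract.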
Vinícius Lima Cordeiro, Rodrigo Felipe de Oliveira Pena, Cesar Augusto Celis Ceballos, Renan Oliveira Shimoura and Antonio Carlos Roque
Neurons respond to external stimuli by emitting sequences of action potentials (spike trains). In this way, one can say that the spike train is the neuronal response to an input stimulus. Action potentials are “all-or-none” phenomena, which means that a spike train can be represented by a sequence of zeros and ones. In the context of information theory, one can then ask: how much information about a given stimulus does the spike train convey? Or rather, which aspects of the stimulus are encoded by the neuronal response? In this article, an introduction to information theory is presented, covering historical aspects, fundamental concepts of the theory, and applications to neuroscience. The connection to neuroscience is made through demonstrations and discussions of different methods of information theory. Examples are given through computer simulations of two neuron models, the Poisson neuron and the integrate-and-fire neuron, and of a cellular automaton network model. In the latter case, it is shown how information theory measures can be used to retrieve the connectivity matrix of a network.
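The central question, how much information about a stimulus the response conveys, can be illustrated with a plug-in estimate of the mutual information I(S; R) between a binary stimulus and the spike count of a Poisson neuron. This is a generic sketch, not the article's own method; the two firing rates and the sample size are arbitrary choices:

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Plug-in (naive histogram) estimate of I(S; R) in bits from a list
    of (stimulus, response) samples."""
    n = len(pairs)
    ps, pr, psr = Counter(), Counter(), Counter(pairs)
    for s, r in pairs:
        ps[s] += 1
        pr[r] += 1
    mi = 0.0
    for (s, r), c in psr.items():
        # p(s,r) * log2( p(s,r) / (p(s) p(r)) ), with counts substituted
        mi += (c / n) * math.log2(c * n / (ps[s] * pr[r]))
    return mi

def poisson(lam, rng):
    """Poisson sample via Knuth's multiplication algorithm (small lam)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Toy "Poisson neuron": the stimulus s in {0, 1} selects the firing rate,
# and the response r is the spike count in one time window.
rng = random.Random(0)
samples = []
for _ in range(5000):
    s = rng.randrange(2)
    r = poisson(2.0 if s else 0.5, rng)
    samples.append((s, r))
mi = mutual_information(samples)
```

Since the stimulus is binary, the estimate is bounded above by H(S) ≤ 1 bit; the plug-in estimator is known to be biased upward for small samples, which is why the article's methods (and the literature) go beyond this naive version.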