Stochastic Induction of Long-Term Potentiation and Long-Term Depression

G. Antunes, A. C. Roque & F. M. Simoes-de-Souza

Long-term depression (LTD) and long-term potentiation (LTP) of granule-Purkinje cell synapses are persistent synaptic alterations induced by high and low rises of the intracellular calcium ion concentration ([Ca2+]), respectively. The occurrence of LTD involves the activation of a positive feedback loop formed by protein kinase C, phospholipase A2, and the extracellular signal-regulated protein kinase pathway, and its expression comprises the reduction of the population of synaptic AMPA receptors. Recently, a stochastic computational model of these signalling processes demonstrated that, in single synapses, LTD is probabilistic and bistable. Here, we expanded this model to simulate LTP, which requires protein phosphatases and the increase in the population of synaptic AMPA receptors. Our results indicated that, in single synapses, while LTD is bistable, LTP is gradual. Ca2+ induced both processes stochastically. The magnitudes of the Ca2+ signals and the states of the signalling network regulated the likelihood of LTP and LTD and defined dynamic macroscopic Ca2+ thresholds for the synaptic modifications in populations of synapses according to an inverse Bienenstock, Cooper and Munro (BCM) rule or a sigmoidal function. In conclusion, our model presents a unifying mechanism that explains the macroscopic properties of LTP and LTD from their dynamics in single synapses.
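The inverse-BCM behaviour described above can be illustrated with a minimal sketch (not the authors' model): combining a hypothetical sigmoidal LTD probability with a gradual LTP increment yields a mean weight change that is positive for low Ca2+ amplitudes and negative for high ones. All function names and parameter values below are illustrative assumptions.

```python
import math

def p_ltd(ca, theta=1.0, k=4.0):
    """Hypothetical sigmoidal probability that a single synapse
    undergoes bistable LTD, as a function of peak Ca2+ amplitude
    (arbitrary units); theta plays the role of a macroscopic Ca2+
    threshold and k sets the steepness."""
    return 1.0 / (1.0 + math.exp(-k * (ca - theta)))

def expected_weight_change(ca, w_ltp=0.3, w_ltd=-0.5):
    """Mean synaptic change over a population: each synapse undergoes
    LTD with probability p_ltd(ca) and gradual LTP otherwise, so the
    population average traces an inverse-BCM-like curve."""
    p = p_ltd(ca)
    return (1.0 - p) * w_ltp + p * w_ltd
```

With these illustrative parameters, low-amplitude Ca2+ signals produce a net potentiation and high-amplitude signals a net depression, mirroring the macroscopic thresholds of the abstract.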

A stochastic version of the Potjans-Diesmann cortical column model

Vinicius L. Cordeiro, Renan O. Shimoura, Nilton L. Kamiji, Osame Kinouchi and Antonio C. Roque

Experimental evidence suggests that neurons and neural circuits display stochastic variability [1] and, therefore, it is important to have neural models that capture this stochasticity. There are basically two types of noise model for a neuron [2]: (1) spike generation is modeled deterministically and noise enters the dynamics via additional stochastic terms; or (2) spike generation is directly modeled as a stochastic process. Recently, Galves and Löcherbach [3] introduced a neural model of the latter type in which the firing of a neuron at a given time t is a random event with probability given by a monotonically increasing function of its membrane potential V. The Galves-Löcherbach (GL) model has as one of its components a graph of interactions between neurons. In this work we assume that this graph has the structure of the Potjans-Diesmann network model of a cortical column [4]. The Potjans-Diesmann model has four layers and two neuron types, excitatory and inhibitory, so that there are eight cell populations. The population-specific neuron densities and connectivities are taken from comprehensive anatomical and electrophysiological studies [5-6], and the model has approximately 80,000 neurons and 300 million synapses. We adjusted the parameters of the firing probability of the GL model to reproduce the firing behavior of regular-spiking (excitatory) and fast-spiking (inhibitory) neurons [7]. Then, we replaced the leaky integrate-and-fire neurons of the original Potjans-Diesmann model with these stochastic neurons to obtain a stochastic version of the Potjans-Diesmann model. The parameters of the model are the excitatory and inhibitory synaptic weights, we and wi, of the GL model [3]. We studied the firing patterns of the eight cell populations of the stochastic model in the absence of external input and characterized their behavior in the two-dimensional diagram spanned by the excitatory and inhibitory synaptic weights.
For a balanced case in which the network activity is asynchronous and irregular, the properties of the stochastic model are similar to those of the original Potjans-Diesmann model. Different neural populations have different firing rates, and inhibitory neurons have higher firing rates than excitatory neurons. In particular, the stochastic model emulates the very low firing rates of layer 2/3 observed in the original model and also experimentally [4]. We also submitted the network to random input spikes applied to layers 4 and 6 to mimic thalamic inputs, as done by Potjans and Diesmann [4], and studied the propagation of activity across layers. In conclusion, the stochastic version of the Potjans-Diesmann model can be a useful replacement for the original model in studies that require a comparison between stochastic and deterministic models.
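The GL firing mechanism can be sketched in a few lines of code; the sigmoid φ and its parameters below are illustrative stand-ins for the fitted firing-probability functions of the study, and the leak factor is likewise an assumption.

```python
import math
import random

def phi(v, v_half=1.0, slope=0.2):
    """Monotonically increasing firing probability as a function of
    membrane potential (illustrative sigmoid, not the fitted curve)."""
    return 1.0 / (1.0 + math.exp(-(v - v_half) / slope))

def gl_step(v, synaptic_input, leak=0.9, rng=random.random):
    """One discrete-time update of a single GL neuron: the neuron
    spikes with probability phi(v); a spike resets the potential to 0,
    otherwise the potential leaks and integrates the input."""
    if rng() < phi(v):
        return 0.0, True          # spike and reset
    return leak * v + synaptic_input, False
```

In a network version, `synaptic_input` would be the weighted sum of the spikes of presynaptic neighbours, with the excitatory and inhibitory weights playing the role of the two model parameters discussed above.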

A Review of Guidelines and Models for Representation of Provenance Information from Neuroscience Experiments

Margarita Ruiz-Olazar, Evandro S. Rocha, Sueli S. Rabaça, Carlos Eduardo Ribas, Amanda S. Nascimento, Kelly R. Braghetto

To manage raw data from Neuroscience experiments, we have to cope with the heterogeneity of data formats and the complexity of additional metadata, such as provenance information, that need to be collected and stored. Although some progress has already been made toward a common description of Neuroscience experimental data, to the best of our knowledge there is still no widely adopted standard model for describing this kind of data. To help neuroscientists find and use a structured and comprehensive model with robust tracking of data provenance, we present a brief evaluation of guidelines and models for the representation of raw data from Neuroscience experiments, focusing on how they support provenance tracking.

Motor planning of goal-directed action is tuned by the emotional valence of the stimulus: a kinematic study

P. O. Esteves, L. A. S. Oliveira, A. A. Nogueira-Campos, G. Saunier, T. Pozzo, J. M. Oliveira, E. C. Rodrigues, E. Volchan & C. D. Vargas

The basic underpinnings of homeostatic behavior include interacting with positive items and avoiding negative ones. As the planning aspects of goal-directed actions can be inferred from their movement features, we investigated the kinematics of interacting with emotion-laden stimuli. Participants were instructed to grasp emotion-laden stimuli and bring them toward their bodies while the kinematics of their wrist movement was measured. The results showed that the time to peak velocity increased for bringing pleasant stimuli towards the body compared to unpleasant and neutral ones, suggesting greater ease in undertaking the task with pleasant stimuli. Furthermore, bringing unpleasant stimuli towards the body increased movement time in comparison with both pleasant and neutral ones, while the time to peak velocity for unpleasant stimuli was the same as that for neutral stimuli. There was no change in the trajectory length among emotional categories. We conclude that during the “reach-to-grasp” and “bring-to-the-body” movements, the valence of the stimuli affects the temporal but not the spatial kinematic features of motion. To the best of our knowledge, we show for the first time that the kinematic features of a goal-directed action are tuned by the emotional valence of the stimuli.

The lower tail of random quadratic forms, with applications to ordinary least squares and restricted eigenvalue properties

Roberto Imbuzeiro Oliveira

Finite sample properties of random covariance-type matrices have been the subject of much research. In this paper we focus on the "lower tail" of such a matrix, and prove that it is subgaussian under a simple fourth moment assumption on the one-dimensional marginals of the random vectors. A similar result holds for more general sums of random positive semidefinite matrices, and the (relatively simple) proof uses a variant of the so-called PAC-Bayesian method for bounding empirical processes.
We give two applications of the main result. In the first one we obtain a new finite-sample bound for the ordinary least squares estimator in linear regression with random design. Our result is model-free, requires fairly weak moment assumptions and is almost optimal. Our second application is to bounding restricted eigenvalue constants of certain random ensembles with "heavy tails". These constants are important in the analysis of problems in Compressed Sensing and High Dimensional Statistics, where one recovers a sparse vector from a small number of linear measurements. Our result implies that heavy tails still allow for the fast recovery rates found in efficient methods such as the LASSO and the Dantzig selector. Along the way we strengthen, with a fairly short argument, a recent result of Rudelson and Zhou on the restricted eigenvalue property.

Synaptic Homeostasis and Restructuring across the Sleep-Wake Cycle

Wilfredo Blanco, Catia M. Pereira, Vinicius R. Cota, Annie C. Souza, César Rennó-Costa, Sharlene Santos, Gabriella Dias, Ana M. G. Guerreiro, Adriano B. L. Tort, Adrião D. Neto, Sidarta Ribeiro

Sleep is critical for hippocampus-dependent memory consolidation. However, the underlying mechanisms of synaptic plasticity are poorly understood. The central controversy is whether long-term potentiation (LTP) plays a role during sleep and, if so, what its specific effect on memory would be. To address this question, we used immunohistochemistry to measure phosphorylation of Ca^2+/calmodulin-dependent protein kinase II (pCaMKIIα) in the rat hippocampus immediately after specific sleep-wake states were interrupted. Control animals not exposed to novel objects during waking (WK) showed stable pCaMKIIα levels across the sleep-wake cycle, but animals exposed to novel objects showed a decrease during subsequent slow-wave sleep (SWS) followed by a rebound during rapid-eye-movement sleep (REM). The levels of pCaMKIIα during REM were proportional to cortical spindles near SWS/REM transitions. Based on these results, we modeled sleep-dependent LTP on a network of fully connected excitatory neurons fed with spikes recorded from the rat hippocampus across WK, SWS and REM. Sleep without LTP orderly rescaled synaptic weights to a narrow range of intermediate values. In contrast, LTP triggered near the SWS/REM transition led to marked swaps in synaptic weight ranking. To better understand the interaction between rescaling and restructuring during sleep, we implemented synaptic homeostasis and embossing in a detailed hippocampal-cortical model with both excitatory and inhibitory neurons. Synaptic homeostasis was implemented by weakening potentiation and strengthening depression, while synaptic embossing was simulated by evoking LTP on selected synapses. We observed that synaptic homeostasis facilitates controlled synaptic restructuring. The results imply a mechanism for a cognitive synergy between SWS and REM, and suggest that LTP at the SWS/REM transition critically influences the effect of sleep: in its absence synaptic homeostasis prevails, while its presence causes synaptic restructuring.
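The homeostatic rule described above, weakening potentiation and strengthening depression, can be sketched as a single-synapse update. The homeostatic factor, learning rate, and bounds below are illustrative assumptions, not the parameters of the hippocampal-cortical model.

```python
def homeostatic_update(w, coincident, lr=0.05, h=0.5, w_min=0.0, w_max=1.0):
    """One synaptic update in the spirit of the homeostatic rule:
    potentiation (on coincident pre/post activity) is weakened by the
    factor h, while depression (otherwise) is strengthened by 1/h.
    All parameter values are illustrative."""
    if coincident:
        w += lr * h * (w_max - w)            # weakened potentiation
    else:
        w -= lr * (1.0 / h) * (w - w_min)    # strengthened depression
    return min(max(w, w_min), w_max)
```

Because depression outweighs potentiation on average, repeated updates of this kind drive weights toward a narrow intermediate range, consistent with the rescaling effect reported for sleep without LTP.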

Neural Networks with Dynamical Links and Self-Organized Criticality

João Guilherme Ferreira Campos, Ariadne de Andrade Costa, Mauro Copelli, Osame Kinouchi

In a recent work, mean-field analysis and computer simulations were employed to analyze critical self-organization in excitable cellular automata on annealed networks, where randomly chosen links were depressed after each spike. Calculations agree with simulations of the annealed version, showing that the nominal branching ratio σ converges to unity and fluctuations vanish in the thermodynamic limit, as expected of a self-organized critical system. However, the question remains whether the same results apply to a biologically more plausible, quenched version, in which the neighborhoods are fixed and only the active synapses are depressed. We show that simulations of the quenched model yield significant deviations from σ=1, due to spatio-temporal correlations. The model is nonetheless critical: the largest eigenvalue λ of the synaptic matrix approaches unity, with fluctuations vanishing in the thermodynamic limit.
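The criticality criterion used for the quenched model, the largest eigenvalue of the synaptic matrix approaching unity, can be estimated numerically by power iteration. This is a generic sketch, not the authors' code, and the test matrices are illustrative.

```python
import numpy as np

def largest_eigenvalue(W, n_iter=200):
    """Estimate the dominant eigenvalue of a non-negative synaptic
    matrix W by power iteration; in the quenched model, criticality
    corresponds to this value approaching unity."""
    v = np.ones(W.shape[0]) / np.sqrt(W.shape[0])
    for _ in range(n_iter):
        w = W @ v
        v = w / np.linalg.norm(w)
    return float(v @ W @ v)   # Rayleigh quotient at the converged vector
```

For a non-negative matrix, Perron-Frobenius theory guarantees a real dominant eigenvalue, so power iteration converges whenever the dominant eigenvalue is simple.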

Non-parametric estimation of the spiking rate in systems of interacting neurons

Pierre Hodara, Nathalie Krell, Eva Löcherbach

We consider a model of interacting neurons where the membrane potentials of the neurons are described by a multidimensional piecewise deterministic Markov process (PDMP) with values in ℝ^N, where N is the number of neurons in the network. A deterministic drift attracts each neuron's membrane potential to an equilibrium potential m. When a neuron jumps, its membrane potential is reset to 0, while the other neurons receive an additional amount of potential 1/N. We are interested in the estimation of the jump (or spiking) rate of a single neuron based on an observation of the membrane potentials of the N neurons up to time t. We study a Nadaraya-Watson type kernel estimator for the jump rate and establish its rate of convergence in L². This rate of convergence is shown to be optimal for a given Hölder class of jump rate functions. We also obtain a central limit theorem for the error of estimation. The main probabilistic tools are the uniform ergodicity of the process and a fine study of the invariant measure of a single neuron.
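A discrete-time caricature of a Nadaraya-Watson type estimator is a kernel-weighted average of observed jump indicators. The Gaussian kernel and all names below are illustrative; the paper's estimator is the continuous-time analogue with its own bandwidth theory.

```python
import math

def gaussian_kernel(u):
    """Standard Gaussian kernel."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def nw_jump_rate(x, potentials, jump_counts, dt, h):
    """Nadaraya-Watson type estimate of the jump rate at potential x:
    kernel-weighted average of jump counts per time step, divided by
    the step size dt. Bandwidth h and all inputs are illustrative."""
    num = 0.0
    den = 0.0
    for v, y in zip(potentials, jump_counts):
        w = gaussian_kernel((x - v) / h)
        num += w * y
        den += w
    return num / (den * dt) if den > 0.0 else 0.0
```

As in any kernel estimator, the bandwidth h trades bias against variance; the paper's contribution is to quantify this trade-off and prove optimality over a Hölder class.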

Modeling networks of spiking neurons as interacting processes with memory of variable length

Antonio Galves, Eva Löcherbach

We consider a new class of non-Markovian processes with a countable number of interacting components, both in discrete and continuous time. Each component is represented by a point process indicating whether it has a spike or not at a given time. The system evolves as follows. For each component, the rate (in continuous time) or the probability (in discrete time) of having a spike depends on the entire time evolution of the system since the last spike time of the component. In discrete time, this class of systems extends in a nontrivial way both Spitzer’s interacting particle systems, which are Markovian, and Rissanen’s stochastic chains with memory of variable length, which have a finite state space. In continuous time, they can be seen as a kind of Rissanen’s variable-length-memory version of the class of self-exciting point processes also called “Hawkes processes”, albeit with infinitely many components. These features make this class a good candidate to describe the time evolution of networks of spiking neurons. In this article we present a critical reader’s guide to recent papers dealing with this class of models, both in discrete and in continuous time. We briefly sketch results concerning perfect simulation and existence issues, de-correlation between successive interspike intervals, the long-time behavior of finite systems, and propagation of chaos in mean-field systems.
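A minimal finite, discrete-time instance of this model class can be simulated directly: each neuron's spiking probability depends only on the input it has accumulated since its own last spike, so a spike erases the neuron's memory. The function φ and the weight matrix are illustrative choices, not taken from the papers surveyed.

```python
import random

def simulate(weights, phi, n_steps, rng=random.random):
    """Discrete-time system of interacting chains with memory of
    variable length: neuron i spikes with probability phi(u[i]), where
    u[i] is the input accumulated since i's own last spike; a spike
    resets u[i] to 0. weights[j][i] is the input j sends to i."""
    n = len(weights)
    u = [0.0] * n                 # input since each neuron's last spike
    history = []
    for _ in range(n_steps):
        spikes = [rng() < phi(u[i]) for i in range(n)]
        for i in range(n):
            if spikes[i]:
                u[i] = 0.0        # the spike erases the neuron's memory
        for j in range(n):
            if spikes[j]:
                for i in range(n):
                    if i != j:
                        u[i] += weights[j][i]
        history.append(spikes)
    return history
```

The discrete-time GL model is recovered by a suitable choice of φ; the continuous-time version replaces the per-step probability with a rate.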

Modelling intracellular competition for calcium: kinetic and thermodynamic control of different molecular modes of signal decoding

Gabriela Antunes, Antonio C. Roque, Fabio M. Simoes de Souza

Frequently, a common chemical entity triggers opposite cellular processes, which implies that the components of signalling networks must detect signals not only through their chemical natures, but also through their dynamic properties. To gain insight into the mechanisms of discrimination of the dynamic properties of cellular signals, we developed a computational stochastic model and investigated how three calcium ion (Ca2+)-dependent enzymes (adenylyl cyclase (AC), phosphodiesterase 1 (PDE1), and calcineurin (CaN)) differentially detect Ca2+ transients in a hippocampal dendritic spine. The balance among AC, PDE1 and CaN might determine the occurrence of opposite Ca2+-induced forms of synaptic plasticity, long-term potentiation (LTP) and long-term depression (LTD). CaN is essential for LTD. AC and PDE1 regulate, indirectly, protein kinase A, which counteracts CaN during LTP. Stimulation of AC, PDE1 and CaN with artificial and physiological Ca2+ signals demonstrated that AC and CaN have Ca2+ requirements that are modulated dynamically by different properties of the signals used to stimulate them, because their interactions with Ca2+ often occur under kinetic control. In contrast, PDE1 responds to the immediate amplitude of different Ca2+ transients, usually with the same Ca2+ requirements observed at steady state. Therefore, AC, PDE1 and CaN decode different dynamic properties of Ca2+ signals.
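The distinction between kinetic and thermodynamic control can be illustrated with a deterministic one-step binding sketch (the abstract's model is stochastic and far more detailed): a fast-binding enzyme tracks the instantaneous amplitude of a Ca2+ transient, while a slow-binding one lags behind its steady-state response. The rate constants, time step, and units below are illustrative assumptions.

```python
def bound_fraction(ca_trace, k_on, k_off, dt=0.001):
    """Euler integration of a one-step binding reaction
    E + Ca <-> E.Ca, returning the bound fraction over time. With
    large rate constants the enzyme equilibrates within the transient
    (thermodynamic control); with small ones it lags (kinetic control).
    All rate constants are illustrative."""
    b = 0.0
    out = []
    for ca in ca_trace:
        b += dt * (k_on * ca * (1.0 - b) - k_off * b)
        out.append(b)
    return out
```

For the same constant Ca2+ level, both enzymes share the steady-state bound fraction k_on·Ca / (k_on·Ca + k_off), but only the fast one reaches it within a brief transient.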




The Research, Innovation and Dissemination Center for Neuromathematics is hosted by the University of São Paulo and funded by FAPESP (São Paulo Research Foundation).

