Synaptic Homeostasis and Restructuring across the Sleep-Wake Cycle

Wilfredo Blanco, Catia M. Pereira, Vinicius R. Cota, Annie C. Souza, César Rennó-Costa, Sharlene Santos, Gabriella Dias, Ana M. G. Guerreiro, Adriano B. L. Tort, Adrião D. Neto, Sidarta Ribeiro

Sleep is critical for hippocampus-dependent memory consolidation. However, the underlying mechanisms of synaptic plasticity are poorly understood. The central controversy is whether long-term potentiation (LTP) plays a role during sleep and, if so, what its specific effect on memory is. To address this question, we used immunohistochemistry to measure phosphorylation of Ca^2+/calmodulin-dependent protein kinase II (pCaMKIIα) in the rat hippocampus immediately after specific sleep-wake states were interrupted. Control animals not exposed to novel objects during waking (WK) showed stable pCaMKIIα levels across the sleep-wake cycle, but animals exposed to novel objects showed a decrease during subsequent slow-wave sleep (SWS) followed by a rebound during rapid-eye-movement sleep (REM). The levels of pCaMKIIα during REM were proportional to cortical spindles near SWS/REM transitions. Based on these results, we modeled sleep-dependent LTP on a network of fully connected excitatory neurons fed with spikes recorded from the rat hippocampus across WK, SWS and REM. Sleep without LTP rescaled synaptic weights in an orderly manner toward a narrow range of intermediate values. In contrast, LTP triggered near the SWS/REM transition led to marked swaps in synaptic weight ranking. To better understand the interaction between rescaling and restructuring during sleep, we implemented synaptic homeostasis and embossing in a detailed hippocampal-cortical model with both excitatory and inhibitory neurons. Synaptic homeostasis was implemented by weakening potentiation and strengthening depression, while synaptic embossing was simulated by evoking LTP on selected synapses. We observed that synaptic homeostasis facilitates controlled synaptic restructuring. The results imply a mechanism for a cognitive synergy between SWS and REM, and suggest that LTP at the SWS/REM transition critically influences the effect of sleep: its absence leads to synaptic homeostasis, whereas its presence causes synaptic restructuring.
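The contrast between rescaling and restructuring can be caricatured with a toy example (illustrative values and update rules only, not the authors' model): homeostatic rescaling pulls every weight toward an intermediate value and preserves the weight ranking, whereas LTP ("embossing") evoked on a chosen subset of synapses reshuffles it.

```python
import random

random.seed(42)

N = 10
weights = [random.random() for _ in range(N)]  # toy synaptic weights

def rank(ws):
    """Synapse indices ordered from weakest to strongest."""
    return sorted(range(len(ws)), key=lambda i: ws[i])

def rescale(ws, target=0.5, rate=0.1, steps=50):
    """Homeostatic rescaling: pull every weight toward an intermediate
    value, narrowing the range while preserving the relative order."""
    for _ in range(steps):
        ws = [w + rate * (target - w) for w in ws]
    return ws

def emboss(ws, selected, boost=0.4):
    """Synaptic embossing: evoke LTP on a selected subset of synapses."""
    return [w + boost if i in selected else w for i, w in enumerate(ws)]

homeo = rescale(weights)
embossed = emboss(rescale(weights), selected={0, 1, 2})

print(rank(weights) == rank(homeo))     # rescaling alone preserves the ranking
print(rank(weights) == rank(embossed))  # LTP on a subset reshuffles it
```

Because the rescaling step is a monotone map of each weight, it can never swap two synapses in the ranking; only the selective LTP step can.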

Neural Networks with Dynamical Links and Self-Organized Criticality

João Guilherme Ferreira Campos, Ariadne de Andrade Costa, Mauro Copelli, Osame Kinouchi

In a recent work, mean-field analysis and computer simulations were employed to analyze critical self-organization in annealed networks of excitable cellular automata, where randomly chosen links were depressed after each spike. Calculations agree with simulations of the annealed version, showing that the nominal branching ratio σ converges to unity and fluctuations vanish in the thermodynamic limit, as expected of a self-organized critical system. However, the question remains whether the same results apply to a biologically more plausible, quenched version, in which the neighborhoods are fixed and only the active synapses are depressed. We show that simulations of the quenched model yield significant deviations from σ=1, due to spatio-temporal correlations. The model is nonetheless critical: the largest eigenvalue λ of the synaptic matrix approaches unity, with fluctuations vanishing in the thermodynamic limit.
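A minimal sketch of such a quenched model (all parameter values are invented for illustration, not taken from the paper): neighborhoods are fixed, only synapses that actually transmit are depressed, and the largest eigenvalue λ of the synaptic matrix is estimated by power iteration. Starting from a supercritical matrix, activity-dependent depression pulls λ down.

```python
import random

random.seed(0)

N, K = 100, 10        # neurons, fixed out-degree (quenched neighborhoods)
u, tau = 0.1, 200.0   # depression fraction and recovery time (assumed)
wmax = 2.0 / K        # initial weights make the network supercritical

neigh = [[random.randrange(N) for _ in range(K)] for _ in range(N)]
W = [[wmax] * K for _ in range(N)]  # W[i][k]: strength of synapse i -> neigh[i][k]

def largest_eigenvalue(iters=60):
    """Estimate the leading eigenvalue of the synaptic matrix by power iteration."""
    v = [1.0] * N
    lam = 1.0
    for _ in range(iters):
        new = [0.0] * N
        for i in range(N):
            for j, w in zip(neigh[i], W[i]):
                new[j] += w * v[i]
        lam = max(new)
        v = [x / lam for x in new]
    return lam

def step(active):
    """Propagate activity; depress only the synapses that transmitted."""
    nxt = set()
    for i in active:
        for k, j in enumerate(neigh[i]):
            if random.random() < W[i][k]:
                nxt.add(j)
                W[i][k] *= 1.0 - u
    return nxt

lam0 = largest_eigenvalue()
active = set(random.sample(range(N), 5))
for _ in range(2000):
    if not active:                    # weak external drive restarts activity
        active = {random.randrange(N)}
    active = step(active)
    for i in range(N):                # slow recovery toward wmax
        for k in range(K):
            W[i][k] += (wmax - W[i][k]) / tau
lam1 = largest_eigenvalue()
print(lam0, lam1)   # depression pulls the initially supercritical lambda down
```

Since depression only ever decreases entries of the nonnegative synaptic matrix between recoveries, the Perron eigenvalue after the run cannot exceed its initial value.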

Non-parametric estimation of the spiking rate in systems of interacting neurons

Pierre Hodara, Nathalie Krell, Eva Löcherbach

We consider a model of interacting neurons where the membrane potentials of the neurons are described by a multidimensional piecewise deterministic Markov process (PDMP) with values in ℝ^N, where N is the number of neurons in the network. A deterministic drift attracts each neuron's membrane potential to an equilibrium potential m. When a neuron jumps, its membrane potential is reset to 0, while the other neurons receive an additional amount of potential 1/N. We are interested in the estimation of the jump (or spiking) rate of a single neuron based on an observation of the membrane potentials of the N neurons up to time t. We study a Nadaraya-Watson type kernel estimator for the jump rate and establish its rate of convergence in L². This rate of convergence is shown to be optimal for a given Hölder class of jump rate functions. We also obtain a central limit theorem for the error of estimation. The main probabilistic tools are the uniform ergodicity of the process and a fine study of the invariant measure of a single neuron.
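The estimator can be sketched in a few lines (an assumption-laden toy, not the paper's construction: the "true" rate f(x) = 1 + x², unit drift speed, a Gaussian kernel, and an Euler/thinning discretization are all chosen for illustration). The Nadaraya-Watson estimate divides kernel-weighted jump counts by kernel-weighted occupation time.

```python
import math
import random

random.seed(7)

N, m = 10, 1.0          # neurons and equilibrium potential (illustrative)
dt, T = 0.005, 60.0     # Euler step and observation horizon
h = 0.1                 # kernel bandwidth

def f(x):               # "true" jump rate, assumed for the illustration
    return 1.0 + x * x

def kern(u):            # Gaussian kernel
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

grid = [0.1 * k for k in range(13)]  # evaluation points in [0, 1.2]
num = [0.0] * len(grid)              # kernel-weighted jump counts
den = [0.0] * len(grid)              # kernel-weighted occupation time

X = [0.0] * N
for _ in range(int(T / dt)):
    for i in range(N):
        for g, x in enumerate(grid):           # occupation-time accumulation
            den[g] += kern((X[i] - x) / h) * dt
    for i in range(N):
        if random.random() < f(X[i]) * dt:     # thinning approximation of a jump
            for g, x in enumerate(grid):
                num[g] += kern((X[i] - x) / h)
            X[i] = 0.0                         # the spiking neuron is reset to 0
            for j in range(N):
                if j != i:
                    X[j] += 1.0 / N            # the others gain 1/N
        else:
            X[i] += (m - X[i]) * dt            # drift toward m

est = [num[g] / den[g] if den[g] > 0 else float("nan") for g in range(len(grid))]
print([round(e, 2) for e in est])              # compare with f on the grid
```

In the well-visited region of the state space (here around x = 0.5) the ratio tracks f up to smoothing bias and sampling noise.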

Modeling networks of spiking neurons as interacting processes with memory of variable length

Antonio Galves, Eva Löcherbach

We consider a new class of non-Markovian processes with a countable number of interacting components, both in discrete and continuous time. Each component is represented by a point process indicating whether it has a spike or not at a given time. The system evolves as follows. For each component, the rate (in continuous time) or the probability (in discrete time) of having a spike depends on the entire time evolution of the system since the last spike time of the component. In discrete time this class of systems extends in a non-trivial way both Spitzer's interacting particle systems, which are Markovian, and Rissanen's stochastic chains with memory of variable length, which have a finite state space. In continuous time they can be seen as a kind of variable-length-memory version, in the sense of Rissanen, of the class of self-exciting point processes also called "Hawkes processes", however with infinitely many components. These features make this class a good candidate to describe the time evolution of networks of spiking neurons. In this article we present a critical reader's guide to recent papers dealing with this class of models, both in discrete and in continuous time. We briefly sketch results concerning perfect simulation and existence issues, de-correlation between successive interspike intervals, the long-time behavior of finite systems and propagation of chaos in mean field systems.
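In discrete time, a finite instance of this class can be sketched as follows (the weights and the spiking function φ are arbitrary illustrative choices): each neuron accumulates weighted presynaptic spikes since its own last spike, that accumulated value alone sets its next spiking probability, and spiking erases the memory — hence memory of variable length.

```python
import math
import random

random.seed(3)

N, T = 5, 10000
# Illustrative synaptic weights: uniform excitation, no self-coupling.
W = [[0.0 if i == j else 0.3 for j in range(N)] for i in range(N)]

def phi(u):
    """Spiking probability as a function of accumulated potential (assumed)."""
    return 1.0 / (1.0 + math.exp(-(u - 1.0)))

U = [0.0] * N            # potential accumulated since each neuron's last spike
counts = [0] * N
for _ in range(T):
    s = [1 if random.random() < phi(U[i]) else 0 for i in range(N)]
    for i in range(N):
        if s[i]:
            counts[i] += 1
            U[i] = 0.0   # variable-length memory: the past is erased at a spike
        else:
            U[i] += sum(W[j][i] * s[j] for j in range(N))

print([c / T for c in counts])   # empirical spiking rates
```

Note that the process is non-Markovian in the spike variables alone: the transition probability at time t depends on the whole spiking history back to each neuron's last spike.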

Modelling intracellular competition for calcium: kinetic and thermodynamic control of different molecular modes of signal decoding

Gabriela Antunes, Antonio C. Roque, Fabio M. Simoes de Souza

Frequently, a common chemical entity triggers opposite cellular processes, which implies that the components of signalling networks must detect signals not only through their chemical natures, but also through their dynamic properties. To gain insight into the mechanisms of discrimination of the dynamic properties of cellular signals, we developed a computational stochastic model and investigated how three calcium ion (Ca2+)-dependent enzymes (adenylyl cyclase (AC), phosphodiesterase 1 (PDE1), and calcineurin (CaN)) differentially detect Ca2+ transients in a hippocampal dendritic spine. The balance among AC, PDE1 and CaN might determine the occurrence of opposite Ca2+-induced forms of synaptic plasticity, long-term potentiation (LTP) and long-term depression (LTD). CaN is essential for LTD. AC and PDE1 regulate, indirectly, protein kinase A, which counteracts CaN during LTP. Stimulations of AC, PDE1 and CaN with artificial and physiological Ca2+ signals demonstrated that AC and CaN have Ca2+ requirements modulated dynamically by different properties of the signals used to stimulate them, because their interactions with Ca2+ often occur under kinetic control. In contrast, PDE1 responds to the immediate amplitude of different Ca2+ transients, usually with the same Ca2+ requirements observed under steady state. Therefore, AC, PDE1 and CaN decode different dynamic properties of Ca2+ signals.
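The distinction between kinetic and thermodynamic (equilibrium) control can be caricatured with two first-order binding schemes (all rate constants and signal shapes below are invented for illustration, not fitted to AC, PDE1 or CaN): a fast enzyme equilibrates with the instantaneous Ca2+ amplitude, while a slow enzyme integrates the signal and therefore discriminates duration.

```python
def activation(ca_signal, kon, koff, dt=0.001):
    """Euler integration of dA/dt = kon*Ca*(1-A) - koff*A; returns peak activation."""
    a, peak = 0.0, 0.0
    for ca in ca_signal:
        a += (kon * ca * (1.0 - a) - koff * a) * dt
        peak = max(peak, a)
    return peak

def pulse(amp, dur, total=2.0, dt=0.001):
    """Square Ca2+ transient of given amplitude and duration."""
    return [amp if t * dt < dur else 0.0 for t in range(int(total / dt))]

brief_high = pulse(amp=5.0, dur=0.1)  # brief, high-amplitude transient
long_low = pulse(amp=1.0, dur=1.0)    # prolonged, low-amplitude transient

# Fast enzyme (thermodynamic control): tracks instantaneous amplitude.
fast_brief = activation(brief_high, kon=50.0, koff=50.0)
fast_long = activation(long_low, kon=50.0, koff=50.0)
# Slow enzyme (kinetic control): integrates, so duration matters.
slow_brief = activation(brief_high, kon=1.0, koff=0.5)
slow_long = activation(long_low, kon=1.0, koff=0.5)

print(fast_brief, fast_long)   # amplitude decides for the fast enzyme
print(slow_brief, slow_long)   # duration decides for the slow enzyme
```

With these toy rates, the fast enzyme's peak is larger for the brief high-amplitude pulse, while the slow enzyme's peak is larger for the long low-amplitude pulse — the same Ca2+ species, decoded along different dynamic properties.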

On Sequence Learning Models: Open-loop Control Not Strictly Guided by Hick’s Law

Rodrigo Pavão, Joice P. Savietto, João R. Sato, Gilberto F. Xavier, André F. Helene

According to Hick's law, reaction times increase linearly with the uncertainty of target stimuli. We tested the generality of this law by measuring reaction times in a human sequence learning protocol involving serial target locations which differed in transition probability and global entropy. Our results showed that sigmoid functions better describe the relationship between reaction times and uncertainty when compared to linear functions. Sequence predictability was estimated by distinct statistical predictors: conditional probability, conditional entropy, joint probability and joint entropy measures. Conditional predictors relate to closed-loop control models, in which performance is guided by on-line access to the past sequence structure to predict the next location. In contrast, joint predictors relate to open-loop control models, which assume global access to sequence structure and require no constant monitoring. We tested which of these predictors better describe performance on the sequence learning protocol. Results suggest that joint predictors track performance more accurately than conditional predictors. In conclusion, sequence learning is better described as an open-loop process that is not precisely predicted by Hick's law.
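Both families of predictors can be computed from bigram counts of a location sequence; on empirical counts the chain rule H(prev, next) = H(prev) + H(next | prev) holds exactly. The transition matrix below is a made-up example, not the experimental protocol.

```python
import math
import random
from collections import Counter

random.seed(1)

# Made-up 4-location sequence with unequal transition probabilities.
trans = {0: [0.7, 0.1, 0.1, 0.1], 1: [0.1, 0.7, 0.1, 0.1],
         2: [0.25, 0.25, 0.25, 0.25], 3: [0.4, 0.4, 0.1, 0.1]}
seq = [0]
for _ in range(5000):
    seq.append(random.choices(range(4), weights=trans[seq[-1]])[0])

pairs = list(zip(seq, seq[1:]))
n = len(pairs)
joint = Counter(pairs)                 # counts of (prev, next) bigrams
prev = Counter(a for a, _ in pairs)    # counts of the preceding location

def entropy(counts, total):
    return -sum(c / total * math.log2(c / total) for c in counts.values())

h_joint = entropy(joint, n)            # joint predictor: H(prev, next)
h_prev = entropy(prev, n)
h_cond = sum(                          # conditional predictor: H(next | prev)
    prev[a] / n * entropy(
        Counter({b: c for (x, b), c in joint.items() if x == a}), prev[a])
    for a in prev)

# Per-transition surprisals, usable as regressors for reaction times.
joint_surprisal = [-math.log2(joint[p] / n) for p in pairs]
cond_surprisal = [-math.log2(joint[p] / prev[p[0]]) for p in pairs]
print(h_joint, h_prev, h_cond)
```

The surprisal lists give one predictor value per trial, so either family can be regressed against observed reaction times to compare closed-loop and open-loop accounts.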

Mechanisms of self-sustained oscillatory states in hierarchical modular networks with mixtures of electrophysiological cell types

Petar Tomov; Rodrigo F. Pena; Antonio C. Roque; Michael A. Zaks

In a network with a mixture of different electrophysiological types of neurons linked by excitatory and inhibitory connections, temporal evolution leads through repeated epochs of intensive global activity separated by intervals with a low activity level. This behavior mimics "up" and "down" states, experimentally observed in cortical tissues in the absence of external stimuli. We interpret global dynamical features in terms of the individual dynamics of the neurons. In particular, we observe that the crucial role both in the interruption and in the resumption of global activity is played by the distributions of the membrane recovery variable within the network. We also demonstrate that the behavior of neurons is more influenced by their presynaptic environment in the network than by their formal types, assigned in accordance with their response to constant current.
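The "membrane recovery variable" is the u of the Izhikevich model, and the formal type of a neuron is set by its parameters. A single-neuron sketch (standard textbook parameter sets, not the paper's network) shows how two types respond differently to the same constant current:

```python
def izhikevich(a, b, c, d, current=10.0, t_max=1000.0, dt=0.25):
    """Count spikes of a single Izhikevich neuron under constant input.
    v: membrane potential (mV); u: membrane recovery variable."""
    v, u, spikes = -65.0, b * -65.0, 0
    for _ in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + current)
        u += dt * a * (b * v - u)
        if v >= 30.0:   # spike: reset the membrane, bump the recovery variable
            v, u, spikes = c, u + d, spikes + 1
    return spikes

rs = izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0)   # regular spiking type
ch = izhikevich(a=0.02, b=0.2, c=-50.0, d=2.0)   # chattering (bursting) type
print(rs, ch)   # same input current, different firing patterns
```

In the network setting of the paper, however, the synaptic input replaces the constant current, which is why the presynaptic environment can override such formal type distinctions.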

A Stochastic System with Infinite Interacting Components to Model the Time Evolution of the Membrane Potentials of a Population of Neurons

K. Yaginuma

We consider a new class of interacting particle systems with a countable number of interacting components. The system represents the time evolution of the membrane potentials of an infinite set of interacting neurons. We prove the existence and uniqueness of the process using a perfect simulation procedure. We show that this algorithm is successful, that is, that its number of steps is almost surely finite. We also construct a perfect simulation procedure for the coupling of a process with a finite number of neurons and the process with an infinite number of neurons. As a consequence, we obtain an upper bound for the error that we make when sampling from a finite set of neurons instead of the infinite set of neurons.

A note on supersaturated set systems

Peter Frankl, Yoshiharu Kohayakawa, Vojtěch Rödl

A well-known theorem of Erdős, Ko and Rado implies that any family ℱ of k-element subsets of an n-element set with more than \binom{n-t}{k-t} members must contain two members F and F' with |F∩F'| < t, as long as n is sufficiently large with respect to k and t. We investigate how many such pairs (F,F') ∈ ℱ×ℱ there must be in any such family ℱ with |ℱ| ≥ α\binom{n-t}{k-t} and α > 1.
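For small parameters the supersaturation count can be checked by brute force. The two families below (all triples containing a fixed pair, versus all triples) are just an illustrative instance chosen for this sketch.

```python
from itertools import combinations

def pairs_below_t(family, t):
    """Number of ordered pairs (F, F') with |F ∩ F'| < t."""
    return 2 * sum(1 for F, G in combinations(family, 2)
                   if len(set(F) & set(G)) < t)

n, k, t = 8, 3, 2
all_triples = list(combinations(range(n), k))
# Extremal 2-intersecting family: all triples containing the fixed pair {0, 1}.
fixed_pair = [F for F in all_triples if {0, 1} <= set(F)]

print(len(fixed_pair), pairs_below_t(fixed_pair, t))    # 6 sets, 0 such pairs
print(len(all_triples), pairs_below_t(all_triples, t))  # larger family forces many pairs
```

The fixed-pair family attains size n-2 with no pair intersecting in fewer than t elements, while any substantially larger family necessarily produces such pairs — the supersaturation phenomenon the abstract quantifies.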

Absolute continuity of the invariant measure in Piecewise Deterministic Markov Processes having degenerate jumps

Eva Löcherbach

We consider piecewise deterministic Markov processes with degenerate transition kernels of the "house-of-cards"-type. We use a splitting scheme based on jump times to prove the absolute continuity, as well as some regularity, of the invariant measure of the process. Finally, we obtain finer results on the regularity of the one-dimensional marginals of the invariant measure, using integration by parts with respect to the jump times.




The Research, Innovation and Dissemination Center for Neuromathematics is hosted by the Universidade de São Paulo and funded by FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo).

