Publications

The role of negative conductances in neuronal subthreshold properties and synaptic integration

Cesar C. Ceballos, Antonio C. Roque and Ricardo M. Leão

Based on passive cable theory, an increase in membrane conductance produces a decrease in the membrane time constant and input resistance. Unlike classical leak currents, voltage-dependent currents behave nonlinearly and can create regions of negative conductance despite the increase in membrane conductance (permeability). This negative conductance opposes the effects of the passive membrane conductance on the membrane input resistance and time constant, increasing their values and thereby substantially affecting the amplitude and time course of postsynaptic potentials in the voltage range of the negative conductance. This paradoxical effect has been described for three types of voltage-dependent inward currents: persistent sodium currents, L- and T-type calcium currents, and ligand-gated glutamatergic N-methyl-D-aspartate currents. In this review, we describe the impact of the negative conductance region created by these currents on neuronal membrane properties and synaptic integration. We also discuss recent contributions of the quasi-active cable approximation, an extension of passive cable theory that incorporates voltage-dependent currents, to the study of neuronal subthreshold properties.
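
As a rough illustration (ours, not the paper's), the effect can be seen in a single-compartment membrane: if a voltage-dependent current contributes a negative slope conductance at the operating voltage, the small-signal input resistance and effective time constant both increase even though total membrane permeability has gone up.

```latex
% Small-signal input resistance and effective time constant of a single
% compartment with capacitance C, leak conductance g_L, and an additional
% slope conductance g_neg < 0 contributed by a voltage-dependent current.
R_\mathrm{in} = \frac{1}{g_L + g_\mathrm{neg}}, \qquad
\tau_\mathrm{eff} = \frac{C}{g_L + g_\mathrm{neg}}.
% Since g_neg < 0, both quantities exceed the passive values 1/g_L and C/g_L,
% so postsynaptic potentials become larger and slower in the voltage range
% where the negative conductance operates.
```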

Variability in functional brain networks predicts expertise during action observation

Amoruso L, Ibáñez A, Fonseca B, Gadea S, Sedeño L, Sigman M, García AM, Fraiman R, Fraiman D.

Observing an action performed by another individual activates, in the observer, circuits similar to those involved in the actual execution of that action. This activation is modulated by prior experience; indeed, sustained training in a particular motor domain leads to structural and functional changes in critical brain areas. Here, we capitalized on a novel graph-theory approach to electroencephalographic data (Fraiman et al., 2016) to test whether variability in the functional brain networks implicated in Tango observation can discriminate between groups differing in their level of expertise. We found that experts and beginners differed significantly in the functional organization of task-relevant networks. Specifically, networks in expert Tango dancers exhibited less variability and a more robust functional architecture. Notably, these expertise-dependent effects were captured within networks derived from electrophysiological brain activity recorded in a very short time window (2 s). In brief, variability in the organization of task-related networks seems to be a highly sensitive indicator of long-lasting training effects. This finding opens new methodological and theoretical windows to explore the impact of domain-specific expertise on brain plasticity, while highlighting variability as a fruitful measure in neuroimaging research.
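
A minimal sketch of how per-window functional networks and their variability could be computed from multichannel EEG is given below. This is a generic correlation-threshold pipeline, not the specific graph-theory method of Fraiman et al. (2016); the function names, the threshold, and the Hamming-distance variability measure are all illustrative assumptions.

```python
import numpy as np

def window_networks(eeg, fs, win_s=2.0, thresh=0.7):
    """Build one binary functional network per non-overlapping window.

    eeg: array of shape (n_channels, n_samples); fs: sampling rate (Hz).
    Edges connect channel pairs whose absolute Pearson correlation within
    the window exceeds `thresh`. Illustrative only.
    """
    win = int(win_s * fs)
    nets = []
    for start in range(0, eeg.shape[1] - win + 1, win):
        seg = eeg[:, start:start + win]
        corr = np.corrcoef(seg)
        adj = (np.abs(corr) > thresh).astype(int)
        np.fill_diagonal(adj, 0)
        nets.append(adj)
    return np.array(nets)

def network_variability(nets):
    """Mean pairwise Hamming distance between adjacency matrices:
    one crude way to quantify how much the network changes across windows."""
    n = len(nets)
    dists = [np.sum(nets[i] != nets[j])
             for i in range(n) for j in range(i + 1, n)]
    return np.mean(dists) if dists else 0.0

# Example with synthetic data (64 channels, 60 s at 250 Hz):
eeg = np.random.randn(64, 60 * 250)
nets = window_networks(eeg, fs=250)
print(network_variability(nets))
```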

On Sequence Learning Models: Open-loop Control Not Strictly Guided by Hick’s Law

Rodrigo Pavão, Joice P. Savietto, João R. Sato, Gilberto F. Xavier and André F. Helene

According to Hick's law, reaction times increase linearly with the uncertainty of target stimuli. We tested the generality of this law by measuring reaction times in a human sequence learning protocol involving serial target locations that differed in transition probability and global entropy. Our results showed that sigmoid functions describe the relationship between reaction times and uncertainty better than linear functions. Sequence predictability was estimated by distinct statistical predictors: conditional probability, conditional entropy, joint probability and joint entropy measures. Conditional predictors relate to closed-loop control models, in which performance is guided by on-line access to the past sequence structure to predict the next location. In contrast, joint predictors relate to open-loop control models, which assume global access to the sequence structure and require no constant monitoring. We tested which of these predictors better describes performance in the sequence learning protocol. The results suggest that joint predictors track performance more accurately than conditional predictors. In conclusion, sequence learning is better described as an open-loop process that is not precisely predicted by Hick's law.
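
For reference, Hick's law and the sigmoid alternative tested here can be written schematically as follows; the particular sigmoid parameterization is an illustration and not necessarily the one fitted by the authors.

```latex
% Classical Hick's law: reaction time grows linearly with stimulus uncertainty,
% e.g. with the entropy H (in bits) of the set of possible targets.
RT = a + b\,H, \qquad H = \log_2 n \ \text{ for } n \text{ equiprobable alternatives.}
% Sigmoid alternative (illustrative parameterization): reaction time saturates
% at low and high predictability instead of growing linearly with H.
RT = RT_{\min} + \frac{RT_{\max} - RT_{\min}}{1 + e^{-k\,(H - H_0)}}.
```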

Nonparametric statistics of dynamic networks with distinguishable nodes

Daniel Fraiman, Nicolas Fraiman and Ricardo Fraiman

The study of random graphs and networks has undergone explosive development over the last couple of decades. Meanwhile, techniques for the statistical analysis of sequences of networks have been less developed. In this paper, we focus on network sequences with a fixed number of labeled nodes and study some statistical problems in a nonparametric framework. We introduce natural notions of center and a depth function for networks that evolve in time. We develop several statistical techniques, including testing, supervised and unsupervised classification, and some notions of principal component sets in the space of networks. Examples and asymptotic results are given, together with two real-data applications.
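
One very simple way to make the notions of center and depth concrete for a sequence of labeled networks is sketched below. The edge-wise majority center and the distance-based depth used here are illustrative assumptions only; the paper develops these notions rigorously and in much greater generality.

```python
import numpy as np

def center_network(adjs):
    """Edge-wise majority network: an edge is in the center iff it appears
    in more than half of the observed networks (one simple notion of center)."""
    adjs = np.asarray(adjs)                    # shape (T, n, n), binary
    return (adjs.mean(axis=0) > 0.5).astype(int)

def depth(g, adjs):
    """Crude depth of network g within the sample: one minus its normalized
    mean Hamming distance to the observed networks (deeper = more central)."""
    adjs = np.asarray(adjs)
    n_entries = g.shape[0] * (g.shape[0] - 1)
    dists = np.array([np.sum(g != a) for a in adjs]) / n_entries
    return 1.0 - dists.mean()

# Example: a sequence of 100 noisy versions of a fixed 10-node network.
rng = np.random.default_rng(0)
base = (rng.random((10, 10)) > 0.7).astype(int)
seq = [(base ^ (rng.random((10, 10)) < 0.05)).astype(int) for _ in range(100)]
c = center_network(seq)
print(depth(c, seq), depth(seq[0], seq))
```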

Ih Equalizes Membrane Input Resistance in a Heterogeneous Population of Fusiform Neurons in the Dorsal Cochlear Nucleus

Cesar C. Ceballos, Shuang Li, Antonio C. Roque, Thanos Tzounopoulos and Ricardo M. Leão

In a neuronal population, several combinations of ionic conductances can be used to attain a specific firing phenotype. Some neurons show heterogeneous firing, generally produced by the expression of a specific conductance, but how additional conductances covary in order to homeostatically regulate membrane excitability is less well understood. The principal neurons of the dorsal cochlear nucleus, fusiform neurons, display heterogeneous spontaneous action potential activity and thus represent an appropriate model for studying the role of different conductances in establishing firing heterogeneity. In particular, fusiform neurons are divided into quiet neurons, which show no spontaneous firing, and active neurons, which fire spontaneously and regularly. These modes are determined by the expression levels of an intrinsic membrane conductance, the inwardly rectifying potassium current (IKir). In this work, we tested whether other subthreshold conductances vary homeostatically to keep membrane excitability constant across the two subtypes. We found that Ih expression covaries specifically with IKir so as to keep membrane resistance constant. The impact of Ih on membrane resistance depends on the level of IKir expression, being much smaller in quiet neurons, which have larger IKir, but variations in Ih are not relevant for creating the quiet and active phenotypes. Finally, we demonstrate that the relative proportion of each conductance, and not its absolute magnitude, determines the neuronal firing mode. We conclude that in fusiform neurons variation in the different subthreshold conductances is restricted to specific conductances in order to create firing heterogeneity while maintaining membrane homeostasis.
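
A back-of-the-envelope relation (ours, not a calculation from the paper) helps to see both why conductances must covary to keep input resistance constant and why the impact of Ih is smaller when IKir is large:

```latex
% Subthreshold input resistance as the inverse of the summed slope conductances
R_\mathrm{in} \approx \frac{1}{g_\mathrm{Kir} + g_h + g_\mathrm{leak}},
\qquad
\frac{\partial R_\mathrm{in}}{\partial g_h}
  = -\frac{1}{\left(g_\mathrm{Kir} + g_h + g_\mathrm{leak}\right)^{2}}.
% Keeping R_in constant across neurons requires that variation in one
% conductance be offset by the others, and the partial derivative shows why
% a given change in g_h has a much smaller effect on R_in in neurons whose
% total conductance (e.g. large g_Kir) is already high.
```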

Multi-class oscillating systems of interacting neurons

Susanne Ditlevsen and Eva Löcherbach

We consider multi-class systems of interacting nonlinear Hawkes processes modeling several large families of neurons and study their mean field limits. As the total number of neurons goes to infinity, we prove that the evolution within each class can be described by a nonlinear limit differential equation driven by a Poisson random measure, and state associated central limit theorems. We study situations in which the limit system exhibits oscillatory behavior, and relate the results to certain piecewise deterministic Markov processes and their diffusion approximations.
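
To fix ideas, a minimal single-class simulation in the spirit of such interacting nonlinear Hawkes networks might look as follows; the rate function, kernel, and parameters are illustrative assumptions, and the multi-class, mean-field-limit analysis of the paper is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (assumptions, not from the paper)
N, T, dt = 200, 20.0, 1e-3      # neurons, total time (s), time step (s)
alpha, J = 5.0, 1.0             # memory-kernel decay rate, synaptic weight
f = lambda x: 10.0 / (1 + np.exp(-(x - 1.0)))   # nonlinear rate function (Hz)

steps = int(T / dt)
x = np.zeros(N)                 # each neuron's exponentially filtered input
rates = np.zeros(steps)

for t in range(steps):
    lam = f(x)                              # intensity of each neuron
    spikes = rng.random(N) < lam * dt       # Bernoulli approximation of Poisson
    # Mean-field-style interaction: every spike increments everyone's input
    x += -alpha * x * dt + J * spikes.sum() / N
    rates[t] = spikes.mean() / dt           # instantaneous population rate (Hz)

print("mean population rate (Hz):", rates.mean())
```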

The maximum size of a non-trivial intersecting uniform family that is not a subfamily of the Hilton-Milner family

Jie Han and Yoshiharu Kohayakawa

The celebrated Erdős–Ko–Rado theorem determines the maximum size of a k-uniform intersecting family. The Hilton–Milner theorem determines the maximum size of a k-uniform intersecting family that is not a subfamily of the so-called Erdős–Ko–Rado family. In turn, it is natural to ask for the maximum size of an intersecting k-uniform family that is a subfamily of neither the Erdős–Ko–Rado family nor the Hilton–Milner family. For k ≥ 4, this was solved (implicitly) in the same 1967 paper of Hilton and Milner. We give a different and simpler proof, based on the shifting method, which allows us to settle all cases k ≥ 3 and to characterize all families attaining the extremal value.
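
For context, the two classical bounds referred to above are (standard statements, included here for the reader's convenience):

```latex
% Extremal values for intersecting families F of k-subsets of [n]:
% Erdős–Ko–Rado (n >= 2k): the maximum size of an intersecting family is
|\mathcal{F}| \le \binom{n-1}{k-1},
% attained by a "star" (all k-sets containing a fixed element).
% Hilton–Milner (n > 2k): if the family is intersecting but not a star, then
|\mathcal{F}| \le \binom{n-1}{k-1} - \binom{n-k-1}{k-1} + 1.
% The paper determines the next value in this hierarchy: the maximum size of
% an intersecting family contained in neither a star nor a Hilton–Milner family.
```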

Self-Organized Supercriticality and Oscillations in Networks of Stochastic Spiking Neurons

Ariadne A. Costa, Ludmila Brochini and Osame Kinouchi

Networks of stochastic spiking neurons are interesting models in theoretical neuroscience, presenting both continuous and discontinuous phase transitions. Here we study fully connected networks analytically, numerically and through computational simulations. The neurons have dynamic gains that enable the network to converge to a stationary, slightly supercritical state (self-organized supercriticality, or SOSC) in the presence of the continuous transition. We show that SOSC, which presents power laws for neuronal avalanches plus some large events, is robust as a function of the main parameter of the neuronal gain dynamics. We discuss possible applications of the idea of SOSC to biological phenomena such as epilepsy and dragon-king avalanches. We also find that neuronal gains can produce collective oscillations that coexist with neuronal avalanches, at frequencies compatible with characteristic brain rhythms.
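
A minimal sketch of a fully connected network of stochastic spiking neurons with adaptive gains, in the general spirit of the model studied here, is given below; the firing function, gain-adaptation rule, and parameter values are illustrative assumptions rather than the authors' exact equations.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative parameters (assumptions, not the paper's exact values)
N, steps = 1000, 5000
W = 1.0                  # mean synaptic weight (fully connected, W/N per pair)
mu = 0.0                 # leak factor (0 = no memory of the previous potential)
tau, u = 1000.0, 0.1     # slow gain recovery time and spike-triggered gain drop

V = rng.random(N)        # membrane potentials (random initial kick)
gain = np.ones(N)        # per-neuron gains
activity = np.zeros(steps)

for t in range(steps):
    # Piecewise-linear firing probability: Phi(V) = clip(gain * V, 0, 1)
    p = np.clip(gain * V, 0.0, 1.0)
    X = (rng.random(N) < p).astype(float)         # stochastic spikes
    # Voltage update: reset after a spike, otherwise leak plus mean-field input
    V = (1 - X) * (mu * V + W * X.sum() / N)
    # Dynamic gains: slow recovery, drop after each spike (self-organization)
    gain = gain + gain / tau - u * gain * X
    activity[t] = X.mean()

print("mean fraction of neurons firing per step:", activity.mean())
```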

Electrophysiological Evidence That the Retrosplenial Cortex Displays a Strong and Specific Activation Phased with Hippocampal Theta during Paradoxical (REM) Sleep

Bruna Del Vechio Koike, Kelly Soares Farias, Francesca Billwiller, Daniel Almeida-Filho, Paul-Antoine Libourel, Alix Tiran-Cappello, Régis Parmentier, Wilfredo Blanco, Sidarta Ribeiro, Pierre-Herve Luppi and Claudio Marcos Queiroz

It is widely accepted that cortical neurons are similarly activated during waking and paradoxical sleep (PS; also known as REM sleep), and more so than during slow-wave sleep (SWS). However, we recently reported, using Fos labeling, that only a few limbic cortical structures, including the retrosplenial cortex (RSC) and anterior cingulate cortex (ACA), contain a large number of neurons activated during PS hypersomnia. Our aim in the present study was to record local field potentials and unit activity from these two structures across all vigilance states in freely moving male rats, to determine whether the RSC and the ACA are specifically active electrophysiologically during basal PS episodes. We found that theta power was significantly higher during PS than during active waking (aWK) in the RSC and hippocampus (HPC) alike, but not in the ACA. Phase–amplitude coupling between HPC theta and gamma oscillations strongly and specifically increased in the RSC during PS compared with aWK; it did not occur in the ACA. Further, 68% and 43% of the units recorded in the RSC and ACA, respectively, were significantly more active during PS than during aWK and SWS. In addition, neuronal discharge of RSC, but not of ACA, neurons increased just after the peak of the hippocampal theta wave. Our results show for the first time that RSC neurons display enhanced spiking in synchrony with theta specifically during PS. We propose that activation of RSC neurons specifically during PS may play a role in the offline consolidation of spatial memories and in the generation of vivid perceptual scenery during dreaming.
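
Phase–amplitude coupling of the kind reported here is commonly quantified with a modulation-index-style measure; the sketch below, using the Hilbert transform and a mean-vector-length index, is a generic illustration rather than the authors' exact analysis pipeline, and the filter bands are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def pac_mvl(phase_sig, amp_sig, fs, phase_band=(5, 10), amp_band=(30, 80)):
    """Mean-vector-length phase-amplitude coupling (Canolty-style index):
    how strongly gamma amplitude in one signal is modulated by theta phase
    in another. Generic illustration, not the paper's exact pipeline."""
    theta_phase = np.angle(hilbert(bandpass(phase_sig, *phase_band, fs)))
    gamma_amp = np.abs(hilbert(bandpass(amp_sig, *amp_band, fs)))
    return np.abs(np.mean(gamma_amp * np.exp(1j * theta_phase)))

# Example with synthetic coupled data (gamma bursts locked to theta peaks):
fs, t = 1000, np.arange(0, 10, 1 / 1000)
theta = np.sin(2 * np.pi * 7 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 60 * t) * 0.3
hpc = theta + 0.1 * np.random.randn(t.size)      # "HPC" channel (theta phase)
rsc = gamma + 0.1 * np.random.randn(t.size)      # "RSC" channel (gamma amplitude)
print(pac_mvl(hpc, rsc, fs))
```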

Hawkes processes with variable length memory and an infinite number of components

Pierre Hodara and Eva Löcherbach

In this paper we propose a model for biological neural nets in which the activity of the network is described by Hawkes processes with variable length memory. The particularity of this paper is that we deal with an infinite number of components. We propose a graphical construction of the process and build, by means of a perfect simulation algorithm, a stationary version of the process. To implement this algorithm, we make use of a Kalikow-type decomposition technique. Two models are described. In the first, we associate to each edge of the interaction graph a saturation threshold that controls the influence of one neuron on another. In the second, we impose a structure on the interaction graph that leads to a cascade of spike trains. Such structures, in which neurons are divided into layers, can be found in the retina.
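
Schematically, the variable-length-memory structure means that a neuron's firing intensity depends only on the spikes it has received since its own last spike; one way to write such an intensity with per-edge saturation thresholds is sketched below (the precise definitions and assumptions are those given in the paper, not this sketch).

```latex
% Firing intensity of neuron i at time t, depending only on the spikes
% received since its own last spike time L_i(t) (variable length memory):
\lambda_i(t) \;=\; \varphi_i\!\Big(\sum_{j \to i}
      W_{j\to i}\, \min\!\big(K_{j\to i},\, Z_j(L_i(t),\,t)\big)\Big),
% where Z_j(L_i(t), t) counts the spikes of neuron j in (L_i(t), t],
% W_{j->i} is a synaptic weight, K_{j->i} is the per-edge saturation
% threshold, and \varphi_i is a nonlinear rate function.
```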
