Publications

Packing arborescences in random digraphs

Carlos Hoppen, Roberto F. Parente and Cristiane M. Sato

We study the problem of packing arborescences in the random digraph D(n,p), where each possible arc is included independently at random with probability p = p(n). Let λ(D(n,p)) denote the largest integer λ ≥ 0 such that, for all 0 ≤ ℓ ≤ λ, we have $\sum_{i=0}^{\ell-1} (\ell - i)\,\bigl|\{v : d^{\mathrm{in}}(v) = i\}\bigr| \le \ell$. We show that the maximum number of arc-disjoint arborescences in D(n,p) is λ(D(n,p)) a.a.s. We also give tight estimates for λ(D(n,p)) depending on the range of p.
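Since the defining condition above depends only on the in-degree counts of the digraph, the parameter λ can be evaluated directly from those counts. The minimal Python sketch below does so for a digraph given as an arc list; the function name compute_lambda and the search cap at the number of arcs are illustrative choices, not taken from the paper.

from collections import Counter

def compute_lambda(n, arcs):
    # Largest integer lam >= 0 such that, for every 1 <= l <= lam,
    #   sum_{i=0}^{l-1} (l - i) * #{v : indeg(v) = i}  <=  l
    # (the condition is vacuous for l = 0).
    indeg = Counter(v for (_, v) in arcs)         # in-degree of each vertex
    count = Counter(indeg[v] for v in range(n))   # count[i] = #{v : indeg(v) = i}
    lam = 0
    for l in range(1, len(arcs) + 1):             # cap the search at the number of arcs (sketch simplification)
        if sum((l - i) * count[i] for i in range(l)) <= l:
            lam = l
        else:
            break
    return lam

# Toy example: the directed triangle 0 -> 1 -> 2 -> 0 (every in-degree equals 1)
print(compute_lambda(3, [(0, 1), (1, 2), (2, 0)]))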

Thought disorder measured as random speech structure classifies negative symptoms and schizophrenia diagnosis 6 months in advance

Natália B. Mota, Mauro Copelli and Sidarta Ribeiro

In chronic psychotic patients, word graph analysis shows potential as a complementary psychiatric assessment. This analysis relies mostly on connectedness, a structural feature of speech that is anti-correlated with negative symptoms. Here we aimed to verify whether speech disorganization during the first clinical contact, as measured by graph connectedness, can correctly classify negative symptoms and the schizophrenia diagnosis 6 months in advance. Positive and negative syndrome scale scores and memory reports were collected from 21 patients undergoing first clinical contact for recent-onset psychosis, followed for 6 months to establish diagnosis, and compared to 21 well-matched healthy subjects. Each report was represented as a word-trajectory graph. Connectedness was measured by the number of edges, the number of nodes in the largest connected component and the number of nodes in the largest strongly connected component. Similarities to random graphs were estimated. All connectedness attributes were combined into a single Disorganization Index weighted by the correlation with the positive and negative syndrome scale negative subscale, and used for classifications. Random-like connectedness was more prevalent among schizophrenia patients (64% vs. 5% in the Control group, p = 0.0002). Connectedness from two kinds of memory reports (dream and negative image) explained 88% of the variance in negative symptoms (p < 0.0001). The Disorganization Index classified low vs. high severity of negative symptoms with 100% accuracy (area under the receiver operating characteristic curve = 1), and schizophrenia diagnosis with 91.67% accuracy (area under the receiver operating characteristic curve = 0.85). The index was validated in an independent cohort of chronic psychotic patients and controls (N = 60), with 85% accuracy. Thus, speech disorganization during the first clinical contact correlates tightly with negative symptoms, and is quite discriminative of the schizophrenia diagnosis.
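To illustrate the connectedness attributes mentioned above, the sketch below builds a word-trajectory graph from a transcribed report (each word is a node, each transition between consecutive words is a directed edge) and computes the three measures with networkx. The function name and the toy report are ours; this is not the authors' original implementation.

import networkx as nx

def speech_graph_measures(words):
    # Word-trajectory graph: nodes are words, directed edges are
    # transitions between consecutive words in the report.
    g = nx.DiGraph()
    g.add_edges_from(zip(words, words[1:]))
    edges = g.number_of_edges()
    lcc = max(len(c) for c in nx.weakly_connected_components(g))    # largest connected component
    lsc = max(len(c) for c in nx.strongly_connected_components(g))  # largest strongly connected component
    return edges, lcc, lsc

report = "i dreamed i was walking and walking in a long corridor".split()
print(speech_graph_measures(report))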

The role of negative conductances in neuronal subthreshold properties and synaptic integration

Cesar C. Ceballos, Antonio C. Roque and Ricardo M. Leão

Based on passive cable theory, an increase in membrane conductance produces a decrease in the membrane time constant and input resistance. Unlike classical leak currents, voltage-dependent currents exhibit nonlinear behavior that can create regions of negative conductance, despite the increase in membrane conductance (permeability). This negative conductance opposes the effects of the passive membrane conductance on the membrane input resistance and time constant, increasing their values and thereby substantially affecting the amplitude and time course of postsynaptic potentials at the voltage range of the negative conductance. This paradoxical effect has been described for three types of voltage-dependent inward currents: persistent sodium currents, L- and T-type calcium currents and ligand-gated glutamatergic N-methyl-D-aspartate currents. In this review, we describe the impact of the creation of a negative conductance region by these currents on neuronal membrane properties and synaptic integration. We also discuss recent contributions of the quasi-active cable approximation, an extension of the passive cable theory that includes voltage-dependent currents, and its effects on neuronal subthreshold properties.
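As a schematic illustration of the effect described above (a standard small-signal approximation in our own notation, not a formula quoted from the review): for a membrane with capacitance $C_m$, leak conductance $g_{\mathrm{leak}}$ and a voltage-dependent current $I_V(V)$ taken positive outward, the input resistance and effective time constant are approximately

\[
R_{\mathrm{in}} \approx \frac{1}{g_{\mathrm{leak}} + \dfrac{\partial I_V}{\partial V}},
\qquad
\tau_{\mathrm{eff}} \approx \frac{C_m}{g_{\mathrm{leak}} + \dfrac{\partial I_V}{\partial V}},
\]

so an inward current whose slope conductance $\partial I_V/\partial V$ is negative reduces the total slope conductance and thereby increases both the input resistance and the effective time constant, even though the membrane permeability itself has increased.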

Variability in functional brain networks predicts expertise during action observation

Amoruso L, Ibáñez A, Fonseca B, Gadea S, Sedeño L, Sigman M, García AM, Fraiman R, Fraiman D.

Observing an action performed by another individual activates, in the observer, circuits similar to those involved in the actual execution of that action. This activation is modulated by prior experience; indeed, sustained training in a particular motor domain leads to structural and functional changes in critical brain areas. Here, we capitalized on a novel graph-theory approach to electroencephalographic data (Fraiman et al., 2016) to test whether variability in the functional brain networks implicated in Tango observation can discriminate between groups differing in their level of expertise. We found that experts and beginners significantly differed in the functional organization of task-relevant networks. Specifically, networks in expert Tango dancers exhibited less variability and a more robust functional architecture. Notably, these expertise-dependent effects were captured within networks derived from electrophysiological brain activity recorded in a very short time window (2 s). In brief, variability in the organization of task-related networks seems to be a highly sensitive indicator of long-lasting training effects. This finding opens new methodological and theoretical windows to explore the impact of domain-specific expertise on brain plasticity, while highlighting variability as a fruitful measure in neuroimaging research.

On Sequence Learning Models: Open-loop Control Not Strictly Guided by Hick’s Law

Rodrigo Pavão, Joice P. Savietto, João R. Sato, Gilberto F. Xavier and André F. Helene

According to Hick’s law, reaction times increase linearly with the uncertainty of target stimuli. We tested the generality of this law by measuring reaction times in a human sequence learning protocol involving serial target locations that differed in transition probability and global entropy. Our results showed that sigmoid functions describe the relationship between reaction times and uncertainty better than linear functions. Sequence predictability was estimated by distinct statistical predictors: conditional probability, conditional entropy, joint probability and joint entropy measures. Conditional predictors relate to closed-loop control models, in which performance is guided by on-line access to the past sequence structure to predict the next location. In contrast, joint predictors relate to open-loop control models, which assume global access to the sequence structure and require no constant monitoring. We tested which of these predictors better describes performance in the sequence learning protocol. Results suggest that joint predictors track performance more accurately than conditional predictors. In conclusion, sequence learning is better described as an open-loop process that is not precisely predicted by Hick’s law.
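For concreteness, the sketch below shows one standard way to estimate joint and conditional entropies from a symbolic sequence of target locations using k-gram counts; the estimator and the toy sequence are illustrative and not the authors' exact predictors.

import math
from collections import Counter

def joint_entropy(seq, k=2):
    # Entropy (in bits) of the overlapping k-grams of the sequence.
    grams = [tuple(seq[i:i + k]) for i in range(len(seq) - k + 1)]
    counts = Counter(grams)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def conditional_entropy(seq, k=2):
    # H(X_t | previous k-1 symbols) = H(k-gram) - H((k-1)-gram).
    return joint_entropy(seq, k) - joint_entropy(seq, k - 1)

seq = list("ABABABCABABABC" * 10)   # toy sequence of target locations
print(joint_entropy(seq, 2), conditional_entropy(seq, 2))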

Nonparametric statistics of dynamic networks with distinguishable nodes

Daniel Fraiman, Nicolas Fraiman and Ricardo Fraiman

The study of random graphs and networks has undergone explosive development over the last couple of decades. Meanwhile, techniques for the statistical analysis of sequences of networks have been less developed. In this paper, we focus on network sequences with a fixed number of labeled nodes and study some statistical problems in a nonparametric framework. We introduce natural notions of center and a depth function for networks that evolve in time. We develop several statistical techniques including testing, supervised and unsupervised classification, and some notions of principal component sets in the space of networks. Some examples and asymptotic results are given, as well as two real data examples.
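As one simple instantiation of these ideas (our own toy choices, not necessarily the center and depth functions proposed in the paper), the sketch below takes the edge-wise majority graph of a sample of labeled networks as a "center" and scores each network by its mean Hamming similarity to the sample.

import numpy as np

def sample_center(adjs):
    # Edge-wise majority graph of a sample of 0/1 adjacency matrices
    # defined on the same labeled node set.
    return (np.mean(adjs, axis=0) >= 0.5).astype(int)

def depth(a, adjs):
    # Depth of network a: mean Hamming similarity to the sample.
    return float(np.mean([1.0 - np.abs(a - b).mean() for b in adjs]))

rng = np.random.default_rng(0)
sample = [(rng.random((5, 5)) < 0.3).astype(int) for _ in range(20)]
center = sample_center(sample)
print(depth(center, sample))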

Ih Equalizes Membrane Input Resistance in a Heterogeneous Population of Fusiform Neurons in the Dorsal Cochlear Nucleus

Cesar C. Ceballos, Shuang Li, Antonio C. Roque, Thanos Tzounopoulos and Ricardo M. Leão

In a neuronal population, several combinations of ionic conductances can be used to attain a specific firing phenotype. Some neurons present heterogeneity in their firing, generally produced by the expression of a specific conductance, but how other conductances covary in order to homeostatically regulate membrane excitability is less well known. The principal neurons of the dorsal cochlear nucleus, fusiform neurons, display heterogeneous spontaneous action potential activity and thus represent an appropriate model to study the role of different conductances in establishing firing heterogeneity. In particular, fusiform neurons are divided into quiet neurons, with no spontaneous firing, and active neurons, which present spontaneous, regular firing. These modes are determined by the expression levels of an intrinsic membrane conductance, an inwardly rectifying potassium current (IKir). In this work, we tested whether other subthreshold conductances vary homeostatically to keep membrane excitability constant across the two subtypes. We found that Ih expression covaries specifically with IKir in order to keep membrane resistance constant. The impact of Ih on membrane resistance depends on the level of IKir expression, being much smaller in quiet neurons with larger IKir, but Ih variations are not relevant for creating the quiet and active phenotypes. Finally, we demonstrate that the relative proportion of each conductance, and not its absolute magnitude, is relevant for determining the neuronal firing mode. We conclude that in fusiform neurons, variation across the subthreshold conductances is restricted to specific conductances, creating firing heterogeneity while maintaining membrane homeostasis.

Multi-class oscillating systems of interacting neurons

Susanne Ditlevsen and Eva Löcherbach

We consider multi-class systems of interacting nonlinear Hawkes processes modeling several large families of neurons and study their mean field limits. As the total number of neurons goes to infinity we prove that the evolution within each class can be described by a nonlinear limit differential equation driven by a Poisson random measure, and state associated central limit theorems. We study situations in which the limit system exhibits oscillatory behavior, and relate the results to certain piecewise deterministic Markov processes and their diffusion approximations.
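Schematically (in our own notation, simplified from the standard multi-class nonlinear Hawkes setting rather than quoted from the paper), the spiking intensity of neuron $i$ in class $k$ takes the form

\[
\lambda^{k,i}_t \;=\; f_k\!\left(\sum_{l=1}^{K} \frac{1}{N_l} \sum_{j=1}^{N_l} \int_0^{t-} h_{kl}(t-s)\, \mathrm{d}Z^{l,j}_s\right),
\]

where $Z^{l,j}$ counts the spikes of neuron $j$ in class $l$, the $h_{kl}$ are interaction kernels, and $f_k$ is a nonlinear rate function; in the mean-field limit $N_l \to \infty$ the empirical averages are replaced by their limiting intensities.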

The maximum size of a non-trivial intersecting uniform family that is not a subfamily of the Hilton-Milner family

Jie Han and Yoshiharu Kohayakawa

The celebrated Erdős–Ko–Rado theorem determines the maximum size of a k-uniform intersecting family. The Hilton–Milner theorem determines the maximum size of a k-uniform intersecting family that is not a subfamily of the so-called Erdős–Ko–Rado family. In turn, it is natural to ask for the maximum size of an intersecting k-uniform family that is neither a subfamily of the Erdős–Ko–Rado family nor of the Hilton–Milner family. For k ≥ 4, this was solved (implicitly) in the same 1967 paper by Hilton and Milner. We give a different and simpler proof, based on the shifting method, which allows us to settle all cases k ≥ 3 and to characterize all extremal families attaining the extremal value.
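For context, the standard bounds being compared are as follows (well-known statements, not results of this paper). For $n > 2k$, an intersecting family $\mathcal{F} \subseteq \binom{[n]}{k}$ satisfies the Erdős–Ko–Rado bound

\[
|\mathcal{F}| \le \binom{n-1}{k-1},
\]

with equality only for a star (all $k$-sets containing a fixed element), while a non-trivial intersecting family (one with no common element) satisfies the Hilton–Milner bound

\[
|\mathcal{F}| \le \binom{n-1}{k-1} - \binom{n-k-1}{k-1} + 1.
\]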

Self-Organized Supercriticality and Oscillations in Networks of Stochastic Spiking Neurons

Ariadne A. Costa, Ludmila Brochini and Osame Kinouchi

Networks of stochastic spiking neurons are interesting models in theoretical neuroscience, presenting both continuous and discontinuous phase transitions. Here we study fully connected networks analytically, numerically and by computational simulations. The neurons have dynamic gains that enable the network to converge to a stationary, slightly supercritical state (self-organized supercriticality, or SOSC) in the presence of the continuous transition. We show that SOSC, which presents power laws for neuronal avalanches plus some large events, is robust as a function of the main parameter of the neuronal gain dynamics. We discuss possible applications of the idea of SOSC to biological phenomena like epilepsy and dragon-king avalanches. We also find that neuronal gains can produce collective oscillations that coexist with neuronal avalanches, with frequencies compatible with characteristic brain rhythms.
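A minimal, schematic simulation in the spirit of the model described above is sketched below; the parameter values, the linear saturating firing function and the specific gain-update rule are illustrative assumptions, not the paper's exact choices.

import numpy as np

rng = np.random.default_rng(1)

N, T = 1000, 2000            # number of neurons, time steps
W = 1.0                      # uniform synaptic weight (fully connected)
mu = 0.0                     # leakage parameter
tau, u = 1000.0, 0.1         # gain recovery time scale and gain drop per spike

V = rng.random(N)            # membrane potentials
gain = np.full(N, 1.0)       # dynamic neuronal gains
activity = []

for t in range(T):
    # Probabilistic firing with a linear saturating rate function.
    p = np.clip(gain * V, 0.0, 1.0)
    X = (rng.random(N) < p).astype(float)
    # Membrane update: reset neurons that fired, integrate the mean-field input otherwise.
    V = np.where(X == 1.0, 0.0, mu * V + (W / N) * X.sum())
    # Gain dynamics: slow recovery plus a drop after each spike (illustrative rule).
    gain = gain + gain / tau - u * gain * X
    activity.append(X.mean())

print("mean fraction of firing neurons:", np.mean(activity[T // 4:]))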
