Carlos Hoppen, Roberto F. Parente and Cristiane M. Sato

We study the problem of packing arborescences in the random digraph D(n, p), where each possible arc is included uniformly at random with probability p = p(n). Let λ(D(n, p)) denote the largest integer λ ≥ 0 such that, for all 0 ≤ ℓ ≤ λ, we have ∑_{i=0}^{ℓ−1} (ℓ − i)·|{v : d_in(v) = i}| ≤ ℓ. We show that the maximum number of arc-disjoint arborescences in D(n, p) is λ(D(n, p)) a.a.s. We also give tight estimates for λ(D(n, p)) depending on the range of p.

Keywords: random graph, random digraph, edge-disjoint spanning tree, spanning tree, packing arborescence.
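To make the definition concrete, the quantity λ(D) above can be computed directly from the in-degree sequence of a digraph. The sketch below is illustrative only (the function name `arborescence_lambda` and the cap of ℓ at n are our choices, not from the paper):

```python
from collections import Counter

def arborescence_lambda(in_degrees):
    """Largest lam >= 0 such that, for every l with 0 <= l <= lam,
    sum_{i=0}^{l-1} (l - i) * |{v : d_in(v) = i}| <= l."""
    n = len(in_degrees)
    count = Counter(in_degrees)
    best = 0  # l = 0 holds vacuously (empty sum)
    for l in range(1, n + 1):  # lambda cannot exceed n here
        deficiency = sum((l - i) * count[i] for i in count if i < l)
        if deficiency > l:
            break
        best = l
    return best

# Complete digraph on 4 vertices: every in-degree is 3.
print(arborescence_lambda([3, 3, 3, 3]))  # -> 4
# A single arborescence: root has in-degree 0, the others 1.
print(arborescence_lambda([0, 1, 1, 1]))  # -> 1
```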

Osame Kinouchi, Ludmila Brochini, Ariadne A. Costa, João Guilherme Ferreira Campos and Mauro Copelli

In the last decade, several models with network adaptive mechanisms (link deletion-creation, dynamic synapses, dynamic gains) have been proposed as examples of self-organized criticality (SOC) to explain neuronal avalanches. However, all these systems present stochastic oscillations hovering around the critical region that are incompatible with standard SOC. Here we perform a linear stability analysis of the mean-field fixed points of two self-organized quasi-critical systems: a fully connected network of discrete-time stochastic spiking neurons with firing-rate adaptation produced by dynamic neuronal gains, and an excitable cellular automaton with depressing synapses. We find that the fixed point corresponds to a stable focus that loses stability at criticality. We argue that when this focus is close to becoming indifferent, demographic noise can elicit stochastic oscillations that frequently fall into the absorbing state. This mechanism interrupts the oscillations, producing both power-law avalanches and dragon king events, which appear as bands of synchronized firings in raster plots. Our approach differs from standard SOC models in that it predicts the coexistence of these different types of neuronal activity.
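The flavor of such a linear stability analysis can be illustrated numerically: at a fixed point of a two-dimensional mean-field map, complex-conjugate Jacobian eigenvalues with modulus below one correspond to a stable focus, and the focus becomes indifferent as the modulus approaches one. The Jacobian entries below are illustrative values, not taken from either model:

```python
import numpy as np

# Illustrative 2x2 Jacobian of a mean-field map at its fixed point.
J = np.array([[0.9, -0.5],
              [0.5,  0.6]])
eig = np.linalg.eigvals(J)

# A complex pair with |eig| < 1: perturbations spiral back in (stable focus).
# As |eig| -> 1, the focus becomes indifferent and demographic noise can
# sustain stochastic oscillations around it.
print(eig)          # complex-conjugate pair
print(np.abs(eig))  # both moduli equal sqrt(det J) ~ 0.889
```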

Ludmila Brochini, Ariadne de Andrade Costa, Miguel Abadi, Antônio C. Roque, Jorge Stolfi and Osame Kinouchi

Phase transitions and critical behavior are crucial issues both in theoretical and experimental neuroscience. We report analytic and computational results about phase transitions and self-organized criticality (SOC) in networks with general stochastic neurons. The stochastic neuron has a firing probability given by a smooth monotonic function Φ(*V*) of the membrane potential *V*, rather than a sharp firing threshold. We find that such networks can operate in several dynamic regimes (phases) depending on the average synaptic weight and the shape of the firing function Φ. In particular, we encounter both continuous and discontinuous phase transitions to absorbing states. At the critical boundary of the continuous transition, neuronal avalanches occur whose size and duration distributions follow power laws, as observed in biological neural networks. We also propose and test a new mechanism to produce SOC: the use of dynamic neuronal gains – a form of short-term plasticity probably located at the axon initial segment (AIS) – instead of depressing synapses at the dendrites (as previously studied in the literature). The new self-organization mechanism produces a slightly supercritical state, which we call self-organized supercriticality (SOSC), in accordance with some intuitions of Alan Turing.
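A minimal simulation illustrates the kind of network described above, assuming a linear saturating firing function Φ(V) = min(1, ΓV) and full connectivity with uniform weights; all parameter values here are illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def phi(V, gamma=1.0):
    # Linear saturating firing function: Phi(V) = min(1, max(0, gamma * V))
    return np.clip(gamma * V, 0.0, 1.0)

def simulate(n=1000, W=1.2, mu=0.0, steps=500, init_frac=0.1):
    """Discrete-time stochastic neurons, fully connected with uniform
    synaptic weight W/n; a neuron that fires has its potential reset to 0."""
    V = np.zeros(n)
    X = (rng.random(n) < init_frac).astype(float)  # initial firing pattern
    activity = []
    for _ in range(steps):
        V = (1.0 - X) * (mu * V + (W / n) * X.sum())  # reset fired neurons
        X = (rng.random(n) < phi(V)).astype(float)    # stochastic firing
        activity.append(X.mean())
    return np.array(activity)

rho = simulate()
print(rho[-10:])  # fraction of firing neurons in the last 10 steps
```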

*Vinícius Lima Cordeiro, Rodrigo Felipe de Oliveira Pena, Cesar Augusto Celis Ceballos, Renan Oliveira Shimoura and Antonio Carlos Roque*

Neurons respond to external stimuli by emitting sequences of action potentials (spike trains). In this way, one can say that the spike train is the neuronal response to an input stimulus. Action potentials are “all-or-none” phenomena, which means that a spike train can be represented by a sequence of zeros and ones. In the context of information theory, one can then ask: how much information about a given stimulus does the spike train convey? Or, rather: which aspects of the stimulus are encoded by the neuronal response? In this article, an introduction to information theory is presented, covering historical aspects, fundamental concepts of the theory, and applications to neuroscience. The connection to neuroscience is made through demonstrations and discussions of different information-theoretic methods. Examples are given through computer simulations of two neuron models, the Poisson neuron and the integrate-and-fire neuron, and a cellular automata network model. In the latter case, it is shown how one can use information-theoretic measures to retrieve the connectivity matrix of a network. All codes used in the simulations are publicly available on GitHub at github.com/ViniciusLima94/ticodigoneural.
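As a small worked example of the kind of measure discussed in the article, the snippet below estimates the mutual information I(S; R) between a binary stimulus S and the spike count R of a Poisson neuron, using plug-in entropy estimates; the firing rates and sample size are assumed values chosen for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def entropy(p):
    """Shannon entropy in bits of a probability vector."""
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

# Poisson neuron: the stimulus s in {0, 1} selects the firing rate
rates = {0: 2.0, 1: 8.0}              # mean spikes per window (assumed)
s = rng.integers(0, 2, 20000)         # equiprobable binary stimulus
counts = rng.poisson([rates[x] for x in s])

# Plug-in estimate of I(S; R) = H(R) - H(R | S)
vals = np.arange(counts.max() + 1)
p_r = np.array([(counts == v).mean() for v in vals])
h_cond = sum(
    (s == stim).mean()
    * entropy(np.array([(counts[s == stim] == v).mean() for v in vals]))
    for stim in (0, 1)
)
mi = entropy(p_r) - h_cond
print(f"I(S;R) ~ {mi:.2f} bits")  # bounded above by H(S) = 1 bit
```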

*Daniel Fraiman and Ricardo Fraiman*

The study of brain networks has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We test network differences between groups with an analysis of variance (ANOVA) test developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool to resting-state fMRI data obtained from the Human Connectome Project. Among other findings, we identify the amount of sleep in the days before the scan as a relevant variable that must be controlled for. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain-structure variables. Our method can also be applied to other kinds of networks, such as protein-interaction networks, gene networks or social networks.
