As the NeuroMat research team worked on the project renewal proposal, ten research teams were listed. Each of these teams has a specific research topic that will, in the coming years, set the direction of the RIDC agenda. Most of these teams were already active prior to 2018.

The research directions are elements within the general challenge the NeuroMat team faces: the development of new classes of probabilistic models to study different aspects of brain functioning. In the reports of activities that have been systematically published on the RIDC website (http://neuromat.numec.prp.usp.br/reports), this general challenge has been associated with: developing a new class of stochastic processes describing networks of spiking neurons; making steps towards a mathematical and statistical framework to formulate the phenomenon of brain plasticity; and developing stochastic models, statistical procedures and neurobiological experimental protocols to address the classical conjecture of the Statistician Brain. The ten research directions for the coming years, which can also be described as ongoing projects, are even more specific elements within NeuroMat’s general challenge and its associated questions.

The ten ongoing projects include:

- Hebbian time evolution for the interaction graph of a network of spiking neurons
- Statistical analysis of stochastic processes
- Simulation laboratory scientific project
- Phase transitions, criticality and oscillations in stochastic neuronal networks
- Structural learning and decision making
- Modeling the plasticity in the brain after a traumatic brachial plexus injury in adults
- Instrumentation issues to address brain plasticity: the state of the art
- Stochastic modeling of spatio-temporal patterns of epileptic seizures
- Random networks for the brain
- Random graphs and computational psychiatry

The NeuroMat dissemination team will provide constant updates on the development of each of these projects in the newsletter and on its social media communities.

**Summaries**

A short summary of each project is provided below, for reference. These summaries have been extracted from the NeuroMat renewal proposal to FAPESP.

*Hebbian time evolution for the interaction graph of a network of spiking neurons*

In the model of interacting neurons introduced in Galves-Locherbach (2013), the activity of each neuron is represented by a point process. Each neuron spikes at a rate which is an increasing function of its membrane potential. This membrane potential evolves in time and integrates the spikes of all presynaptic neurons since the last spiking time of the neuron. When a neuron spikes, its membrane potential is reset to a resting potential and simultaneously, a constant value – proportional to the synaptic weight – is added to the membrane potentials of its postsynaptic neurons. Moreover, each neuron is exposed to a leakage effect.
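
As a rough illustration of these dynamics, the sketch below simulates such a network in discrete time. The particular rate function phi, the multiplicative leakage factor, the choice of 0 as resting potential and the random synaptic weights are all illustrative assumptions, not specified in the text:

```python
import numpy as np

rng = np.random.default_rng(0)

N = 50                                  # number of neurons (illustrative)
W = rng.uniform(0.0, 0.5, size=(N, N))  # W[i, j]: weight added to j when i spikes (assumed values)
np.fill_diagonal(W, 0.0)                # no self-connections
leak = 0.95                             # multiplicative leakage per step (assumed)
V = np.zeros(N)                         # membrane potentials; resting potential taken as 0

def phi(v):
    """Spiking rate as an increasing function of the membrane potential.

    The small offset gives a spontaneous rate so activity can start;
    this specific form is an assumption, any increasing phi would do."""
    return 1.0 - np.exp(-(v + 0.1))

T = 200
spike_counts = np.zeros(N)
for _ in range(T):
    spikes = rng.random(N) < phi(V)     # each neuron spikes with probability phi(V_i)
    V = leak * V + spikes @ W           # integrate presynaptic spikes, apply leakage
    V[spikes] = 0.0                     # spiking neurons reset to the resting potential
    spike_counts += spikes
```

The reset-and-integrate step mirrors the description above: a spiking neuron goes back to its resting potential while each of its postsynaptic neurons receives a constant increment proportional to the synaptic weight.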

It is, however, of tremendous importance to add a stochastic time evolution to the synaptic weights themselves, and this is the goal of the present project. Indeed, it is commonly accepted that memory should be present in the specific configuration of synaptic weights after some initial learning period to which the system is exposed. In particular, this means that the synaptic weights are themselves subject to some evolution (a well-known fact, related to synaptic plasticity) and are not fixed. Plasticity also helps the system react to drastic changes of its state, provoked e.g. by a stroke.

More precisely, we wish to add a Hebbian learning rule to the evolution described in Galves and Locherbach (2013). Hebbian learning means that a spike of neuron i followed by a spike of neuron j within a short time window increases the synaptic weight of neuron i on j and, at the same time, decreases the weight of neuron j on i, possibly followed by some leakage effect. As a consequence, if i and j have fired a lot together, and if each time the spike of neuron j was preceded by a spike of neuron i, then the synaptic weight from i to j will be rather high.
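
A minimal sketch of such a pair-based rule is given below. The learning rate, the length of the time window and the clipping of weights at zero are illustrative assumptions, not taken from the project description:

```python
import numpy as np

def hebbian_update(W, last_spike, t, i, eta=0.01, window=5.0):
    """Apply a pair-based Hebbian rule when neuron i spikes at time t.

    For every neuron j whose last spike fell within the preceding `window`
    time units, the weight from j to i is potentiated and the reverse
    weight from i to j is depressed (clipped at zero). Parameter values
    are illustrative.
    """
    W = W.copy()
    for j in range(W.shape[0]):
        if j != i and 0.0 <= t - last_spike[j] <= window:
            W[j, i] += eta                        # j fired shortly before i: strengthen j -> i
            W[i, j] = max(0.0, W[i, j] - eta)     # weaken the reverse direction
    return W
```

For example, if neurons 0 and 2 spiked shortly before neuron 1, the weights 0 → 1 and 2 → 1 grow while 1 → 0 and 1 → 2 shrink, exactly the asymmetry described above.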

*Statistical analysis of stochastic processes*

The NeuroMat team has obtained theoretical results proposing and studying models for neural networks. The aim of this project is to perform statistical analysis of the stochastic processes involved in these models; such analysis is crucial in order to validate a model. One of the most relevant questions arising when studying a neural network is the estimation of the interaction graph, for which the main difficulty in practice is partial observation, since it is impossible to record the activity of all the neurons of the network. This research project consists of three different studies dealing with this issue of partial observation: a two-sample test for transition probabilities of Markov chains of infinite order; sparse space-time models (concentration inequalities, Lasso and their applications to networks of spiking neurons); and interaction graph estimation for the first olfactory relay of an insect.

The first study aims at building a two-sample test for Markov chains of infinite order. Since the sample is finite, a direct test on infinite-memory transition probabilities is not possible. We therefore assume continuity of these transition probabilities in order to approximate the law of the chain by a Markov chain of finite order k. This projection procedure is inspired by a NeuroMat study in which a two-sample test is performed for Poisson processes, by projecting the L2 space onto a subset S generated by a countable family of functions. This kind of question is relevant to the problem of partial observation we face in neuroscience, since it is impossible to record the activity of all neurons in a network. We consider the projection of the set of models for the whole network onto the set of models for the observed neurons as a possible way to deal with this problem. However, the complexity necessarily required of a model that aims to describe the activity of a neural network is a barrier for this kind of study, and we hope that the present work with the simpler model of Markov chains will be a first step in this direction.
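
The order-k projection step can be illustrated as follows: from each finite sample one estimates the empirical order-k transition probabilities, which can then be compared across the two samples. The function below only computes these empirical probabilities; it does not reproduce the actual test statistic of the study:

```python
from collections import defaultdict

def transition_probs(seq, k):
    """Empirical order-k transition probabilities of a symbolic sequence.

    Projecting an infinite-memory chain onto memory k means estimating,
    for each length-k context, the conditional law of the next symbol.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for t in range(k, len(seq)):
        counts[tuple(seq[t - k:t])][seq[t]] += 1      # context -> next-symbol counts
    return {ctx: {s: c / sum(nxt.values()) for s, c in nxt.items()}
            for ctx, nxt in counts.items()}
```

For a perfectly alternating sequence "abab…", the order-1 projection recovers the deterministic transitions a → b and b → a with probability one.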

The second study focuses on building a Lasso estimator of the interaction graph of partially observed networks of neurons, whose computational cost is much smaller than that of existing methods and which requires no Gibbsian assumption on the model.

The third study develops a new statistical procedure to identify the interaction graph in the class of stochastic neuronal networks introduced by NeuroMat. This procedure was specially designed to deal with the small sample sizes met in actual datasets. We evaluate the performance of the procedure on simulated data. This simulation study is also used for parameter tuning and for exploring the effect of recording only a small subset of the neurons of a network. We also obtain theoretical results in the case of partial observation.

*Simulation laboratory scientific project*

The NeuroMat Simulation Laboratory (SimLab) consists of a cluster with four compute nodes, each with eight Intel E5-2650 processors, 128 GB of RAM and one 2 TB HD, and one server node with two Intel E5-2650 processors, 128 GB of RAM, six 1 TB HDs and a Tesla K40 graphics accelerator with 12 GB of RAM. The cluster is physically located at the Department of Physics of the School of Philosophy, Sciences and Letters of Ribeirão Preto of the University of São Paulo.

A core goal is to simulate large-scale mathematical models. What is meant by a “large-scale model”? According to anatomical estimates, the average number of synapses received by a cortical neuron is ⟨k⟩ = 10⁴. This means that the number of neurons in a cortical model with this average number of synapses per neuron is N = ⟨k⟩/p, where p is the probability of a synaptic contact between two neurons. The value of p depends on the distance separating the neurons, and an estimate of its value for neurons within a cortical volume defined by a surface area of 1 mm² is p = 0.1. This implies, assuming that all synapses a neuron receives come from this small volume, that the number of neurons in a large-scale cortical network model would be 10⁵. Since this assumption is clearly wrong, because a typical neuron receives synaptic connections from neurons beyond this small volume (long-range connections), this estimate of N can be considered a lower bound for what could be considered a large-scale model. As in other NeuroMat theoretical research lines, the simulations to be done at the SimLab will primarily use the stochastic neuron model introduced by Galves and Locherbach in 2013.
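
The back-of-envelope arithmetic above is simply:

```python
# Size of a "large-scale" cortical model, with the values quoted in the text.
k_avg = 1e4    # average number of synapses received by a cortical neuron
p = 0.1        # probability of a synaptic contact within ~1 mm^2 of cortex
N = k_avg / p  # neurons needed so each can receive k_avg synapses locally
print(int(N))  # 100000
```

Since long-range connections are ignored, the resulting 10⁵ neurons is only a lower bound on what counts as large-scale.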

Research lines include: simulation of spontaneous activity in cortical models, simulation of the response of cortical models to external stimuli, and simulation of neural networks at the population level. The research line on simulation of spontaneous activity in cortical models aims, first of all, to compare the behavior of network models that use the stochastic model introduced in Galves-Locherbach with the behavior of network models that use deterministic spiking neuron models (LIF, Izhikevich and AdEx). Secondly, it aims to investigate the spontaneous activity patterns of stochastic and deterministic networks with different modular topologies. The main aim of the research line on simulation of the response of cortical models to external stimuli is to simulate the experimental protocol devised within NeuroMat, applied to the cortical network models just described. The work of Duarte et al. suggests, from EEG measurements in human subjects, that the brain is capable of identifying the temporal pattern (which can be mathematically compressed by a context tree) in a sequence of structured sensory stimuli. The research line on simulation of neural networks at the population level aims at investigating the possibility of accelerating the simulation of stochastic networks by means of a stochastic representation of each neuronal population, instead of a representation of the individual state of each neuron.

*Phase transitions, criticality and oscillations in stochastic neuronal networks*

The study of phase transitions is deeply important to the understanding of stochastic neuronal network models, and it has revealed very interesting properties of the new class of models proposed by Galves & Locherbach. We have been investigating how the model behaves at criticality, the emergence of oscillations that could be related to brain waves, and phenomena such as metastability and information transmission in such models.

Our team has adopted a two-pronged approach to the study of such issues: one side supported by rigorous mathematical work, the other by mean-field reasoning together with numerical simulations. In this section we present the main results obtained, the work in progress and future research directions regarding these issues. We intend to further capitalize on the work already performed in order to achieve a deeper understanding of the issues at hand and to improve the synergy between the work currently in progress and future studies by our researchers.

*Structural learning and decision making*

Since the classical conjecture of the “unbewusster Schluss” (unconscious inference), formulated in 1863 by von Helmholtz, the question of whether the brain assigns probabilistic models to samples of stimuli and makes statistical inferences to predict upcoming events has gained a central role in contemporary neuroscience.

Within NeuroMat, the effort to address this question resulted in a new class of stochastic processes, namely, the class of stochastic sequences of functional data segments driven by a context tree model. This, in turn, led to a new experimental protocol in which volunteers were exposed to sequences of auditory stimuli generated by context tree models while EEG signals were recorded. A new procedure for statistical model selection, applied to these recordings, produced results supporting the conjecture that the brain effectively identifies the structure of the chain generating the sequence of stimuli.

Inspired by this first set of results, we developed a new experimental approach consisting in a virtual scenario in which the volunteer, playing the role of a goalkeeper, has to stop the next penalty kick by guessing the direction chosen by the kicker. A mathematical theorem shows that the performance of the goalkeeper is maximized when he/she identifies the context tree governing the shooter's successive choices. In other terms, the perfect goalkeeper would not pay attention to his/her past performance, instead building a strategy taking into account only the kicker's past choices.
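
A toy sketch of such a goalkeeper strategy is shown below: it predicts the kicker's next direction from the longest context of past kicks seen so far, ignoring the goalkeeper's own performance entirely. The maximal context depth and the tie-breaking default are illustrative assumptions:

```python
from collections import defaultdict

def learn(history, max_depth=3):
    """Count, for each context of length up to max_depth, which direction followed it."""
    contexts = defaultdict(lambda: defaultdict(int))
    for t in range(1, len(history)):
        for k in range(1, min(t, max_depth) + 1):
            contexts[tuple(history[t - k:t])][history[t]] += 1
    return contexts

def best_guess(history, contexts, max_depth=3):
    """Guess the next direction from the kicker's past choices only.

    Tries the longest matching context first, mimicking a context-tree
    predictor; the fallback direction "L" is an arbitrary default.
    """
    for k in range(min(len(history), max_depth), 0, -1):
        ctx = tuple(history[-k:])
        if ctx in contexts:
            counts = contexts[ctx]
            return max(counts, key=counts.get)
    return "L"
```

Against a kicker who strictly alternates left and right, this predictor learns the depth-1 rule and guesses every kick correctly, which is exactly the sense in which only the kicker's past choices matter.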

Thus, a basic question at this point is how to describe the way the goalkeeper assigns a model to the data. Is this choice influenced by the complexity of the sequence of choices of the shooter? How should this complexity be defined? Is the entropy of the chain of directions chosen by the shooter a good measure of the complexity of the goalkeeper's task? These seem to be open questions, from the point of view of both neurobiology and statistics. In a different context, [7] argue that the complexity of the learning procedure should be measured by the rate at which the entropy of the chain of stimuli, truncated at a certain order, converges to the entropy of the full chain as the truncation level diverges. This suggestion is, however, not based on any rigorous mathematical reasoning, and from the point of view of mathematical neuroscience an empirical validation is required to support any guess in this direction.

*Modeling the plasticity in the brain after a traumatic brachial plexus injury in adults*

With the in-depth work developed over the last five years, NeuroMat is finally able to face the major challenge of proposing a mathematical model accounting for brain plasticity. This will be done at two different scales. At the neuronal level, the team led by Eva Loecherbach will introduce an important new feature in the model proposed by Galves and Loecherbach (2013). Namely, in parallel to the spiking activity already described by the model, a new dynamics describing the time evolution of the synaptic weights will be introduced. This dynamics will represent the synaptic plasticity phenomenon through which the brain learns, constitutes memories and adapts to new situations.

Besides the neuronal level, it is necessary to develop a model describing the plasticity phenomena at a mesoscopic scale. This level of description would correspond, for instance, to the changes in cortical representation observed with TMS or fMRI mapping. It has been suggested that a variational principle balancing “cost” and “efficiency” could explain brain restructuring after a lesion (see, for instance, Buch et al., 2012 and references therein). This neurobiological intuition suggests a novel mathematical approach to the question, namely, employing large deviation principles (see Freidlin and Wentzell, 1998) to describe the paths followed through time by the newly formed representations in the cortex after an injury and during rehabilitation. This is a most challenging issue, one that NeuroMat can face due to the strong mathematical basis of our team. These efforts will be a major contribution of NeuroMat at a world level.

To progress towards a new understanding of brain plasticity, new non-invasive methodological approaches need to be refined so as to address the fine-grained changes occurring longitudinally in cortical maps. Our team has recently made some progress in this direction employing resting-state fMRI [Fraiman et al., 2016]. New avenues in this domain shall be opened through the development of approaches that put together state-of-the-art technical means of addressing brain plasticity, computational modeling of TBPI outcomes and mathematical models of brain plasticity. This is precisely what we intend to do within NeuroMat in the next years.

*Instrumentation issues to address brain plasticity: the state of the art*

One of NeuroMat's main scientific objectives is to develop mathematical tools to assess and model plastic alterations in cortical circuits following peripheral nerve injury. In parallel with this theoretical research, new technological tools are necessary to map with greater ease and precision the cortical representations of the injured muscles, their superpositions and temporal changes. More accurate measures of cortical maps would allow a better description of their dynamic evolution and functional effects.

Our objective is to develop a closed-loop robotic arm that can deliver transcranial magnetic stimulation to specific areas of the brain with higher precision and reproducibility than is possible today. This is an area of intense research in several experimental neuroscience groups around the world and could have many spin-offs to other applications.

*Stochastic modeling of spatio-temporal patterns of epileptic seizures*

Despite 25 years of ongoing research efforts concerning the pathophysiology of temporal lobe epilepsy, in both animal models and clinical studies, the mechanisms responsible for seizure generation, propagation and termination are still unknown. The general hypothesis proposes that the anatomical reorganization taking place in the limbic system after brain insult/injury disrupts local control of neuronal excitability, including decreased recurrent inhibition and increased feedforward excitation, leading to recurrent seizures. In chronic epileptic animals, however, attempts to probe the imbalance of excitation and inhibition in susceptible circuitries using electrical stimulation have failed to demonstrate a collapse of inhibition before spontaneous seizures. Unpublished data from our lab suggest a link between spontaneous seizures and brain state transitions, especially those observed during the sleep-wake cycle.

Given the importance of coordinated spatiotemporal activity of neural ensembles for proper brain functioning, efficient methods capable of characterizing pathological oscillations and their underlying dynamics in the epileptic brain can be critically enabling theoretical platforms for advancing the field. Traditional mathematical methods designed to extract information from electrophysiological time series have consistently failed to unveil these processes. Of critical importance is understanding the formation of coherent patterns of neural activity across both space and time, yet existing computational neuroscience methods are typically restricted to analysis either in space or in time separately. Recent innovations in data-driven modeling, especially at the interface of machine learning and complex dynamical systems, have opened the possibility of jointly analyzing LFP and multi-unit activity in distant brain regions with a view to constructing descriptive/generative models of brain function in health and pathology.
As a proof of principle, this approach has recently allowed the characterization of brain-wide electrical spatiotemporal dynamic maps related to stress. Here, we propose to record and compare the effects of kainic acid and pilocarpine (drugs used to produce status epilepticus-induced epileptogenesis) on LFP synchronization within the limbic network, and to correlate the spatiotemporal patterns of epileptiform discharges with anatomopathological alterations and the future epileptic condition. The severity of the epileptic condition will be measured by the latency to the first seizure and by the frequency and duration of spontaneous seizures during the chronic phase of the model. The occurrence, frequency and temporal dynamics of interictal spikes and high-frequency oscillations (gold-standard markers of the epileptic condition) will be used to cross-validate our analytical approach.

*Random networks for the brain*

The brain can be viewed through the prism of a complex network: a set of neurons that communicate with each other through synaptic connections. In light of this, NeuroMat proposes two directions of research. The first direction is concerned with developing and applying mathematical tools to random networks in general, encompassing the study of properties of classical random graph models and of technical methods for determining their properties. The second direction aims to use these general methods to evaluate and validate models that have been proposed for the brain. This would be done, for instance, by determining whether such models capture some of the structural properties that have been identified by the neuroscience community.
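
As a small illustration of the first direction, the sketch below samples a classical Erdős–Rényi random graph and computes its global clustering coefficient, one structural property commonly compared against measured brain networks. The model, sizes and chosen property are illustrative, not taken from the project description:

```python
import numpy as np

rng = np.random.default_rng(1)

def erdos_renyi(n, p):
    """Sample an undirected Erdős–Rényi graph G(n, p) as a 0/1 adjacency matrix."""
    upper = rng.random((n, n)) < p
    A = np.triu(upper, k=1)        # keep the strict upper triangle
    return (A | A.T).astype(int)   # symmetrize; diagonal stays zero

def clustering_coefficient(A):
    """Global clustering: 3 * (number of triangles) / (number of connected triples)."""
    A = A.astype(float)
    triangles = np.trace(A @ A @ A) / 6.0            # each triangle counted 6 times
    deg = A.sum(axis=1)
    triples = (deg * (deg - 1) / 2.0).sum()          # paths of length 2
    return 3.0 * triangles / triples if triples > 0 else 0.0

A = erdos_renyi(500, 0.05)
c = clustering_coefficient(A)
# For G(n, p) the global clustering concentrates near p, whereas measured
# brain networks typically show much higher clustering for the same density.
```

Comparing such statistics between a candidate model and empirical brain networks is one concrete way of carrying out the validation described above.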

*Random graphs and computational psychiatry*

The objective is to use graph-theoretical tools to describe and quantify the structure of the flow of thoughts, as expressed in verbal accounts, to help diagnose persons with psychiatric disorders. This representation has been successfully applied to describe the typical speech structures of persons with schizophrenia and bipolar mood disorder. The goal for the next period is to apply this approach to map the cognitive decline that follows the first psychotic episode, still in adolescence, in order to establish the early symptoms that allow for an early intervention.
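
A minimal sketch of the speech-graph idea follows: nodes are words and directed edges link consecutive words in the account. The edge-density measure is just one illustrative statistic, not necessarily the one used by the team:

```python
def speech_graph(words):
    """Build a directed graph from a verbal account.

    Nodes are the distinct words; an edge (u, v) is present whenever
    word u is immediately followed by word v somewhere in the account.
    """
    nodes = set(words)
    edges = set(zip(words, words[1:]))
    return nodes, edges

def edge_density(nodes, edges):
    """Fraction of possible directed edges actually present; an assumed,
    illustrative measure of how (dis)organized the speech structure is."""
    n = len(nodes)
    return len(edges) / (n * (n - 1)) if n > 1 else 0.0
```

For the account "the dog chased the cat", the graph has four nodes and four edges, including the loop back to "the", which is the kind of recurrence structure these analyses quantify.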

The most recent advance shows that the degree of speech disorganization measured during the first psychiatric interview of a psychotic teenager can predict the schizophrenia diagnosis 6 months later. We have also been interested in the use of semantic tools for similar purposes.

Altogether, these methods show wide applicability far beyond psychology, reaching the various mental realms induced by sleep and dream states, mood and attention variations, medication, drug use, nutrition, and the onset of psychiatric and neurological diseases. The methods also have potential to reveal new perspectives on the mental correlates of talking, reading, writing and, most importantly, learning.

*This piece is part of NeuroMat's Newsletter #51. Read more here*