Over the last five years, the collaboration between Mathematics and Neuroscience has evolved from a tentative association of interest and exchange into a joint research agenda that could lead to fundamental progress in the understanding of the brain. This is the general view of mathematician Remco van der Hofstad, who has been at the forefront of this collaboration and co-organizes the workshop “Random Graphs in the Brain,” which the Research, Innovation and Dissemination Center for Neuromathematics (NeuroMat) will host at the end of November in São Paulo. The workshop's official website is: neuromat.numec.prp.usp.br/rgbrain. A professor of probability at Eindhoven University of Technology and scientific director of the European Institute for Statistics, Probability, Stochastic Operations Research and its Applications (EURANDOM), van der Hofstad discusses the challenges and perspectives of modeling the brain at the neuronal and functional levels. He was a lead organizer of the 2011 workshop “Random Graphs and the Brain,” in Eindhoven, and in the interview that follows he traces how the understanding of brain connectivity has evolved. “To some extent, we are moving from an attempt to establish a collaboration among neuroscientists and mathematicians to organizing joint research. In 2011, random graph theory and brain theory needed to be bridged; now we have moved further, and this is evident in the title change from 2011 to 2015. Things are getting more concrete.”
NeuroMat’s project rests upon the idea that advances in Neurobiology bring both the need and the challenge of devising new mathematical objects and theories, especially with respect to the theory of random graphs. In the last ten years, there has been a considerable volume of publications attempting to connect random graphs and the brain: a search on Google Scholar for these two expressions brings up over 4,000 published pieces in this period, just to give a figure. What is your take on this "neurobiological challenge" for Mathematics, especially for the theory of random graphs?
One of the main things I learned in my effort to bring together neuroscientists and mathematicians is that there is not one graph for the brain. One can think of the brain on many levels, depending on the level of hierarchy and also on the data one actually collects. I started to work on the brain at the neuronal level, and there you have a humongous graph that somehow starts when you are a baby and grows and grows and grows. Connections get strengthened, because of how the brain evolves over time, and initially I would think of this as the main object of study, if we consider random graphs in the brain. But I think that if you look at the 4,000 published pieces that somehow connect graphs and brains, a large proportion of them is not about the brain at the neuronal level, but about the brain at the functional level: much larger collections of neurons, centered around certain functionalities. I am personally interested in all these levels, and they have different messages to send. At the neuronal level, we have some understanding of how one neuron acts, but there is less direct understanding of how groups of neurons act. This is a very fascinating subject, and it leads to trying to understand how things are organized and to describe that. It is not enough to think just about a graph; there is always something happening on the graph, so one has to think of processes on graphs. There are people trying to do that with bootstrap percolation on graphs, which is somehow inspired by the brain, with Ising models, with the relation between different areas… We are far from a full understanding of how this should be done in the best possible way, and this is where mathematicians can help, both in setting up good models for connectivity and analyzing them, and in trying to understand the relation between the models we have and the data we produce. This is a very interesting topic, but it is also very hard, because you get a lot of data but it is hard to pinpoint where the data comes from. You are always measuring something that is a consequence of the thing you would like to know. It is rare to actually measure the thing you would like to understand directly.
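To make the idea of "a process on a graph" concrete, the following is a minimal sketch of bootstrap percolation on an Erdős–Rényi random graph; the parameters, the activation threshold and the use of networkx are illustrative assumptions, not anything taken from the interview. Starting from a small set of active vertices, a vertex becomes active once enough of its neighbours are active, and the process runs until nothing changes.

```python
# A minimal sketch (illustrative parameters, not drawn from the interview):
# bootstrap percolation on an Erdos-Renyi random graph.
import random
import networkx as nx

def bootstrap_percolation(n=1000, p=0.01, theta=2, seed_fraction=0.05, seed=0):
    random.seed(seed)
    G = nx.erdos_renyi_graph(n, p, seed=seed)
    active = set(random.sample(list(G.nodes()), int(seed_fraction * n)))
    changed = True
    while changed:
        changed = False
        for v in G.nodes():
            # A vertex activates once at least `theta` of its neighbours are active.
            if v not in active and sum(u in active for u in G.neighbors(v)) >= theta:
                active.add(v)
                changed = True
    return len(active)

print(bootstrap_percolation())  # size of the final active set
```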
What are the specific challenges of modeling neuronal activity?
The sheer complexity of neural activity makes it really hard to model. If you think at the neuronal level, there are something like one hundred billion neurons; it is very hard to get data at this level, and at this point we simply cannot really know how these neurons are connected to one another and what the precise relation among different neurons is. You can have information on how one neuron acts, but then the others are just a sort of black box that gives some input. It is very hard to make sense of how different neurons interact. It reminds me of the story of crickets or fireflies, which have the tendency to synchronize. Now, looking at the brain as a collection of single neurons is like looking at one cricket and coming to the conclusion that one neuron cannot synchronize. But then, what should it synchronize to? It only has itself. Only when you have a very large collection of crickets, or neurons, can you see emergent behaviors, so you have to look at this at a higher level. And this makes it all very hard, because it is very hard to get data at this level. There is another aspect that makes the brain at the same time fascinating and hard to understand: you cannot separate structure from function. The connectivity structure of the brain changes over time. The connections you use will be strengthened. The connections you do not use will be weakened. There is a strong relation between the evolution of the brain and how it is being used. This is very hard to describe. We usually try to separate the two, by thinking of the neuronal network as being fixed and of functionality as a process acting on it, but in reality we know that they are not independent.
This seems to be especially relevant when we consider what plasticity is. Sometimes, one gets the impression that plasticity is understood more as a metaphor for brain activity than as a real process that one can describe and interpret.
I remember having a discussion with Antonio Galves and Roberto Fernandez about brain plasticity. It is one of the things that make the brain work so well, and it pertains to the dynamics of the brain. Something happens, the brain reacts to it and finds shortcuts and detours in order to restore functionality, preferably completely but in some cases just partially. I have been thinking about how to model this, and the way I have been thinking of it is very simplistic. I assume that something like a stroke just knocks out part of the brain and then the brain is able to restore functionality. But this raises the question: what does it mean, in terms of the graph, for the brain to function? This is not obvious. We get stimuli because we touch things, because we hear things, because we see things; these are registered in the brain and we decide to act upon them. The brain is able to store relevant information, put away what is irrelevant (which is something that computers are not able to do) and act upon that. You need to model this in an appropriate way if you want to model plasticity. There, the same stimuli are still coming in, but the part that used to take care of them and act upon them is gone; yet the information somehow flows somewhere else and a new part of the brain acts upon it. If you do not have a good description of how the information from the stimuli is being translated into neural reactions, how are you going to model plasticity? At this point, mathematicians are unable to really model functionality and consequently cannot really make sense of plasticity. At a very high level, within the functional network, we can describe things by appropriate network models that describe connections and maybe correlations in the data, but it is not immediately obvious how this relates to functionality. This is something I believe we should understand better to go forward, and it is not obvious how we should accomplish this. On the one hand, from an abstract point of view, I just think of the brain as a huge collection of neurons whose connections I need to understand, and then try to find out what the brain should look like and also find properties of the model that one might observe in data. This is very indirect. On the other hand, a more direct method, which is less clear, is to look at the data and then model that. But then it is less clear what you are really trying to model. These questions are extremely interesting, and mathematicians can help in modeling and in interpreting models, but I strongly believe we need neuroscientists as well, in the context of a proper collaboration. At this very high level, mathematicians will not have the real neuroscience knowledge to interpret things in the right way. This is how a proper collaboration should be, and the aim should be to write a joint paper where you bring knowledge from the applied area into the model and then go back to conclusions that are important for the application. I find this to be a real challenge.
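The simplistic "knock out part of the network" picture described above can be sketched in a few lines. In this toy sketch, "functionality" is crudely stood in for by connectivity between two marked vertices, which is precisely the kind of shortcut the interview warns is not a real definition of brain function; all names and parameters are illustrative assumptions.

```python
# A toy sketch of the "stroke knocks out part of the network" picture.
# "Functionality" is crudely proxied by a path between two marked vertices;
# the interview stresses that defining functionality properly is the hard part.
import random
import networkx as nx

random.seed(1)
G = nx.erdos_renyi_graph(500, 0.02, seed=1)   # dense enough to be connected w.h.p.
stimulus, response = 0, 499
before = nx.shortest_path_length(G, stimulus, response)

# The "stroke": remove 10% of the vertices (sparing the two marked ones).
knocked_out = random.sample([v for v in G.nodes() if v not in (stimulus, response)], 50)
G.remove_nodes_from(knocked_out)

if nx.has_path(G, stimulus, response):
    print(f"detour found: path length went from {before} to "
          f"{nx.shortest_path_length(G, stimulus, response)}")
else:
    print("no detour: 'functionality', in this crude sense, is lost")
```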
In 2011, you co-organized the workshop “Random Graphs and the Brain” at the European Institute for Statistics, Probability, Stochastic Operations Research and its Applications (EURANDOM), in Eindhoven. Four years later you are a co-organizer of the workshop “Random Graphs in the Brain,” at NeuroMat, in São Paulo. There seems to be a conceptual shift between the titles of the two workshops. In 2011, the title associated random graphs and Neurobiology; in 2015, there seems to be the expectation that these two scientific areas are already in an interactive process. What do you think about this?
The first activities to formally connect mathematicians and neuroscientists were mostly meant to provide clearer pictures of what each side had in mind. We basically learned a lot. I learned a lot about how neuroscientists see the brain, and this was definitely not clear before 2011. For instance, I had no clear understanding of how to interpret the brain at the neuronal and functional levels. At that conference, and also at a later conference in Utrecht, it was very useful to hear neuroscientists who know a little bit more about network theory, such as Martijn van den Heuvel. They come from a neuroscientific background but have sufficient understanding and appreciation for the mathematical tools and techniques that come from graph theory. It takes time to build some knowledge in this respect. There has been a rapidly growing network connecting neuroscientists and mathematicians. This has been a slow but necessary process to start research collaborations, specifically on how to use random graphs and network models to answer questions about the brain. I am currently collaborating with a post-doc in Eindhoven, Sándor Kolumbán, who will be in São Paulo in a few weeks, and we have worked on this question: if you have different modules in the brain, which you model by random graphs, and you would like them to communicate with one another, what is the most effective communication? Should we connect these modules through their highly connected areas, or should we connect them randomly? This gets close to dealing with functionality, which we now try to do using an Ising model approach. This can be seen as the output of this whole process. So, to some extent, we are moving from an attempt to establish a collaboration among neuroscientists and mathematicians to organizing joint research. In 2011, random graph theory and brain theory needed to be bridged; now we have moved further, and this is evident in the title change from 2011 to 2015. Things are getting more concrete.
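The wiring question can be pictured with a toy comparison. The sketch below is not the model actually studied by van der Hofstad and Kolumbán, whose details are not given in the interview; it only illustrates the two strategies (hub-to-hub versus random inter-module edges), using average cross-module distance as an assumed stand-in for how effectively the modules communicate.

```python
# A hedged sketch of the two wiring strategies on toy modules; it is NOT the
# model studied by van der Hofstad and Kolumban, only an illustration.
import random
import networkx as nx

def two_modules(inter_edges=5, wiring="hubs", seed=0):
    random.seed(seed)
    # Two toy "modules": Barabasi-Albert graphs with 200 vertices each.
    A = nx.barabasi_albert_graph(200, 2, seed=seed)
    B = nx.relabel_nodes(nx.barabasi_albert_graph(200, 2, seed=seed + 1),
                         {v: v + 200 for v in range(200)})
    G = nx.union(A, B)
    if wiring == "hubs":
        hubs_A = sorted(A.degree, key=lambda d: -d[1])[:inter_edges]
        hubs_B = sorted(B.degree, key=lambda d: -d[1])[:inter_edges]
        links = [(a, b) for (a, _), (b, _) in zip(hubs_A, hubs_B)]
    else:  # "random" wiring between the modules
        links = [(random.randrange(200), 200 + random.randrange(200))
                 for _ in range(inter_edges)]
    G.add_edges_from(links)
    # Crude proxy for "effective communication": average distance across modules.
    pairs = [(random.randrange(200), 200 + random.randrange(200)) for _ in range(200)]
    return sum(nx.shortest_path_length(G, u, v) for u, v in pairs) / len(pairs)

print("hub wiring:   ", two_modules(wiring="hubs"))
print("random wiring:", two_modules(wiring="random"))
```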
Have you had an opportunity to work on the 2013 Galves-Löcherbach model?
I heard Antonio Galves talk about that a few times. He is using a critical Erdős–Rényi random graph as an input, and it was not entirely clear to me why it needed to be a critical Erdős–Rényi random graph; I would like to hear more about this. I hope we can start something from that, since with my background in random graph theory I might be able to help out. I am hoping that something like this could be the conclusion of the workshop. There is always a challenge when you start to work on this kind of topic: you need to have a good question. On the one hand, this question must be interesting from an applied point of view, but also from a theoretical point of view. On the other hand, the question must be sufficiently global. If you look at the brain at the neuronal level, you can say “This looks like a random graph,” you can try to model it this way, you can try to see the evolution of the brain and the dynamics of connectivity, and come up with a mathematical model, but this mathematical model is so complex that the best you can do is to simulate it. Simulations are good, but it is difficult to go from them to an understanding of the big picture. This is what a mathematical theory is very good at. And the more I learned, the more I got the impression that the questions I could bring to neuroscientists were mathematically so hard that it would be almost impossible for a mathematician to contribute. Or I also had the impression that we did not know how to model things. What does it mean for the brain to work? If you cannot describe that, it will be extremely difficult to describe what plasticity does. Either a model is impossible or it is almost intractable from a mathematician’s perspective. You need to have a problem that you can do something with, have some theory that describes the behavior and compare it to reality.
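For readers unfamiliar with the term, "critical" for an Erdős–Rényi graph G(n, p) refers to the regime p ≈ 1/n, where the largest connected component typically has size of order n^(2/3), in between the subcritical (logarithmic) and supercritical (linear in n) regimes. The following sketch, with illustrative parameters, simply exhibits the three regimes.

```python
# Illustrative comparison of the subcritical, critical and supercritical
# regimes of an Erdos-Renyi graph G(n, p); at p = 1/n the largest component
# is of order n^(2/3) rather than logarithmic or linear in n.
import networkx as nx

n = 10_000
for label, p in [("subcritical", 0.5 / n), ("critical", 1.0 / n), ("supercritical", 1.5 / n)]:
    G = nx.fast_gnp_random_graph(n, p, seed=42)
    largest = max(len(c) for c in nx.connected_components(G))
    print(f"{label:13s}  p = {p:.2e}  largest component = {largest}")
```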
In a 2013 presentation at EURANDOM, entitled "Random network models and routing on weighted networks," you raise on the final slides some key questions that might inform the connection between the theory of random graphs and brain science. You emphasize edge weights as a specific challenge to be taken into account when applying network models to brain data. Could you explain this research agenda?
In the brain, lots of weighted graphs are being produced simply by the following procedure: you take a patient and you put EEG electrodes on the head (and they have a very good idea of where to put them), and for every pair of electrodes you compute the correlation between the output signals. This gives a weight, which could be positive or negative. What I have seen happening a lot in neuroscience is that this is used as input and then things are done to transform this into a proper graph or to analyze this data. The interpretation of this is not at all obvious. If you do this, what you will get is a complete graph with edge weights, positive and negative, and what is often done then is thresholding. Every correlation whose absolute value is larger than 0.1 is kept, and all the other edges are thrown away. This gives you a summary of the dataset, and then properties of the dataset are interpreted. I have done a lot of work on graphs with weighted structures, and we generally assume that the weights are random and independent, which is probably not true in the brain. So, what should one do with datasets that have weighted structures? How should one interpret them in the right way? What a correlation precisely means should be interpreted from understanding which node corresponds to what and why you would see such a correlation. For example, one thing that is typically done in this setting is to threshold and throw away some edges, but then there is no distinction between positive and negative correlations. And there is an enormous difference between the two. This occurs a lot when graphs are created from such data. What should one do there? Can the mathematical analysis that has been done on large random graphs be of any help? Moreover, should negative and positive correlations be taken into account differently in thresholding?
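The thresholding procedure described above can be sketched as follows. The signals are simulated (no real EEG data is involved), the 0.1 cutoff is the one quoted in the answer, and keeping the signed weight on each retained edge is one simple way to preserve the distinction between positive and negative correlations.

```python
# A minimal sketch of correlation thresholding on simulated signals
# (stand-ins for EEG channels; the 0.1 cutoff follows the quote above).
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n_channels, n_samples = 16, 1000
signals = rng.standard_normal((n_channels, n_samples))  # stand-in for EEG output

corr = np.corrcoef(signals)          # complete weighted graph of pairwise correlations
G = nx.Graph()
G.add_nodes_from(range(n_channels))
for i in range(n_channels):
    for j in range(i + 1, n_channels):
        if abs(corr[i, j]) > 0.1:
            # Keep the signed weight, so positive and negative correlations
            # remain distinguishable after thresholding.
            G.add_edge(i, j, weight=corr[i, j])

print(G.number_of_edges(), "edges kept out of", n_channels * (n_channels - 1) // 2)
```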
There have been attempts to connect several graph models to make sense of connectivity among brain areas, for instance rich-club organization and rich-get-richer models. At this point, none of these models appears to have been useful for making sense of what one sees at the microscopic level of neural activity (a conjecture would be that at this level we might see a model similar to an Erdős–Rényi model at a critical point). The models that have been applied to studies of connectivity among brain areas are different. Why is there no consensus at this level? How do we reconcile the hypothesis that the microscopic and macroscopic levels of neural activity might be based on different models with a general theory of the brain?
This is one of the things I will be talking about at our tutorial in São Paulo. We are investigating what models are currently being used, in mathematics and in neuroscience. Sometimes the Erdős–Rényi graph is being used, but my feeling is that this is too egalitarian, since in this graph all vertices play the same role, and in the brain this should not be the case. If I think about how the brain evolves over time, there must be some long-range connections between neurons. You start with a very small clump that has certain connections; the connectivity remains, but the clump keeps growing, perhaps uniformly or not. If the early connections remain, then it means that there is some sort of hierarchy in space. This is completely absent from the Erdős–Rényi graph. I wonder whether a better model should include geometry, where you say that vertices appear at different times and, depending on the time at which they come in, they are able to attach to vertices within a certain distance. How could this be incorporated in the model? This could be a slightly better way to interpret the brain at the neuronal level than ignoring it. This is the sort of thing we should discuss at the meeting in São Paulo. What is the right way of viewing things? The only way to advance this is to connect the best model to the knowledge in neuroscience, so that you build a model that informs our understanding of reality. You need expert information from the application domain to have the right way of viewing things, and this should be captured in the mathematical model you are trying to build. There is a big challenge there, and it goes both ways: neuroscientists have to think of the evolution of the brain in these terms, and mathematicians have to understand what the neuroscientists are telling us about brain development and translate that into a proper mathematical model. NeuroMat is very interested in this, and this is why I am very interested in joining this research agenda. NeuroMat is a truly unique institute. It is one of the only places I know where Mathematics and Neuroscience are so close together, with collaboration between them as the explicit aim. What is more common is the scenario in which neuroscientists are very happy to collaborate with mathematicians, but it is very rare to have this collaboration as the goal of an institute. And I am convinced that this is a necessity to move forward.
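One toy way to picture the geometric, time-dependent attachment he wonders about is sketched below; the attachment rule, the shrinking radius and all parameters are illustrative assumptions, not a model from the interview or the literature. Vertices arrive one by one at random positions, and each new vertex connects to the earlier vertices lying within a radius that shrinks with time, so the earliest vertices tend to carry the long-range connections, a rough hierarchy in space.

```python
# A toy sketch of geometric growth with time-dependent reach (illustrative
# assumptions only): a vertex arriving at time t connects to all earlier
# vertices within a radius that shrinks with t.
import math
import random
import networkx as nx

random.seed(0)
n = 500
G = nx.Graph()
positions = {}

for t in range(n):
    positions[t] = (random.random(), random.random())
    G.add_node(t, pos=positions[t], birth=t)
    radius = 0.5 / math.sqrt(t + 1)   # later arrivals reach less far
    for s in range(t):
        if math.dist(positions[s], positions[t]) <= radius:
            G.add_edge(s, t)

degrees = sorted((d for _, d in G.degree()), reverse=True)
print("five largest degrees:", degrees[:5])   # early vertices tend to dominate
```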
This piece is part of NeuroMat's Newsletter #21.