
New research unlocks secrets of the brain – with the help of mathematics

CANADA


In the 1960s science fiction film Fantastic Voyage, a team of American doctors boards a submarine, which is then miniaturized to the size of a microbe and injected into the body of Dr. Jan Benes. Their mission is to destroy a blood clot in his brain within an hour, because after that they and the submarine will return to their normal size, killing Benes.

As they travel through his brain, they pass billions of neurons, rendered as mesh-like tendrils and cobweb-like structures – far less orderly than the 21 letters and mathematical terms that make up the equation developed by Richard Naud, a professor at the University of Ottawa’s Brain and Mind Research Institute, to describe “dendritic excitability”: essentially, the contribution of branch-like appendages to the firing of neurons that communicate information across synapses (the gaps between neurons).

Just as theoretical physics – the discipline portrayed in the recent film Oppenheimer – describes the subatomic world through equations, Naud’s research into the most fundamental building blocks of the brain is anything but static.

Einstein’s famous equation E=mc² represents the explosive force (E, or energy) released by atomic bombs: E is equal to the mass (m) of, for example, uranium or plutonium, multiplied by the square of the speed of light (the constant c, 300,000 km per second) – and 300,000 squared is nine with 10 zeros after it.

Naud’s equation deals with charges measured in millivolts (roughly a thousand times smaller than the 120 volts of a North American wall socket) – charges that carry all the information our body needs to stay alive (for example, to regulate heart rate) and that also shape our thoughts.

Both the title of Naud’s article, “Dendritic excitability controls overdispersion,” and the journal that published it, Nature Computational Science, may be a bit daunting, but, says Naud, the main equation and those that follow describe a “dynamic system that is infinitely more efficient than the computer on our desk.”

“Each neuron has dozens of dendrites – think of them as wires sticking out into a soup of chemicals, conducting electrical charges generated by neighboring neurons. These charges, or signals if you like, travel to the body of the neuron, which then generates another signal that travels through the neuron to the axon. The axon generates a new charge, what we call a ‘spike’, that is picked up by the dendrite of another neuron.

“Our equation allows us to see in mathematical terms how these dendrites process the information they receive from other neurons, distilling the 10,000 equations normally required to represent a neuron. It was a mathematical achievement that my neuroscientist colleagues did not think possible,” says Naud.

His work has implications for AI, for understanding learning, and for treating diseases such as Parkinson’s, Naud told University World News.

Communication through variability

One of the key findings from their equation was that dendrites do not fire according to a run-of-the-mill cause-and-effect model, in which more input leads to activity that directly corresponds to the input.

Part of the reason for this, Naud explained, is that neurons function something like transistors that are “coupled in a funky way” that allows them to communicate two messages at once. It’s like an innuendo, a sentence whose meaning depends on who interprets it.

“What surprised us was what the equation showed: when we increased the input, the spiking (that is, the firing across the synapse) did not increase in a regular pattern – the output was completely random.”

This variability can be compared to an internal code that carries information. In other words, Naud told University World News, his equation showed that neurons in the brain do not simply relay the sensory data arriving from the billions of nerves in the body in a regular, Morse-like internal code. Instead, some information is communicated through the variability of the spikes.

Accordingly, Naud’s equation explains how the difference between my “seeing” a piece of paper on my desk and seeing the glass top of my desk is encoded in the variability of the spikes.

Learning signals

What do neurons use this special ability for? The answer to this question may lie in the theoretical findings shown in Naud’s article: “Burst-dependent synaptic plasticity can coordinate learning in hierarchical circuits,” published three years ago in Nature Neuroscience. In this paper, Naud’s team explored how the ability to communicate and process two signals simultaneously can solve a dilemma about how learning takes place in the brain.

“A synapse doesn’t know if you did something right or wrong,” he says. “All it knows is that there is an action taking place, a possible discharge (firing) on the other side of the synapse. What we are suggesting is that the variability in the information flow carries – or is – some kind of ‘learning signal’ that talks to the synapse, and suddenly the synapse knows what to do.”

This has implications for whether AI can be made more energy efficient by designing an algorithm that mimics the structure of neurons and dendrites.

Currently, programs that identify plants from a mobile phone encode the image of a plant into pixels and upload it to a remote computer. The process of learning to recognize the image requires a back-and-forth exchange between the processing unit and the memory unit needed to remember what to process. Every time the computer exchanges information between the two units, a lot of energy is wasted.

“In the paper from three years ago, we theorized that if we could create an algorithm that mimicked the way the brain works, training or teaching AI systems would be much faster and, more importantly, more energy efficient, because the CPU wouldn’t have to go back and forth,” says Naud.

Naud’s team theorized then, and have now shown, that if a virtual neural network could, like real neurons, communicate two types of signals – perhaps one carried by the variability and one by the firing rate – an energy-efficient learning algorithm could be implemented in computer chips.

Disruptive variability

The implications of Naud’s work for understanding learning are profound, because memory, the sine qua non of learning, is thought to be stored in the synapses and the chemical soup surrounding them, and accessed through dendritic activity.

Naud’s equation could explain the colloquialism that brains are “wired” differently. That is, the equation makes it plausible that the structures in one person’s brain could make them more open to learning things like the times tables.

The deep structures in someone else’s brain may not be as organized. For this person, learning the times tables requires many repetitions that amount to a conscious and purposeful rewiring of the brain.

“There is no research yet showing that if you disrupt the variability, you also disrupt learning. I work with many experimental colleagues who are trying to figure out how best to test that theory. But all my work has shown that this is a possibility,” he said.

Towards a healthier state

At the end of our interview, it was Naud’s turn to reference science fiction and scientific conspiracy theories as he emphasized that the experiments he was involved in to develop electrical stimulation patterns for clinical applications were not nefarious.

These treatments involve implanting an electrode in the brain to treat brain disorders such as Parkinson’s or depression. The electrode stimulates the activity or “wiring of the circuits” in the patient’s brain.

“When you say ‘rewire the brain,’ people get scared,” Naud says. “They think I’m going to enter your brain and erase memories,” which is part of the premise of the film Total Recall (1990), starring Arnold Schwarzenegger.

“But even if we don’t currently know how much rewiring is taking place, we know that it cannot be specific to any one connection. Instead, it is thought to act on a heterogeneous clump of neurons, pushing them toward a healthier state.”

The findings presented in the Nature Computational Science article suggest new ways in which brain stimulation could be applied in clinical settings. “If our theory is put into practice, I think it will be feasible in the coming years to know which stimulation patterns to send through the electrode in a patient’s brain to treat the patient’s condition most effectively.”

“If we understand how variability in the neural system should work, we may be able to correct for errors or failures in the neural system caused by morphological or chemical problems. In other words, we could rewire the brain by stimulating neurons to act as a learning signal, giving the network the functionality it once had,” Naud said.