New brain learning mechanism calls for revision of long-held neuroscience hypothesis


Summary: Experimental observations suggest that learning is mainly carried out in neuronal dendritic trees, rather than solely through changes in synaptic strength, as previously believed.

Source: Bar-Ilan University

The brain is a complex network containing billions of neurons. Each of these neurons communicates simultaneously with thousands of others via its synapses (links) and collects incoming signals through several extremely long, branching “arms” called dendritic trees.

For the past 70 years, a central hypothesis in neuroscience has been that brain learning occurs by altering the strength of synapses, following the relative firing activity of their connecting neurons.
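As a rough illustration only (not taken from the paper), this classical picture is often caricatured as a Hebbian update, in which a synapse strengthens when its pre- and postsynaptic neurons are active together. The rate-based formulation, the threshold and the variable names below are all illustrative assumptions:

```python
import numpy as np

# Minimal rate-based Hebbian sketch (illustrative assumption, not the paper's model).
# A synaptic weight grows when presynaptic and postsynaptic activity coincide.
rng = np.random.default_rng(0)
n_inputs = 8
w = rng.normal(0.0, 0.1, n_inputs)      # synaptic strengths
eta = 0.01                              # learning rate

for _ in range(100):
    pre = rng.random(n_inputs)          # presynaptic firing rates in [0, 1]
    post = 1.0 if w @ pre > 0 else 0.0  # crude binary postsynaptic response
    w += eta * pre * post               # Hebbian rule: "fire together, wire together"

print(w)
```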

This assumption has served as the basis for machine and deep learning algorithms that increasingly affect almost every aspect of our lives. But after seven decades, this long-held assumption is now being challenged.

In an article published today in Scientific Reports, researchers from Bar-Ilan University in Israel reveal that the brain learns in a completely different way than has been assumed since the 20th century.

The new experimental observations suggest that learning is mainly carried out in neuronal dendritic trees, where the trunk and branches of the tree change their strength, rather than solely at the synapses (the dendritic leaves), as previously thought.

These observations also indicate that the neuron is in fact a far more complex, dynamic and computational element than a binary unit that either fires or does not.

A single neuron can perform deep learning algorithms, which previously required a complex artificial network of thousands of connected neurons and synapses.
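A minimal sketch of what such a single-neuron model might look like, assuming inputs are grouped onto a few branches whose strengths adapt as whole units, with a trunk combining the branch outputs. The grouping, the tanh nonlinearity and every name here are illustrative assumptions, not the authors' published architecture:

```python
import numpy as np

def dendritic_neuron(x, branch_w, trunk_w):
    """Toy neuron whose learnable strengths sit on branches and trunk.

    x:        (n_branches, inputs_per_branch) input signals
    branch_w: (n_branches, inputs_per_branch) per-branch strengths
    trunk_w:  (n_branches,) trunk strengths combining the branches
    """
    branch_out = np.tanh((x * branch_w).sum(axis=1))  # nonlinear branch amplification
    return np.tanh(trunk_w @ branch_out)              # trunk combines branch outputs

rng = np.random.default_rng(1)
x = rng.random((4, 5))                 # 20 inputs split over 4 branches
branch_w = rng.normal(0, 0.5, (4, 5))
trunk_w = rng.normal(0, 0.5, 4)
print(dendritic_neuron(x, branch_w, trunk_w))
```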

“We have shown that efficient learning on single-neuron dendritic trees can artificially achieve near-unity success rates for handwritten digit recognition. This discovery paves the way for a new type of efficient, biologically inspired AI hardware and algorithms,” said Professor Ido Kanter, of Bar-Ilan University's Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, who led the research.

A paradigm shift in brain research: the new neuron and the new type of learning. Credit: Prof. Ido Kanter, Bar-Ilan University

“This simplified learning mechanism represents a step towards a plausible biological realization of backpropagation algorithms, currently the central technique in AI,” added Shiri Hodassman, a Ph.D. student and one of the main contributors to this work.
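Continuing the toy model above, a backpropagation-style gradient step on the branch and trunk strengths could look as follows; the squared-error loss and the learning rate are assumed for illustration, and this is not the paper's algorithm:

```python
import numpy as np

def train_step(x, target, branch_w, trunk_w, eta=0.1):
    # Forward pass (same toy model as above).
    s = (x * branch_w).sum(axis=1)
    b = np.tanh(s)                     # branch outputs
    y = np.tanh(trunk_w @ b)           # neuron output

    # Backward pass: chain rule through trunk and branches.
    dL_dy = y - target                 # gradient of 0.5 * (y - target)^2
    dL_dt = dL_dy * (1 - y**2)
    grad_trunk = dL_dt * b
    dL_ds = dL_dt * trunk_w * (1 - b**2)
    grad_branch = dL_ds[:, None] * x

    return branch_w - eta * grad_branch, trunk_w - eta * grad_trunk

rng = np.random.default_rng(2)
x = rng.random((4, 5))
branch_w = rng.normal(0, 0.5, (4, 5))
trunk_w = rng.normal(0, 0.5, 4)
for _ in range(200):
    branch_w, trunk_w = train_step(x, 0.8, branch_w, trunk_w)
print(np.tanh(trunk_w @ np.tanh((x * branch_w).sum(axis=1))))  # approaches 0.8
```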

Efficient learning on dendritic trees builds on the experimental evidence of Kanter and his research team for sub-dendritic adaptation in neuronal cultures, together with other anisotropic properties of neurons, such as differing spike waveforms, refractory periods and maximal transmission rates.

The brain's clock is a billion times slower than that of existing parallel GPUs, yet it achieves comparable success rates in many perceptual tasks.
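As a back-of-the-envelope check of that ratio (round numbers assumed here, not taken from the paper): a cortical neuron firing at roughly 1 spike per second against a GPU clock of roughly 1 GHz gives a factor of about a billion:

```python
gpu_clock_hz = 1e9      # ~1 GHz GPU clock (assumed round number)
neuron_rate_hz = 1.0    # ~1 spike/s, a typical low cortical firing rate (assumed)
print(f"speed ratio ~ {gpu_clock_hz / neuron_rate_hz:.0e}")  # ~1e+09, i.e. a billion
```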

The new demonstration of efficient learning on dendritic trees calls for new approaches in brain research, as well as for the development of analogous hardware aimed at implementing advanced AI algorithms. If slow brain dynamics can be implemented on ultrafast computers, the sky is the limit.

About this neuroscience and learning research news

Author: Press Office
Source: Bar-Ilan University
Contact: Press Office – Bar-Ilan University
Image: The image is credited to Bar-Ilan University

Original Research: Open access.
“Efficient dendritic learning as an alternative to synaptic plasticity hypothesis” by Shiri Hodassman et al. Scientific Reports


Abstract

Efficient dendritic learning as an alternative to synaptic plasticity hypothesis

Synaptic plasticity is a long-standing core hypothesis of brain learning that suggests local adaptation between two connecting neurons and forms the foundation of machine learning.

The main complication for synaptic plasticity is that synapses and dendrites connect neurons in series, so existing experiments cannot identify where the significant adaptation is imprinted.

We show efficient backpropagation and Hebbian learning on dendritic trees, inspired by experimental evidence for sub-dendritic adaptation and its nonlinear amplification.

These were found to achieve success rates approaching unity for handwritten digit recognition, indicating that deep learning can be realized even by a single dendrite or neuron.

Moreover, dendritic amplification generates a practically exponential number of input crossovers (higher-order interactions) as the number of inputs grows, which improves success rates.
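To see why the number of crossover terms explodes, note that N inputs admit 2^N - N - 1 multiplicative combinations of two or more inputs; the counting below is a generic combinatorial illustration, not the paper's construction:

```python
from itertools import combinations

n_inputs = 10
# Every subset of two or more inputs corresponds to one possible crossover term.
crossovers = sum(1 for k in range(2, n_inputs + 1)
                 for _ in combinations(range(n_inputs), k))
print(crossovers, 2**n_inputs - n_inputs - 1)  # both print 1013
```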

However, directly implementing such a large number of cross-weights and manipulating each of them exhaustively and independently exceeds existing and planned computing power.

Therefore, a new type of nonlinear adaptive dendritic hardware should be constructed to mimic dendritic learning and to estimate the brain's computational capability.
