A team of researchers has discovered a new mechanism for the transmission of electrical signals between nerve cells in the human brain. This mechanism enables a single neuron to perform more complex computations than we previously thought.
The evolution of the mammalian brain, and in particular the human brain, has led to the emergence of an exceptionally complex structure: the cerebral cortex. The cortex is composed of billions of neurons and is linked, among other things, to the advanced capabilities of human cognition: thinking, speech, complex information processing, and decision-making [1,2]. Over time, the human cortex evolved into a structure roughly 3 mm thick, consisting of six layers of different thicknesses, with layers 2 and 3 thicker and denser than the rest.
Recently, a team of researchers examined the electrical properties of layers 2 and 3 in samples of human cortex and, for the first time, discovered an information-transfer mechanism that allows the cortex to carry out more complex computation than previously assumed, as reported in the journal Science [3].
The basic computational unit of the brain is the neuron. These cells consist of many processes that receive signals from neighboring neurons (dendrites), a cell body, and a long, thin process that sends electrical signals (the axon). Neurons communicate through connections called synapses—contact sites between one neuron and another that enable them to transmit signals and thus convey and process information within the nervous system.
What governs information transfer between neurons? Simply put, if a neuron receives sufficiently strong and well-timed electrical inputs from its neighbors, an action potential is generated in its dendrites: when the combined amplitude of the inputs arriving at the dendrites exceeds a certain voltage threshold, a dendritic action potential is produced. This potential attenuates as it travels toward the cell body, but if its amplitude on arrival is still above a second threshold, a stronger impulse, the axonal action potential, propagates along the axon, enabling the neuron to relay the signal to other cells. Traditionally, both dendritic and axonal impulses are viewed as "binary" signals: if the inputs cross the threshold, the impulse is generated; if they do not, it is not [4]. These principles are considered fundamental in neuroscience and form the basis of neural information transfer, or the "computational" capacity of neurons. Nonetheless, almost all of the evidence for them comes from rodent studies, so we do not know for certain how information is transferred in the human brain.
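To make this classic two-threshold rule concrete, here is a minimal toy sketch in Python. It is purely illustrative: the threshold values, the attenuation factor, and the function names are assumptions chosen for the example, not parameters measured in any study.

```python
# Toy model of the classic "binary" two-stage threshold rule described above.
# All values and names are illustrative assumptions, not measurements from the study.

DENDRITIC_THRESHOLD = 1.0   # summed input (arbitrary units) needed for a dendritic spike
SOMATIC_THRESHOLD = 0.8     # amplitude needed at the cell body for an axonal spike
ATTENUATION = 0.7           # fraction of the dendritic signal that survives the trip to the soma

def classic_neuron(inputs):
    """Return True if the neuron fires an axonal action potential."""
    summed = sum(inputs)                 # combined synaptic input on the dendrites
    if summed < DENDRITIC_THRESHOLD:     # inputs too weak: no dendritic spike at all
        return False
    at_soma = summed * ATTENUATION       # the dendritic potential dampens on its way to the soma
    return at_soma >= SOMATIC_THRESHOLD  # still strong enough at the soma -> axonal spike

print(classic_neuron([0.3, 0.4]))   # weak inputs -> False
print(classic_neuron([0.6, 0.5]))   # crosses the dendritic threshold but fades en route -> False
print(classic_neuron([0.9, 0.8]))   # strong, well-summed inputs -> True
```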
To investigate what occurs in the human cortex, the researchers used brain slices donated by patients with epilepsy and cancer. Using advanced optical and electrophysiological techniques, they characterized the signal-transmission properties of neurons from cortical layer 2/3. To their surprise, they discovered a new mechanism for generating dendritic impulses, one more sophisticated and complex than anything previously documented in rodents or humans.
When the researchers applied electrical stimulation to the neurons' dendrites, they observed classic dendritic action potentials: weak stimuli did not elicit a potential, while any stimulus that exceeded the threshold triggered one. Yet in some cells they found a new type of dendritic impulse characterized by an upper threshold as well: a stimulus that was too strong failed to generate an action potential, just as one that was too weak did. Unlike previously known impulses, this newly discovered signal is not binary (above or below a single threshold) but rather "tuned" to a specific voltage range. In other words, the mechanism allows a neuron to pass a signal on only if it receives dendritic input of a defined strength: stimuli that are too weak or too strong produce no impulse; only a stimulus within a certain range does.
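The difference between the classic rule and the newly described one can be pictured with a small contrast in code. Again, this is only a toy sketch under assumed values: the lower and upper boundaries below are arbitrary, and reducing the behavior to a simple range check is a deliberate simplification.

```python
# Toy contrast between the classic threshold rule and the newly described "tuned" rule.
# The window boundaries are arbitrary illustrative values, not measurements from the paper.

LOWER = 1.0   # below this, neither kind of dendrite responds
UPPER = 2.5   # above this, the "tuned" dendritic impulse is no longer generated

def classic_dendrite(stimulus):
    """Classic binary rule: any stimulus at or above the threshold triggers an impulse."""
    return stimulus >= LOWER

def tuned_dendrite(stimulus):
    """Newly described rule: only stimuli inside a preferred window trigger an impulse."""
    return LOWER <= stimulus <= UPPER

for s in (0.5, 1.5, 3.0):
    print(f"stimulus={s}: classic={classic_dendrite(s)}, tuned={tuned_dendrite(s)}")
# 0.5 -> neither fires; 1.5 -> both fire; 3.0 -> only the classic dendrite fires
```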
To understand the computational significance of this impulse-generation mechanism, the researchers built a computer model of an artificial neural network that obeys the newly discovered rule. The mechanism enabled a single neuron to sum its inputs and "decide" whether to fire in a way previously thought impossible, effectively regulating its own activity. Functions of this kind had previously been assumed to require whole networks of neurons, not a single cell.
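One concrete example of a function of this kind, offered here purely as an illustration rather than a detail taken from the article, is the exclusive-or (XOR) of two inputs: respond when exactly one input is active, stay silent when both or neither are. A unit with a single simple threshold cannot produce that pattern on its own, but a unit whose response is confined to a window of input strengths can, as the toy sketch below shows (all values are assumptions).

```python
# Illustrative sketch: a single unit with a window-tuned dendritic response can compute XOR,
# something a single simple-threshold unit cannot do. All values are toy assumptions.

def threshold_unit(a, b, threshold=1.5):
    """Classic unit: fires whenever the summed input reaches a single threshold."""
    return int(a + b >= threshold)

def tuned_unit(a, b, lower=0.5, upper=1.5):
    """Window-tuned unit: fires only when the summed input falls inside a preferred range."""
    return int(lower <= a + b < upper)

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "-> tuned:", tuned_unit(a, b), "  threshold:", threshold_unit(a, b))
# The tuned unit outputs 0, 1, 1, 0 (exclusive-or); no single threshold can reproduce that.
```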
Further experiments are, of course, required to determine whether this mechanism characterizes intact, healthy brains and whether it exists in additional brain regions. If confirmed, it suggests that we have so far underestimated the computational power of human neurons and that neurons can carry out more complex computation than previously believed. A major question also remains: is this mechanism unique to humans? We will have to wait patiently to find out.
References: