
Deep-learning algorithms get a boost from synapse-based hardware that mimics functions in the brain.

Credit: KTSDesign/Science Photo Library via Getty Images

The human brain is the world’s most efficient computer, performing profoundly complex calculations while drawing less power, about 20 watts, than a dim light bulb. For decades, engineers have been trying to build brain-like computer chips that process information far more organically and efficiently. Now such a “neuromorphic computer” may be only five to 10 years away, thanks to a new type of artificial synapse.

Researchers at the National Institute of Standards and Technology (NIST) have created a tiny switch that mimics the function of synapses in the brain. The cylindrical artificial synapse measures only 10 micrometers in diameter, far thinner than a human hair, and works just like a real synapse by regulating incoming and outgoing electrical pulses, according to a paper published in Science Advances. These artificial synapses could enable next-generation artificial intelligence that runs on a chip rather than a giant supercomputer.

In the brain, neurons “talk” to one another by sending electrochemical impulses across tiny gates or switches called synapses. When a synapse receives a strong enough incoming signal from one neuron, it triggers an electrochemical reaction that produces an outgoing spike in a second neuron.
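One way to picture that gate-like behavior is as a simple threshold: nothing comes out until the incoming signal is strong enough. The short Python sketch below is only a cartoon of that idea, not a model of the NIST device; the function name and the threshold value are arbitrary placeholders.

```python
# Toy model (not the NIST hardware): a synapse relays a spike only when
# the incoming signal exceeds its threshold.

def synapse_output(incoming_signal: float, threshold: float = 1.0) -> bool:
    """Return True (an outgoing spike) if the incoming signal is strong enough."""
    return incoming_signal >= threshold

print(synapse_output(0.4))  # weak input  -> False, no spike
print(synapse_output(1.3))  # strong input -> True, spike
```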

RELATED: This Artificial Intelligence System Can ID Faces Even If They Are Disguised

What’s amazing is that the more two neurons talk to one another, the less power is required to cross the synapse and trigger the spiking response. That’s why our brains are so efficient at learning and remembering — it’s built into the wiring.
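A Hebbian-style cartoon of that strengthening, with made-up constants, might look like this: each time the synapse is used, the threshold for triggering it drops a little, so a weaker signal can fire it the next time.

```python
# Cartoon of use-dependent strengthening: every time the synapse is used,
# its threshold drops a little, so a weaker signal can trigger it next time.
# The decay rate and the floor value are arbitrary illustration numbers.

threshold = 1.0
for use in range(5):
    threshold = max(0.2, threshold * 0.8)  # strengthen with each use, down to a floor
    print(f"after use {use + 1}: threshold = {threshold:.2f}")
```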

Mike Schneider is a NIST physicist and lead author of the artificial synapse paper. He told Seeker that today’s most advanced AI — like the image recognition capabilities of Google’s Deep Dream — is achieved by deep neural network software running on conventional supercomputers. But those deep neural network algorithms would be many times more powerful and efficient if they ran on hardware that crunches data like a biological neural network.

The artificial synapse developed at NIST is an analog switch, just like real synapses. That means it takes in an incoming electrical signal and produces a complementary outgoing spike of energy. And like a real synapse, it can be “tuned” to produce a wide range of spikes based on varying signal strengths. Compare that to a conventional digital transistor on a silicon chip, which can only be “on” or “off,” 1 or 0.
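To make the contrast concrete, here is a toy sketch of a binary transistor next to a graded, analog element. The linear response curve is a placeholder for illustration, not the device’s real behavior.

```python
# Digital vs. analog response to the same input, purely for illustration.

def digital_transistor(signal: float) -> int:
    """A conventional transistor output: either off (0) or on (1)."""
    return 1 if signal >= 0.5 else 0

def analog_synapse(signal: float) -> float:
    """A graded response: the output scales with the input strength."""
    return min(1.0, max(0.0, signal))

for s in (0.2, 0.5, 0.9):
    print(s, digital_transistor(s), analog_synapse(s))
```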

A neuromorphic computer based on this analog technology would process data across billions of these artificial synapses, each talking to the others through tiny pulses of energy. But what’s truly remarkable about the NIST switches is just how little energy they use.

“The spiking energy in these synapses is one million times less than the human brain,” said Schneider. “That’s pretty striking.”



Credit: NIST

Plus, the artificial synapses can fire a billion times per second, compared to just 50 times a second for real brain synapses. “By having this efficiency, you open up the possibility of tackling more complex problems,” Schneider said. “You can make larger systems without needing your own giant power station.”
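Taking the article’s figures at face value, a quick back-of-the-envelope calculation shows what those two ratios imply together:

```python
# Back-of-the-envelope comparison using only the ratios quoted in the article.

artificial_rate_hz = 1e9   # "a billion times per second"
biological_rate_hz = 50    # "just 50 times a second"
energy_ratio = 1e-6        # spiking energy "one million times less" per event

speed_ratio = artificial_rate_hz / biological_rate_hz  # times faster
power_ratio = speed_ratio * energy_ratio               # relative power per synapse

print(f"firing-rate advantage: {speed_ratio:.0e}x")
print(f"power per synapse at full speed, relative to biology: {power_ratio:.0f}x")
```

On those numbers, an artificial synapse firing flat-out would draw only about 20 times the power of a biological one while running some 20 million times faster.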

The NIST synapse is a type of Josephson junction, a sandwich of two superconductors around an insulating layer. Josephson junctions have been around for a while, but what makes these synapses special is that the insulating layer is packed with magnetic clusters that allow the researchers to control how much energy is required to throw the switch, a quantity known as the critical current.

The magnetic clusters are “tunable” because each has a particular spin, similar to a magnetic orientation. When the clusters point in random directions, the critical current is high. Using pulses from a magnetic field, though, the researchers can align the clusters’ orientations, lowering the critical current.

This tuning process is essential because it replicates the natural “learning” process of real synapses, in which frequent interactions result in a lower critical current. For now, Schneider and his team will tune the synapses manually to achieve greater efficiencies for neural network algorithms, but the ultimate goal is for the synapses to tune themselves organically through the frequency and quality of interactions within the circuit.
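A cartoon of that tuning loop, with an invented linear relationship and arbitrary constants, might look like this: each field pulse improves the clusters’ alignment, and the critical current falls as alignment rises.

```python
# Cartoon of the tuning described above: field pulses nudge the magnetic
# clusters toward a common orientation, and the better aligned they are,
# the lower the critical current needed to throw the switch.
# The linear relationship and all constants are invented for illustration.

alignment = 0.0               # 0 = clusters point every which way, 1 = fully aligned
i_c_max, i_c_min = 1.0, 0.1   # critical current range, arbitrary units

def critical_current(alignment: float) -> float:
    return i_c_max - (i_c_max - i_c_min) * alignment

for pulse in range(5):
    alignment = min(1.0, alignment + 0.25)  # each field pulse improves alignment
    print(f"pulse {pulse + 1}: alignment = {alignment:.2f}, "
          f"critical current = {critical_current(alignment):.2f}")
```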

RELATED: Quantum Data Storage in a Single Atom Brings New Computing Era Closer to Reality

So how long until we have AI applications running on neuromorphic circuits? Sooner than you think. Schneider said most of the large-scale fabrication issues around manufacturing chips with millions or billions of artificial synapses have already been worked out by other researchers trying to make computers based on digital Josephson junctions.

“We’re optimistic that we can start to scale these devices somewhat aggressively,” said Schneider, who puts the figure at between five and 10 years.

One drawback of the low-power artificial synapses is that they only work at 4 Kelvin (-452 F). That’s not a problem for applications hosted in data centers, where liquid-helium cryocooling is an option. But don’t expect to see neuromorphic processors in your cell phone or self-driving car.



