
Deep Learning: Researchers Pursue Advantages of Analog Computing

By Dick Weisinger

Over the last decade, Artificial Intelligence, and Deep Learning algorithms in particular, has made enormous progress by adopting GPUs. GPUs, or graphics processing units, were originally designed, as the name suggests, to accelerate computer graphics. More recently, there has been a surge of interest in chips designed specifically for AI computing.

Researchers are investigating how to improve the performance and capacity of current deep learning implementations, which are digital and GPU-based, while reducing their power consumption. Analog computation is one of the areas under investigation.
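The core appeal of analog computing for neural networks is that a crossbar of programmable resistors can perform a matrix-vector multiply in a single physical step: weights are stored as device conductances, input voltages are applied to the rows, and Ohm's and Kirchhoff's laws sum the resulting currents on each column. The sketch below is a minimal, hypothetical simulation of that idea in Python; the function name and the Gaussian noise model are illustrative assumptions, not a description of any vendor's actual device.

```python
import numpy as np

def analog_matvec(weights, inputs, noise_std=0.01, rng=None):
    """Simulate a matrix-vector multiply on an analog crossbar array.

    Weights are stored as device conductances (here, the matrix entries
    plus Gaussian programming noise); applying input voltages to the
    rows makes each column wire sum currents I = G * V, so the whole
    multiply happens in one physical step rather than many digital MACs.

    The noise model (additive Gaussian on the conductances) is an
    illustrative assumption, not a characterization of a real device.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Programming the conductances is imprecise: each "device" deviates
    # slightly from its target weight value.
    conductances = weights + rng.normal(0.0, noise_std, size=weights.shape)
    # Kirchhoff's current law: currents from every row sum on each column.
    return conductances @ inputs

# Compare the noisy analog result with an exact digital multiply.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # one layer's weight matrix
x = rng.normal(size=8)        # input activations
print("digital:", W @ x)
print("analog :", analog_matvec(W, x, rng=rng))
```

Real crossbars add complications this sketch ignores: conductances cannot be negative, so signed weights are typically encoded as the difference between a pair of devices, and moving between digital activations and analog voltages requires DACs and ADCs at the edges of the array.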

Logan Wright, a research scientist at NTT Research, told VentureBeat that “from the early history of AI, people weren’t trying to think about how to make digital computers. They were trying to think about how we could emulate the brain, which of course is not digital. I have an analog system in my head, and it’s much more efficient at performing the types of calculations that go on in deep neural networks than today’s digital logic circuits.”

Tim Vehling, senior vice president at Mythic, told Digital Trends that “there’s a lot of ways to tackle the problem of A.I. computation. There’s no wrong way. But we do fundamentally believe that the keep-throwing-more-transistors-at-it, keep-making-the-process-nodes-smaller — basically the Moore’s Law approach — is not viable anymore. It’s starting to prove out already. So whether you do analog computers or not, companies will have to find a different approach to make next-generation products that are high computation, low power, et cetera.”

Researchers at MIT have created artificial “analog synapses” that respond one million times more quickly than synapses in the human brain. Biological synapses operate on roughly millisecond timescales, so a million-fold speedup puts these devices on the order of nanoseconds.

Murat Onen, an MIT postdoc researcher, said that “once you have an analog processor, you will no longer be training networks everyone else is working on. You will be training networks with unprecedented complexities that no one else can afford to, and therefore vastly outperform them all. In other words, this is not a faster car, this is a spacecraft.”
