Neuromorphic Chips: Using Animal Brains as a Model for Computing

By Dick Weisinger

Strong interest in Artificial Intelligence and Machine Learning is driving rapid advances in how the basic elements of computers are architected.  GPUs are one example: a GPU consists of a large number of processor cores that can all work in parallel and are tuned for high performance on very specific kinds of problems, like image processing.  While originally developed primarily for graphics processing, GPUs are increasingly being used for other computationally intensive problems in machine learning.
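The data-parallel style GPUs are built for can be illustrated with a toy example: the same brightness adjustment is applied to every pixel of an image at once, rather than one pixel at a time in a serial loop. This is only a sketch of the idea; NumPy's vectorized operations stand in for a real GPU kernel, and the function names are illustrative, not from any particular library.

```python
import numpy as np

def brighten_serial(image, delta):
    """CPU-style: visit each pixel in turn, one at a time."""
    out = image.copy()
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = min(out[i, j] + delta, 255)
    return out

def brighten_parallel(image, delta):
    """GPU-style: one operation applied across all pixels at once."""
    return np.minimum(image + delta, 255)

# A tiny 2x2 "image" of grayscale values.
image = np.array([[10, 250], [100, 200]], dtype=np.int64)

# Both versions compute the same result; the second expresses the
# work in a form that many cores could execute in parallel.
assert np.array_equal(brighten_serial(image, 10),
                      brighten_parallel(image, 10))
```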

Our current concept of how a computer works was first conceived by Turing and von Neumann in the 1940s.  In the von Neumann model of computing, a central processing unit, or CPU, uses internal registers to process data.  The CPU has random access to memory and interprets what it retrieves as either instructions or data.
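That model can be sketched as a toy fetch-and-execute loop: a single memory holds both instructions and data, and the CPU interprets what it fetches based on where the program counter points. The instruction set below is invented purely for illustration.

```python
# One shared memory holds both instructions (addresses 0-3)
# and data (addresses 4-6), as in the von Neumann model.
memory = [
    ("LOAD", 4),     # address 0: load memory[4] into the register
    ("ADD", 5),      # address 1: add memory[5] to the register
    ("STORE", 6),    # address 2: store the register to memory[6]
    ("HALT", None),  # address 3: stop
    7,               # address 4: data
    35,              # address 5: data
    0,               # address 6: result goes here
]

register = 0
pc = 0  # program counter

while True:
    op, arg = memory[pc]        # fetch: memory supplies the instruction...
    pc += 1
    if op == "LOAD":
        register = memory[arg]  # ...and the same memory supplies the data
    elif op == "ADD":
        register += memory[arg]
    elif op == "STORE":
        memory[arg] = register
    elif op == "HALT":
        break

assert memory[6] == 42  # 7 + 35, computed by the toy CPU
```

Note that every `LOAD` and `STORE` crosses between the CPU and memory, which is exactly the traffic Burr describes below as the bottleneck.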

Geoffrey Burr, a principal research staff member at IBM Research, said that “for many problems going forward, (von Neumann hardware) will still be the right solution.  But there’s an enormous amount of work that needs to be done to make those (intelligent-like) algorithms work in software on regular von Neumann hardware. The problem is that you need this steady stream of data through the bus. So, you’re spending a lot of energy and time shipping that data in and out. It would be ideal to bring the computation to where the data is.  That’s where we see the opportunities for these neuromorphic systems. It will accelerate machine learning.”

Neuromorphic computers offer an alternative to the von Neumann design.  Traditional chips are reaching the performance limits of Moore’s law, and neuromorphic chips offer a path to further gains.  Neuromorphic design uses biological brains as a model and is expected to be especially useful for applications like artificial intelligence.

In 2015, IBM released information about research into a neuromorphic computer built from 48 chips with 4,096 cores and 48 million connections, roughly the number of connections found in the brain of a rat.

Brian Van Essen, a computer scientist at the Lawrence Livermore National Laboratory, said: “What does a neuro-synaptic architecture give us? It lets us do things like image classification at a very, very low power consumption.”

But widespread use of this kind of neuromorphic chip may be a long way off. Yann LeCun, a researcher at NYU, commented: “I’m all in favour of building special-purpose chips for running neural nets. But I think people should build chips to implement algorithms that we know work at state of the art level.  This avenue of research is not going to pan out for quite a while, if ever. They may get neural net accelerator chips in their smartphones soonish, but these chips won’t look at all like the IBM chip. They will look more like modified GPUs.”

