
Artificial Intelligence: Newer Chip Designs Focus on Turbo-Charging AI Algorithm Processing

By Dick Weisinger

Intelligent Processing Units (IPUs) are computer chips designed with AI algorithms in mind. The chips are able to run AI algorithms ten times faster than the most powerful GPU chips. IPUs were introduced by UK-based Graphcore.

Nigel Toon, CEO of Graphcore, said that “the architecture of the hardware is quite simple and straightforward. You can’t simply come up with the hardware, and then try to figure out how to write the software with it.”

Standard CPUs are based on a scalar architecture, and GPUs are based on a vector architecture. IPUs instead use a graph architecture. This approach allows them to target specific AI algorithms, such as neural networks, Bayesian networks, and Markov fields. IPUs support both the training and inference steps of AI.
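The scalar/vector/graph distinction can be sketched in a few lines of code. This is a rough conceptual illustration only, not Graphcore's actual programming model: the function names and the minimal dict-based graph runner below are invented for this example, and NumPy's bulk matrix product merely stands in for vector-style hardware execution.

```python
import numpy as np

# Scalar (CPU-style): one multiply-accumulate at a time, in a loop.
def dense_layer_scalar(weights, inputs):
    rows, cols = len(weights), len(weights[0])
    out = [0.0] * rows
    for i in range(rows):
        for j in range(cols):
            out[i] += weights[i][j] * inputs[j]
    return out

# Vector (GPU-style): the same dense layer as one bulk matrix-vector product.
def dense_layer_vector(weights, inputs):
    return np.asarray(weights) @ np.asarray(inputs)

# Graph (IPU-style, conceptually): the model is expressed as a graph of named
# operations; independent nodes could in principle run in parallel.
def run_graph(graph, feeds):
    results = dict(feeds)
    for name, (op, args) in graph.items():
        results[name] = op(*(results[a] for a in args))
    return results

w = [[1.0, 2.0], [3.0, 4.0]]
x = [1.0, 1.0]
graph = {"y": (dense_layer_vector, ("w", "x"))}

print(dense_layer_scalar(w, x))                  # [3.0, 7.0]
print(run_graph(graph, {"w": w, "x": x})["y"])   # [3. 7.]
```

All three compute the same dense layer; the difference is the unit of work the hardware is asked to schedule: single numbers, whole vectors, or nodes of an operation graph.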

GPUs are currently all the rage and are used for most AI projects. But GPU chips were designed to optimize the processing of graphics images and have only recently been applied to AI algorithms.

Andrew Feldman, CEO of Cerebras, said that “I don’t think the GPU is very good for machine learning. It’s just better than Intel’s [central processing unit], but the GPU represents 25 years of optimization for a different problem.”

