
Technology and Computing: Google Challenges Nvidia with TPU Chip

By Dick Weisinger

Graphics Processing Units (GPUs) are specialized computer chips designed to handle large-scale parallel processing.  Nvidia introduced the concept of the GPU in 1999 as a video-card processor for quickly rendering images on PCs.  Since then, the parallel processing capabilities of GPUs have found applications well beyond graphics.
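The pattern GPUs accelerate is data parallelism: the same operation applied independently to every element of a large data set, so the work can be spread across thousands of cores. A minimal illustrative sketch of that pattern in plain Python (not actual GPU code; the pixel values here are made up for illustration):

```python
# Data-parallel pattern: one operation, applied independently to each element.
# Because no element depends on another, a GPU could process all of them at once.
pixels = list(range(8))                       # stand-in for image pixel values
brightened = [p * 2 + 1 for p in pixels]      # same op per element, no dependencies
print(brightened)  # [1, 3, 5, 7, 9, 11, 13, 15]
```

The absence of dependencies between elements is what lets a GPU execute the same instruction across many data items simultaneously, where a CPU would step through them largely one at a time.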

In fact, the explosive growth of areas like deep learning, analytics, and artificial intelligence can be partially attributed to GPUs. GPUs are now being applied to autonomous vehicles, drones, and robots, and Nvidia has continued to have a lock on the GPU market.

But recently Google introduced a new chip that it calls a TPU (Tensor Processing Unit). TPUs are even more narrowly targeted at artificial intelligence problems than GPUs: they are custom chips designed to optimize the kinds of math that machine learning algorithms depend on.  A single TPU device is actually composed of four chips, yet just one device was benchmarked as 12,000 times faster than IBM's Deep Blue supercomputer.  Google is rolling TPUs out into its cloud, grouping 64 TPU devices into something that it's calling a "pod".  A "pod" was clocked as delivering 11.52 petaflops of processing power.
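The pod figures above imply a per-device rate, which can be checked with quick back-of-the-envelope arithmetic (assuming, as stated, 64 TPU devices per pod, 11.52 petaflops per pod, and four chips per device):

```python
# Back-of-the-envelope check of the pod figures cited above.
TPU_DEVICES_PER_POD = 64       # stated pod size
POD_PETAFLOPS = 11.52          # stated pod throughput
CHIPS_PER_DEVICE = 4           # stated chips per TPU device

# 1 petaflop = 1,000 teraflops
teraflops_per_device = POD_PETAFLOPS * 1000 / TPU_DEVICES_PER_POD
teraflops_per_chip = teraflops_per_device / CHIPS_PER_DEVICE

print(teraflops_per_device)  # 180.0 teraflops per TPU device
print(teraflops_per_chip)    # 45.0 teraflops per chip
```

So the pod number works out to 180 teraflops per four-chip TPU device, or 45 teraflops per chip.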

Fei-Fei Li, chief scientist at Google Cloud and the director of Stanford’s AI Lab, said that “TPUs deliver a staggering 128 teraflops, and are built for just the kind of number crunching that drives machine learning today.”

