AI Chips: A Cost Spectrum Ranging from $1 to $200K
The huge processing requirements of many AI algorithms have driven intense interest in creating specialized AI chips optimized for specific classes of algorithms. Such specialized chips can process those algorithms far more efficiently than general-purpose chips.
Consider the state of AI processing in 2015. Naveen Kumar, a senior staff engineer on the Google Brain project, recounts how training an AI model on Google’s specialized TPU chips took more than three weeks. With today’s more advanced version of the TPU, it takes under 30 seconds. That is an advance of roughly five orders of magnitude in five years.
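The order-of-magnitude claim can be checked directly from the two figures quoted above (three weeks versus 30 seconds); the sketch below just does that arithmetic:

```python
import math

# Source figures: training took "more than three weeks" on the
# 2015-era TPU versus "under 30 seconds" on today's version.
old_seconds = 3 * 7 * 24 * 3600   # three weeks = 1,814,400 seconds
new_seconds = 30

speedup = old_seconds / new_seconds   # 60,480x
orders = math.log10(speedup)          # ~4.8 orders of magnitude

print(f"speedup: {speedup:,.0f}x ({orders:.1f} orders of magnitude)")
```

Three weeks over 30 seconds works out to a factor of about 60,000, i.e. just under five orders of magnitude, consistent with the rounded figure in the text.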
The number of uses for AI is growing. AI is even being embedded in low-cost devices sitting ‘at the edge’, on the network’s perimeter.
Michael Azoff, analyst at Kisaco Research, said that “the need for AI hardware accelerators has grown with the adoption of DL applications in real-time systems where there is need to accelerate DL computation to achieve low latency (less than 20ms) and ultra-low latency (1-10ms). From a commercial viewpoint, the small edge is about selling millions of products and the cost of the AI chip component may be as low as $1, whereas a high-end GPU AI accelerator ‘box’ for the data center may have a price tag of $200k.”
A report by Deloitte found that “besides being relatively inexpensive, standalone edge AI processors have the advantage of being small. Some are small enough to fit on a USB stick; the largest is on a board about the size of a credit card. They are also relatively low power, drawing between 1 to 10 watts. For comparison, a data center cluster (albeit a very powerful one) of 16 GPUs and two CPUs costs US$400,000, weighs 350 pounds, and consumes 10,000 watts of power.”
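The spread between the two ends of this spectrum can be made concrete from the quoted numbers: a $1 edge chip drawing 1–10 W versus a $400,000, 10,000 W data-center cluster. A quick calculation, using only the figures from the Azoff and Deloitte quotes:

```python
# Figures quoted in the article:
cluster_cost_usd = 400_000    # 16-GPU / 2-CPU data-center cluster
cluster_power_w = 10_000
edge_cost_usd = 1             # low end of the edge AI chip cost
edge_power_w = (1, 10)        # quoted power range for edge processors

cost_ratio = cluster_cost_usd / edge_cost_usd          # 400,000x
power_ratio_low = cluster_power_w / edge_power_w[1]    # 1,000x
power_ratio_high = cluster_power_w / edge_power_w[0]   # 10,000x

print(f"cost spread: {cost_ratio:,.0f}x")
print(f"power spread: {power_ratio_low:,.0f}x to {power_ratio_high:,.0f}x")
```

In other words, the cluster costs some 400,000 times more and draws 1,000 to 10,000 times the power of a single standalone edge AI processor, which is why the article frames AI hardware as a spectrum rather than a single market.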