Access and Feeds

Algorithm Performance: Chip Metrics can be Deceptive

By Dick Weisinger

Benchmarks are key for side-by-side comparisons of hardware. But benchmark results aren't always what they seem to be, and they often need to be viewed in context with other factors.

Consider the TOPS metric. TOPS, or Tera (trillions of) Operations Per Second, is a common metric for measuring the performance of high-end self-contained chips, Systems on a Chip (SoCs), and is frequently used for evaluating AI-specific chips. TOPS measures the maximum achievable throughput, not actual throughput; it is an abstract measure of capability. Unfortunately, TOPS provides only limited information, because it won't tell you how specific algorithms will perform on the chip.
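To see why TOPS is an abstract ceiling rather than a real-world measurement, it helps to look at how the number is typically derived. A common back-of-the-envelope formula multiplies the chip's multiply-accumulate (MAC) units by two operations per MAC per cycle and by the clock rate. The sketch below uses hypothetical chip figures chosen only for illustration:

```python
def peak_tops(mac_units: int, clock_hz: float, ops_per_mac: int = 2) -> float:
    """Theoretical peak throughput in trillions of operations per second.

    Each MAC unit is usually counted as two operations per cycle
    (one multiply plus one accumulate).
    """
    return mac_units * ops_per_mac * clock_hz / 1e12

# Hypothetical accelerator: 16,384 MAC units clocked at 1.0 GHz
print(peak_tops(16_384, 1.0e9))  # 32.768 peak TOPS
```

Note that nothing in this arithmetic accounts for memory bandwidth, data movement, or how well a given neural network maps onto the hardware, which is exactly why the advertised number overstates what real workloads achieve.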

Ludovic Larzul, CEO at Mipsology, wrote for EETimes that “while it’s certainly possible to tweak a neural network to squeeze better performance out of a card, it is extremely unlikely you will ever get close to the peak TOPS listed by vendors. Attempting to get even 60 or 70 percent computing efficiency will be a massive time sink. If any change to the neural network happens, you will have to go back to square one to again optimize everything – and still, it may not even work for your application. The problem is particularly pronounced for small batch processing; you’d be lucky to get more than 15 percent of the peak TOPS.”
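Larzul's utilization figures can be made concrete with a short calculation. Taking a hypothetical card advertised at 100 peak TOPS (an assumed figure, not from the article), the deliverable throughput at the efficiency levels he mentions looks like this:

```python
def effective_tops(peak_tops: float, utilization: float) -> float:
    """Deliverable throughput at a given fraction of peak (0.0 to 1.0)."""
    return peak_tops * utilization

# Hypothetical 100-TOPS card at the utilization levels Larzul describes:
# 60-70% only after heavy tuning, ~15% for small-batch workloads.
for pct in (0.70, 0.60, 0.15):
    print(f"{pct:.0%} utilization -> {effective_tops(100, pct):.0f} TOPS")
```

The gap between the 100 on the spec sheet and the 15 delivered for small batches is the kind of discrepancy that makes a single headline number misleading.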

Performance typically depends on many factors, and TOPS statistics often skew the numbers to appear more favorable than they are.

Jeremy Horwitz, writing for VentureBeat, summarized it this way: "just as was the case in the console and computer performance wars of years past, relying on TOPS as a singular data point in assessing the AI processing potential of any solution probably isn't wise… While end users considering the purchase of AI-powered devices should look past simple numbers in favor of solutions that perform tasks that matter to them, businesses should consider TOPS alongside other metrics and features — such as the presence or absence of specific accelerators — to make investments in AI hardware that will be worth keeping around for years to come."

