
Artificial Intelligence: Designing Accurate Benchmarks

By Dick Weisinger

As startups and established chip makers alike design and build chips that accelerate AI workloads, benchmarking becomes essential as a way to compare the capabilities of the different chips.

Until recently, AI benchmarks have been flawed.

Anoop Saha, market development manager at Mentor, said that “in recent years, we have seen a lot of advancements in hardware choices available for AI – from GPUs to FPGAs to custom hardware ASICs. However, the existing benchmarks are not suitable for measuring how the hardware will work for AI applications – either in training or in inference.”

To come up with better AI chip benchmarks, a consortium of more than 70 academic, research, and commercial organizations formed MLPerf, a group focused on developing benchmarks for machine-learning performance criteria such as speed and energy efficiency.
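In rough terms, a benchmark of this kind fixes a workload and then measures how quickly and how consistently the hardware completes it. The sketch below is a minimal, hypothetical illustration of the speed side of that idea in plain Python; the dummy_model function and batch shape are placeholder assumptions, not MLPerf code, and a real MLPerf run covers far more (accuracy targets, defined scenarios, power measurement, and so on).

import time
import statistics

def dummy_model(batch):
    # Hypothetical stand-in for a real model's forward pass.
    return [sum(x) for x in batch]

def benchmark_inference(model, batch, num_runs=100, warmup=10):
    """Measure per-batch latency and overall throughput for one model/batch pair."""
    # Warm-up runs let caches and accelerators reach steady state
    # before any measurements are taken.
    for _ in range(warmup):
        model(batch)

    latencies = []
    for _ in range(num_runs):
        start = time.perf_counter()
        model(batch)
        latencies.append(time.perf_counter() - start)

    total_time = sum(latencies)
    return {
        "mean_latency_ms": 1000 * statistics.mean(latencies),
        "p99_latency_ms": 1000 * sorted(latencies)[int(0.99 * num_runs) - 1],
        "throughput_samples_per_s": num_runs * len(batch) / total_time,
    }

if __name__ == "__main__":
    # A toy batch of 32 samples, each with 256 features.
    batch = [[float(i % 7) for i in range(256)] for _ in range(32)]
    print(benchmark_inference(dummy_model, batch))

Running the same model and batch through a sketch like this on two different accelerators gives a crude, like-for-like comparison of latency and throughput, which is the kind of comparison the formal benchmarks standardize.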

David Kanter, MLPerf Inference Working Group co-chair, said that “if you’re a researcher or you’re an engineer designing these next generation systems, it’s important to know what are the workloads and metrics that matter, because ultimately at the end of the day, all the engineers working on this stuff are very smart and talented folks, but we’ve got to point them in the right direction and make sure they’re optimizing for the right things so that the solutions that come out — whether it’s 2 or 5 years from now — are designed for the workloads of today and tomorrow.”

