Access and Feeds: Huang's Law: A New Chapter in Computing

By Dick Weisinger

In the realm of computing, Moore’s Law has long been the guiding principle, predicting a doubling of transistors on a chip approximately every two years. However, as we venture further into the era of artificial intelligence (AI), a new principle has emerged: Huang’s Law.

Named after Jensen Huang, CEO of NVIDIA, Huang's Law forecasts that the performance of graphics processing units (GPUs), particularly in AI applications, will more than double every two years. The law attributes these gains as much to advances in AI software as to improvements in the underlying chip hardware.

Whereas Moore's Law describes transistor density in general-purpose chips such as CPUs, Huang's Law is specific to GPUs. GPUs are at the heart of AI and machine learning workloads, making them a critical component of the next generation of computing.

Over the past decade, NVIDIA’s GPU AI-processing prowess has grown 1000-fold, a testament to the power of Huang’s Law. This law suggests that the speedups we have seen in “single chip inference performance” will continue.
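A quick back-of-envelope calculation puts that 1,000-fold figure in perspective: compounding 1,000x over ten years works out to roughly a doubling every year, comfortably ahead of Moore's Law's two-year cadence. The short Python sketch below works through the arithmetic; the ten-year span and 1,000x figure come from the article, while the Moore's Law baseline is included purely for comparison.

import math

# Back-of-envelope: what annual speedup does a 1,000x gain over 10 years imply?
years = 10
total_gain = 1000.0

# Compound annual growth factor: 1000^(1/10) is roughly 1.995x per year
annual_factor = total_gain ** (1 / years)

# Implied doubling time: log(2) / log(annual_factor) is roughly 1.0 years
doubling_time = math.log(2) / math.log(annual_factor)

# Moore's Law baseline for comparison: doubling every 2 years, about 1.41x per year
moore_annual_factor = 2 ** (1 / 2)

print(f"Implied annual speedup:   {annual_factor:.2f}x")
print(f"Implied doubling time:    {doubling_time:.2f} years")
print(f"Moore's Law annual pace:  {moore_annual_factor:.2f}x")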

While Moore’s Law has been instrumental in guiding the progress of traditional computing, Huang’s Law appears to be setting the pace for the future, particularly in the realm of AI. As we continue to push the boundaries of what’s possible with technology, principles like Huang’s Law will be crucial in charting our course.
