Deep Learning has enabled impressive advances in areas like healthcare, finance, machine vision, and self-driving cars. But to succeed, the algorithms need massive amounts of data and compute. Most Deep Learning projects today rely on specialized GPUs that are much more expensive than standard commodity CPUs.
Now a startup called Neural Magic has devised a way to run Deep Learning algorithms efficiently on standard CPUs. The company advertises ‘GPU speeds without GPUs’.
Nir Shavit, MIT professor and co-founder of Neural Magic, said that “our vision is to enable data science teams to take advantage of the ubiquitous computing platforms they already own to run deep learning models at GPU speeds — in a flexible and containerized way that only commodity CPUs can deliver… Yes, running on a commodity processor you get the cost savings of running on a CPU, but more importantly, it eliminates all of these huge commercialization problems and essentially this big limitation of the whole field of machine learning of having to work on small models and small data sets because the accelerators are kind of limited. This is the big unlock of Neural Magic.”
It remains to be seen whether the company can live up to the hype. Neural Magic says its approach has already been demonstrated on typically compute-intensive applications like image classification and object detection. The company reports that it can run these applications as fast as GPUs do, and that its approach offers greater flexibility, handling larger images and video streams.