Deep Learning has enabled impressive AI results, but the technique may be hitting a roadblock. Research from a group at MIT found that scaling Deep Learning beyond its current capabilities is limited by computational power. To push the technology further, the group concluded that "dramatically" more computationally efficient techniques must be developed.
While this may sound like a dire prediction for the technology, the authors are optimistic that the theory and algorithms that underpin the approach will continue to improve.
The MIT group wrote that “deep learning is not computationally expensive by accident, but by design. The same flexibility that makes it excellent at modeling diverse phenomena and outperforming expert models also makes it dramatically more computationally expensive. Despite this, we find that the actual computational burden of deep learning models is scaling more rapidly than (known) lower bounds from theory, suggesting that substantial improvements might be possible.”
The group studied academic papers on Deep Learning and concluded that in order to advance Deep Learning capabilities, two factors need to improve: algorithmic efficiency and computational power. The MIT team noted that if algorithms keep improving at the rate measured in the past, three years of algorithmic improvement are equivalent to a ten-fold increase in computing power.
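To make the equivalence concrete, a quick back-of-the-envelope calculation shows what that rate implies. The figures below are illustrative arithmetic derived from the article's "three years ≈ 10x compute" claim, not numbers from the MIT paper itself:

```python
# If three years of algorithmic progress equal a 10x increase in computing
# power, the implied annual improvement factor is the cube root of 10.
annual_factor = 10 ** (1 / 3)  # ~2.15x per year

print(f"Implied annual algorithmic gain: {annual_factor:.2f}x")

# Equivalent compute multiplier after n years at that constant rate:
for years in (1, 3, 6):
    print(f"{years} year(s) -> {annual_factor ** years:.1f}x compute-equivalent")
```

At this hypothetical rate, six years of algorithmic progress alone would match a hundred-fold increase in raw computing power, which is why the authors treat efficiency gains as a serious complement to hardware scaling.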