
Reservoir Computing: Turbocharging Neural Networks for Large-Scale Simulations

By Dick Weisinger

Combining nonlinear system analysis with recurrent neural network theory is an advanced area of AI called Reservoir Computing. It's a machine learning approach that has been in use since the early 2000s and is often applied to otherwise intractable problems. While Reservoir Computing would eventually solve a problem, it has been an exceedingly resource- and time-intensive process.
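To make the idea concrete, here is a minimal sketch of one common flavor of reservoir computing, an echo state network: a fixed random recurrent network is driven by the input, and only a linear readout is trained. The reservoir size, spectral-radius scaling, and sine-wave task below are illustrative choices, not settings from the research described in this article.

```python
import numpy as np

rng = np.random.default_rng(0)

n_res = 200                                 # reservoir size (illustrative)
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))   # input weights: random, never trained
W = rng.uniform(-0.5, 0.5, (n_res, n_res))  # recurrent weights: random, never trained
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1 for stability

t = np.arange(0, 40, 0.1)
u = np.sin(t)                               # input signal
target = np.roll(u, -1)                     # one-step-ahead prediction target

# Drive the reservoir through time. The first `warmup` states are discarded --
# this is the data "priming" the article says costs so much.
warmup = 100
x = np.zeros(n_res)
states = []
for value in u:
    x = np.tanh(W @ x + W_in[:, 0] * value)
    states.append(x.copy())
X = np.array(states)[warmup:-1]
y = target[warmup:-1]

# Only the linear readout is trained, via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
mse = float(np.mean((X @ W_out - y) ** 2))
print(mse)
```

The key point is that training touches only `W_out`; the expensive parts are simulating the large random reservoir and feeding it warm-up data, which is exactly what the new technique attacks.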

A new technique pioneered at Ohio State University to improve the speed of Reservoir Computing has bested the previous approach by as much as a factor of one million, even while using significantly fewer computing resources.

Daniel Gauthier, professor of physics at Ohio State, said that “we can perform very complex information processing tasks in a fraction of the time using much less computer resources compared to what reservoir computing can currently do. And reservoir computing was already a significant improvement on what was previously possible.”

Part of the reason why neural network computing has been slow is that while researchers knew the technique worked, they couldn't fully understand the internals of what the algorithm did. Gauthier and his team tried to identify which parts of the neural network were not essential to the process and could be eliminated. As a result of that effort, they discovered that a large part of the machinery behind the algorithm wasn't needed.

Much of the speedup comes from drastically reducing the amount of "warm-up" data priming typically needed to get the algorithm ready to process the actual problem at hand.

Gauthier said that “for our next-generation reservoir computing, there is almost no warming time needed. Currently, scientists have to put in 1,000 or 10,000 data points or more to warm it up. And that’s all data that is lost, that is not needed for the actual work. We only have to put in one or two or three data points. What’s exciting is that this next generation of reservoir computing takes what was already very good and makes it significantly more efficient.”
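A rough sketch of the idea behind next-generation reservoir computing is to replace the large random reservoir with features built directly from a few time-delayed data points and their nonlinear (here quadratic) combinations, trained with a linear readout. The delay count, feature set, and sine-wave task below are illustrative assumptions, not the published construction.

```python
import numpy as np

t = np.arange(0, 40, 0.1)
u = np.sin(t)                 # toy input signal

k = 2                         # "warm-up" is only k past points, not thousands

def features(series, i):
    """Constant + the k most recent values + their unique quadratic products."""
    lin = series[i - k + 1 : i + 1]
    quad = np.outer(lin, lin)[np.triu_indices(k)]
    return np.concatenate(([1.0], lin, quad))

# Build the feature matrix and one-step-ahead targets.
X = np.array([features(u, i) for i in range(k - 1, len(u) - 1)])
y = u[k:]

# As in standard reservoir computing, only a linear readout is trained.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ y)
mse = float(np.mean((X @ W_out - y) ** 2))
print(mse)
```

Because the feature vector depends on only `k` past samples, the method needs just a handful of warm-up points, which mirrors the "one or two or three data points" Gauthier describes.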

