
AI Generalization: Can Algorithms Be Taught to Regret?

By Dick Weisinger

Been there. Done that. That’s how most AI algorithms to date work. The recipe is to collect as many examples as possible, maybe thousands or even millions of previous instances of something, and from those examples teach the AI algorithm how the object or event should be classified or diagnosed. The AI algorithm learns by building a massive understanding of how humans have dealt with a specific type of thing in the past. The result? Algorithms that can recognize cancer in X-rays or identify images, often with accuracy better than that of a single human.
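As a rough illustration of that recipe, the sketch below trains a classifier on a pile of labeled examples and then predicts on examples it hasn’t seen. Everything here is a placeholder: the synthetic features and labels stand in for real X-ray data, and the model choice is arbitrary; it is not the specific systems mentioned above.

```python
# Minimal sketch of the "collect many labeled examples, then classify" recipe.
# The data is synthetic; a real system would use thousands of labeled X-ray images.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 32))           # stand-in for extracted image features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # stand-in for "diagnosis" labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)               # learn from the collected past examples

print("accuracy on held-out examples:", accuracy_score(y_test, model.predict(X_test)))
```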

But while accuracy is very high, the results are very specific and limited to narrow tasks. Asking those algorithms to do anything outside of those task realms just doesn’t work. As a result, AI researchers are looking for ways to make AI smarter by enabling it to use and extrapolate from its knowledge of similar, although not identical, past training. The ability of an AI to handle events and objects that it hasn’t previously been exposed to is called ‘AI generalization’.

Near generalization doesn’t push the envelope too far. For example, an algorithm may be trained on X-rays that originated entirely from a population living in the US and then applied to a set of X-rays taken with a different type of equipment and from a population residing in a different country. The expectation is that the algorithm may not do as well when it is applied to a data set that differs from the type of data used in the original training.

Researchers at Carnegie Mellon found that training algorithms with ‘noisy’ data can improve results when the algorithms are later applied to populations that differ slightly from the original training population.
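One way to picture that idea (this is an illustrative sketch, not the Carnegie Mellon method itself) is to inject random noise into the training data and then compare how a noise-trained model and a plainly trained model fare on a slightly shifted population. The data, the amount of noise, and the amount of shift below are all made up for the example.

```python
# Hypothetical illustration: train one model on clean data and one on
# noise-augmented data, then compare both on a shifted "other population".
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)

def make_population(n, shift=0.0):
    # shift moves the feature distribution, mimicking different equipment/demographics
    X = rng.normal(loc=shift, size=(n, 20))
    y = (X[:, 0] - X[:, 1] > 0).astype(int)
    return X, y

X_train, y_train = make_population(4000)             # original training population
X_shifted, y_shifted = make_population(1000, shift=0.5)  # slightly different population

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

X_noisy = X_train + rng.normal(scale=0.5, size=X_train.shape)  # inject training noise
noisy_model = LogisticRegression(max_iter=1000).fit(X_noisy, y_train)

print("clean-trained, shifted population:", accuracy_score(y_shifted, clean_model.predict(X_shifted)))
print("noise-trained, shifted population:", accuracy_score(y_shifted, noisy_model.predict(X_shifted)))
```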

But building a smarter AI that really can learn is much harder than achieving ‘near generalization’.

Luke Hewitt, a graduate student at MIT, said that “it is a bad idea to intuit how broadly intelligent a machine must be, or have the capacity to be, based solely on a single task. The checkers-playing machines of the 1950s amazed researchers and many considered these a huge leap towards human-level reasoning, yet we now appreciate that achieving human or superhuman performance in this game is far easier than achieving human-level general intelligence. In fact, even the best humans can easily be defeated by a search algorithm with simple heuristics. Human or superhuman performance in one task is not necessarily a stepping-stone towards near-human performance across most tasks.”

Despite the challenge, other researchers, such as those at Google’s DeepMind, are trying to build more generalized AI algorithms. Their technique involves a trial-and-error sort of learning. The algorithm starts with a minimal set of skills and, through trial and error, eventually learns what it takes to solve a problem successfully.
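The sketch below is a toy version of that trial-and-error idea: tabular Q-learning on a tiny corridor, where the agent starts knowing nothing and gradually learns which moves reach the goal. DeepMind’s actual work uses far richer simulated environments and deep networks; the states, actions, and rewards here are purely hypothetical.

```python
# Toy sketch of trial-and-error learning: tabular Q-learning on a short corridor.
import random

N_STATES = 6          # positions 0..5; the goal sits at position 5
ACTIONS = [-1, +1]    # move left or move right
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # learned value of each action in each position

alpha, gamma, epsilon = 0.5, 0.9, 0.1

def choose_action(state):
    # explore occasionally (and break ties randomly); otherwise do what has worked so far
    if random.random() < epsilon or Q[state][0] == Q[state][1]:
        return random.randrange(2)
    return 0 if Q[state][0] > Q[state][1] else 1

for episode in range(500):                  # many repeated attempts at the same task
    state = 0
    while state != N_STATES - 1:
        a = choose_action(state)
        nxt = min(max(state + ACTIONS[a], 0), N_STATES - 1)
        reward = 1.0 if nxt == N_STATES - 1 else 0.0
        # nudge the estimate toward "reward plus the best we expect afterwards"
        Q[state][a] += alpha * (reward + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

print("learned policy:",
      ["right" if Q[s][1] >= Q[s][0] else "left" for s in range(N_STATES - 1)])
```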

The DeepMind team experimented with robotics and computer vision. The goal was for a robot to move from a start point to an end point, but along the way a small step blocks its movement. Ultimately the robot finds that it can move a flat object to a location where it can use it as a ramp to climb up and get past the step. Researchers ran hundreds of thousands of scenarios like this and were able to teach the robot to solve challenges involving basic mobility skills.

But is this really a more generalized technique, or just an algorithm with a wider repertoire of trainings? An algorithm running in real life that needs to respond in real time to a new situation doesn’t have the luxury of re-enacting the same new set of conditions thousands of times before making a final decision. Algorithms may be no different from humans who wish, after the fact, that they had made a different decision when faced with a new challenge. Can AI be taught to regret?
