Access and Feeds

Artificial Intelligence: Neural Network Short-Cuts Can Lead to Failures

By Dick Weisinger

Artificial Intelligence has made great progress over the last decade and is being used in applications like autonomous driving and medical diagnosis. But there has been a reluctance to trust AI algorithms or to hand control over to them because of their lack of 'explainability'.

A joint study by the University of Chicago and York University provides more information about why concerns over AI may be justified, especially for mission-critical applications. The study found that neural networks can be taught to 'see', but what they see and how they see it are vastly different from how a human processes visual information.

The study created 'Frankenstein' images that humans would immediately flag as wrong, but which a neural network readily identifies as the original object.

James Elder, a professor at York University, said that “Frankensteins are simply objects that have been taken apart and put back together the wrong way around. As a result, they have all the right local features, but in the wrong places… Neural network models tend to take ‘shortcuts’ when solving complex recognition tasks. While these shortcuts may work in many cases, they can be dangerous in some of the real-world AI applications we are currently working on with our industry and government partners.”
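As a rough illustration of the kind of test the researchers describe, the sketch below (not the study's actual code) scrambles an image into tiles and feeds both the intact and the rearranged version to an off-the-shelf ImageNet classifier. The pretrained ResNet-50, the 4x4 grid, and the placeholder file name object.jpg are assumptions made for the example, not details from the study; if local features alone drive the prediction, the scrambled image may still be classified with high confidence.

# Hypothetical sketch: compare a pretrained CNN's predictions on an intact
# image vs. a patch-scrambled version of the same image.
import random

import torch
from PIL import Image
from torchvision import models, transforms

# Standard ImageNet preprocessing for a pretrained classifier.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def scramble(img_tensor: torch.Tensor, grid: int = 4) -> torch.Tensor:
    # Cut the image into grid x grid tiles and reassemble them in random
    # order, preserving local features but breaking the global configuration.
    c, h, w = img_tensor.shape
    th, tw = h // grid, w // grid
    tiles = [img_tensor[:, i*th:(i+1)*th, j*tw:(j+1)*tw]
             for i in range(grid) for j in range(grid)]
    random.shuffle(tiles)
    rows = [torch.cat(tiles[r*grid:(r+1)*grid], dim=2) for r in range(grid)]
    return torch.cat(rows, dim=1)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.eval()

img = preprocess(Image.open("object.jpg").convert("RGB"))  # any test photo

with torch.no_grad():
    for name, x in [("intact", img), ("scrambled", scramble(img))]:
        probs = torch.softmax(model(x.unsqueeze(0)), dim=1)
        conf, cls = probs.max(dim=1)
        print(f"{name:9s} -> class {cls.item()} (confidence {conf.item():.2f})")

If the network reports a similar top class and confidence for both versions, that mirrors the shortcut behavior the study points to: the right local parts are enough, even when they are in the wrong places.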

Elder said that "our results explain why deep AI models fail under certain conditions and point to the need to consider tasks beyond object recognition in order to understand visual processing in the brain. We speculate that to match human configural sensitivity, networks must be trained to solve a broader range of object tasks beyond category recognition."

