
Explainable Artificial Intelligence (XAI): Letting Humans Have the Final Say

By Dick Weisinger

The accuracy of many of today’s AI algorithms has been uncannily good. But not always, and that’s the problem. It’s hard to put trust in something that isn’t understood and often can’t be identically reproduced. Mission-critical applications can’t tolerate that kind of uncertainty.

Many of today’s best AI algorithms are black boxes built on deep learning and layers of neural network interconnections. Ultimately, there is little visibility into how or why an algorithm arrived at a particular result.

Explainable Artificial Intelligence (XAI) is an approach to machine learning that builds into AI algorithms a way to report the steps and reasoning behind a particular recommendation or result.
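For a concrete sense of what such reporting can look like, here is a minimal sketch using the open-source SHAP library with scikit-learn; the model, data, and feature names are illustrative assumptions, not anything described in the article.

```python
# A minimal sketch of explainability, not the article's own method: use SHAP
# with scikit-learn to attach per-feature "reasons" to a single prediction.
# The data and feature names below are made up for illustration.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))    # 200 samples, 4 synthetic features
y = X[:, 0] + 0.5 * X[:, 1]      # only the first two features actually matter

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# TreeExplainer attributes a prediction to per-feature contributions,
# turning the model's output into something a human can inspect.
explainer = shap.TreeExplainer(model)
contributions = explainer.shap_values(X[:1])[0]  # explain the first sample

for name, value in zip(["feature_a", "feature_b", "feature_c", "feature_d"],
                       contributions):
    print(f"{name}: {value:+.3f}")
```

Instead of taking the model’s word for it, a reviewer can see that the prediction leans on feature_a and feature_b, which is exactly the kind of visibility a black box lacks.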

Jacob Wagner, IT director at Bluestem Brands, said that “walking people over the trust barrier is a challenge. The more information we have about why something was recommended, the easier that experience is.”

Ji Li, data science director at CLARA Analytics, said that “AI is not perfect. And although AI predictions can be very accurate, there will always be the possible case where the model is wrong. With explainability, the AI technology assists human beings in making quick, fact-based decisions but allows humans the capability to still use their judgment. With explainable AI, AI becomes a more useful technology because instead of always trusting or never trusting the predictions, humans are helping to improve the predictions every day.”
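The pattern Li describes, where the model assists, the human decides, and human feedback improves the model, can be sketched roughly as follows. The threshold, function names, and stub model here are all hypothetical, chosen only to make the flow concrete.

```python
# A rough human-in-the-loop sketch: high-confidence predictions pass through,
# low-confidence ones are routed to a person along with the explanation, and
# the human's verdict is saved as feedback for retraining. All names and the
# threshold are hypothetical, not from the article.
from dataclasses import dataclass

@dataclass
class Decision:
    label: int
    confidence: float
    explanation: str  # the "why" an XAI layer would supply

CONFIDENCE_THRESHOLD = 0.9  # assumed policy, not from the article
feedback = []               # (features, human_label) pairs for retraining

def decide(features, model_predict, human_review):
    label, confidence, explanation = model_predict(features)
    decision = Decision(label, confidence, explanation)
    if confidence >= CONFIDENCE_THRESHOLD:
        return decision  # trust the model on high-confidence cases
    # Otherwise the human has the final say, informed by the explanation.
    human_label = human_review(features, decision)
    feedback.append((features, human_label))  # improves the next model
    return Decision(human_label, 1.0, f"human override of: {explanation}")

# Toy usage with a stub model and a stub reviewer.
def stub_model(features):
    score = sum(features) / len(features)
    return (1 if score > 0 else 0, abs(score), f"mean feature value = {score:.2f}")

def stub_reviewer(features, decision):
    print(f"Review needed: {decision.explanation}")
    return decision.label  # here the human simply confirms

print(decide([0.2, 0.1, -0.05], stub_model, stub_reviewer))
```

The design choice mirrors the quote: the model is neither always trusted nor never trusted, and every human override becomes training signal for the next iteration.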
