Artificial Intelligence: Attempts to Make Black-Box Algorithms Explainable

By Dick Weisinger

Many recent successes of Artificial Intelligence have been stunning. But a major criticism of the technology is that AI results often cannot be fully explained and are not always reproducible. Even the designers of an AI algorithm are frequently unable to explain how it arrived at a specific result or recommended decision.

The black-box nature of many AI algorithms has led to hesitation in applying AI to mission-critical applications. Putting an algorithm into production or widespread use without understanding how it works carries legal and ethical consequences.

Abhijit Thatte, VP of Artificial Intelligence at Aricent, said that “with AI-powered systems increasingly making decisions such as credit card approval for an application, a self-driving car applying the brakes after getting closer to an obstacle, and parole recommendation for incarcerated felons, it has become vital for humans to understand the decision-making mechanism of the underlying AI to ascertain that the AI makes accurate and fair decisions.”

This uncertainty has driven a group of AI researchers to develop algorithms classified as Explainable AI (XAI): algorithms that let humans better understand why an algorithm came to a particular conclusion or result, a sort of algorithmic accountability.

Some of the factors that Explainable AI attempts to disclose, illustrated in the sketch below, include:

  • Why was a particular decision recommended as opposed to others
  • What are the confidence levels in the recommendation
  • What criteria were used to arrive at the decision
  • What are the strengths and weaknesses of the algorithm
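
To make those factors concrete, here is a minimal sketch (not from the article) of the kind of output an explainability tool surfaces. It uses scikit-learn with an inherently interpretable model, a shallow decision tree, on hypothetical credit-approval data; the feature names and values are invented for illustration. The pattern it demonstrates — reporting the decision, a confidence level, the criteria weighted, and the exact rules applied — is what XAI techniques aim to recover even for black-box models.

# A minimal sketch, assuming scikit-learn and hypothetical credit data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical
X = np.array([[55, 0.30, 4], [20, 0.70, 1], [80, 0.20, 9],
              [35, 0.55, 2], [60, 0.25, 6], [25, 0.65, 1]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = approve, 0 = deny

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

applicant = np.array([[45, 0.40, 3]])
decision = model.predict(applicant)[0]
confidence = model.predict_proba(applicant)[0][decision]

# "What criteria were used": the relative weight of each input feature.
for name, weight in zip(feature_names, model.feature_importances_):
    print(f"{name}: importance {weight:.2f}")

# "Why this decision": the path of threshold tests the tree applied.
tree = model.tree_
for node in model.decision_path(applicant).indices:
    if tree.children_left[node] != tree.children_right[node]:  # internal node
        name = feature_names[tree.feature[node]]
        op = "<=" if applicant[0, tree.feature[node]] <= tree.threshold[node] else ">"
        print(f"rule: {name} {op} {tree.threshold[node]:.2f}")

print(f"decision: {'approve' if decision else 'deny'} "
      f"(confidence {confidence:.0%})")

A decision tree makes this easy because its logic is a readable chain of rules; for opaque models such as deep neural networks, XAI research pursues the same goals with post-hoc techniques that approximate the model's behavior around a given prediction.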