Artificial Intelligence: The Importance of Explainability

By Dick Weisinger

Explainable AI (XAI) refers to artificial intelligence whose results come with the reasoning behind how they were reached. Early AI algorithms, especially in the areas of Machine Learning and Neural Networks, could often produce stunningly accurate results but provided no way to trace the steps taken to arrive at the solution. When the process cannot be fully understood, the algorithm is classified as a ‘black box’, and that raises a question of trust. For mission-critical or life-dependent decisions, relying on black-box AI solutions can be a risk.

The reasons why explainable AI is important include:

  • Humans can be confident that the assumptions, reasoning, and data used to derive a solution are sound.
  • An algorithm’s methodology must be explainable in order to comply with regulatory requirements.
  • Transparency allows the algorithm to be inspected so that unconscious biases or ethically problematic data influencing the results can be identified and removed.
  • Understanding the decisions and reasoning used by the algorithm can help humans approach decision analysis for similar problems.
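The contrast with a black box can be illustrated with a toy sketch of an inherently explainable model: a linear scorer whose output decomposes into per-feature contributions that a human can read directly. The feature names, weights, and loan-approval framing below are hypothetical, purely for illustration.

```python
# Toy illustration of an explainable model: a linear scorer whose
# output can be decomposed into the contribution of each feature.
# All feature names and weights here are hypothetical.

def explain_score(weights, features):
    """Return the total score plus each feature's contribution to it."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    total = sum(contributions.values())
    return total, contributions

# Hypothetical loan-approval features and learned weights.
weights = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}
applicant = {"income": 4.0, "debt": 2.0, "years_employed": 5.0}

score, why = explain_score(weights, applicant)
print(f"score = {score:.1f}")  # score = 1.9
# List the features that most influenced the decision, largest first.
for name, c in sorted(why.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {name}: {c:+.1f}")
```

A deep network offers no such direct decomposition, which is why XAI techniques try to recover comparable per-feature explanations after the fact.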

Stephen Blum, CTO and co-founder of PubNub, said that “for small things like AI-powered chatbots or sentiment analysis of social feeds, it doesn’t really matter if the AI system operates in a black box. But for use cases with a big human impact – autonomous vehicles, aerial navigation and drones, military applications – being able to understand the decision-making process is mission-critical. As we rely more and more on AI in our everyday lives, we need to be able to understand its ‘thought process’ and make changes and improvements over time.”

Marios Savvides, professor at Carnegie Mellon, said that “explainable AI has the benefits of allowing us to understand what has gone wrong and where it has gone wrong in an AI pipeline when the whole AI system makes an erroneous classification or prediction. These are the benefits of an XAI pipeline. In contrast, a conventional AI system involving a complete end-to-end black-box deep learning solution is more complex to analyze and more difficult to pinpoint exactly where and why an error has occurred.”
