
Explainable Artificial Intelligence: Can You Believe the Explanation? Are Validation and Verification a More Practical Approach?

By Dick Weisinger

The field of Artificial Intelligence is wrestling with the problem of explainability. AI models have grown to include billions of tunable parameters and have achieved results that are often described as stunning or amazing. But the problem is trust. For most of today's AI algorithms there is no way to determine how the algorithm arrived at its final result; the model is simply too complex. Can we trust an algorithm to pilot an airplane or make other life-or-death or mission-critical decisions if we don't understand the full extent of its limitations and capabilities, or the conditions under which it might fail?

AI vendors know that explainability is a hurdle that needs to be overcome to make AI software sellable. As a result, some companies are claiming that their products offer 'explainability', but not everyone is convinced. Zachary Lipton, for example, a computer science professor at Carnegie Mellon, told Jeremy Kahn at Fortune magazine that "everyone who is serious in the field knows that most of today's explainable A.I. is nonsense".

A Lancet Digital Health article written by AI and medical researchers in 2022 noted that "the black-box nature of current artificial intelligence (AI) has caused some to question whether AI must be explainable to be used in high-stakes scenarios such as medicine", argued that this position "represents a false hope for explainable AI" because "current explainability methods are unlikely to achieve these goals for patient-level decision support", and concluded: "In the absence of suitable explainability methods, we advocate for rigorous internal and external validation of AI models as a more direct means of achieving the goals often associated with explainability, and we caution against having explainability be a requirement for clinically deployed models."

Rigorous internal and external validation. Doesn't this parallel the software QA process? Software of any real size is complex, and computer scientists have long struggled with the problem of proving that software is 'correct'. Perhaps requiring AI algorithms to satisfy rigorous validation and verification would be a more practical approach than demanding explanations.
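
To make that concrete, here is a minimal sketch of what an internal-versus-external validation gate could look like in code. It is purely illustrative: the synthetic datasets, the logistic regression model, the AUC metric, and the 0.80 threshold are all assumptions chosen for the example, not anything prescribed by the Lancet authors or by any regulator.

```python
# Hypothetical sketch of internal vs. external validation, using scikit-learn.
# All data is synthetic; in a real clinical setting the external set would come
# from a different site, time period, or patient population than the training data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# "Internal" data: what the model developer collected and trained on.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Internal validation: performance on a held-out split of the same data source.
internal_auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])

# "External" data: simulated here with a differently generated dataset, standing in
# for data gathered independently of the developer (a different hospital, for example).
X_ext, y_ext = make_classification(n_samples=1000, n_features=20, random_state=7)
external_auc = roc_auc_score(y_ext, model.predict_proba(X_ext)[:, 1])

# A deployment gate might require both checks to clear a pre-agreed bar.
# The 0.80 threshold below is an arbitrary illustration, not a standard.
threshold = 0.80
for name, score in [("internal", internal_auc), ("external", external_auc)]:
    status = "passes" if score >= threshold else "fails"
    print(f"Model {status} {name} validation: AUC {score:.3f} (bar {threshold})")
```

The essential point of the sketch is that the external check is a separate hurdle, run on data the developer never touched during training, so a model that has merely memorized the quirks of its own dataset fails the gate before it is deployed.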

