AI and Decision Making: Humans Need to Understand Why
The applications of Artificial Intelligence are growing and touching all areas of our lives, and there are countless examples of its usefulness. But should we trust AI algorithms in life-critical applications? How can we be sure that AI algorithms will always make the optimal choices, free of glitches and inexplicable lapses in judgment?
Can we trust AI to set the price of products like insurance, determine appropriate criminal sentences in trials, or pilot a passenger airplane in turbulent weather?
A report by Deloitte explained that “the ‘black-box’ conundrum is one of the biggest roadblocks preventing [organizations] from executing their AI strategies. Machine learning models tasked with identifying patterns in data, making predictions, and solving complex problems are often opaque, obscuring their under-the-hood mechanisms. Deploying such models without explainability poses risks.”
So far, AI is largely unregulated. US legislation such as the Algorithmic Accountability Act attempts to establish standards to ensure that AI outcomes are safe, ethical, and transparent, but it is likely that laws like this are just the beginning.
A study by Forrester found that uncertainty about the trustworthiness of AI algorithms creates concerns that can block the acceptance and deployment of the technology. Seventy percent of organizations felt it was highly important for humans to be included in the AI/ML process in order to reduce potential risks.
Rik Chomko, CEO and co-founder of InRule, said that “AI is consistently ranked by c-suite executives as critically important to the future of their business, yet two-thirds have difficulty explaining the decisions their AI systems make. Built-in, native explainability empowers non-data scientists and c-suite executives to quickly understand why a decision was made and take confidence in the outcomes of intelligent automation.”
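To make the idea of "understanding why a decision was made" concrete, here is a minimal sketch (not from the article) of one common explainability technique: permutation importance, which scores how much each input feature contributes to a model's predictions. The dataset and model are illustrative choices, not anything referenced by InRule or Deloitte.

```python
# Sketch: ranking the features a model relies on, using permutation importance.
# Dataset and model are placeholders chosen only for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model leans heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Print the five most influential features, i.e. a rough "why" behind the model's decisions.
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda p: p[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

A readout like this is one simple way a non-data scientist can see which factors drive a model's outcomes, rather than treating the system as an opaque black box.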