First-generation artificial intelligence models have relied on complex modeling built from iterative stages of pattern recognition. Their results are often uncanny in their accuracy, but they have been criticized because the methodology behind those results cannot be explained. This lack of explainability raises the question of how much trust can be placed in AI results, particularly in mission-critical applications.
Subram Natarajan, CTO at IBM India, said that “the crucial thing most AI solution providers fail to incorporate is explainability. You have to ensure there’s no bias in AI algorithms. This is a very important factor in any AI model.”
Oliver Schabenberger, COO and CTO of SAS, said that “explainability of AI is part of a larger effort toward fair, accountable and transparent AI systems. The issues about algorithmic decision making are not new, but the conversation has ramped up in recent years. AI is bringing automated decisioning to new domains such as medical diagnostics and autonomous driving, and is building systems that are more complex, less transparent and highly scalable. That combination makes us uneasy.”
Explainability allows rules to be established and makes the methodology more transparent. Jans Aasman, Franz CEO, said that “you can take the output of those machine learning algorithms and then turn them into explainable rules. Then you can say well, the reason I’m not giving you this loan is because of these factors. Now, instead of just applying the formula, you can also use additional rules. Of course you have rules for how you want to deal with customers, and you apply the rules and then you can use continuous machine learning to see if your actions were positive or negative for the bank or for your customer.”
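The approach Aasman describes can be sketched in code. The example below is a minimal, hypothetical illustration, not any vendor's actual implementation: a stand-in "black box" loan model is approximated by plain threshold rules that a loan officer could read back to a customer, and the two are compared on a handful of applicants. All names, thresholds, and coefficients are invented for illustration.

```python
# Hypothetical sketch: approximating a black-box loan model with
# human-readable rules. Thresholds and weights are illustrative only.

def black_box_score(income, debt_ratio):
    """Stand-in for an opaque ML model's approve/deny decision."""
    return income * 0.00004 - debt_ratio * 2.0 > 0.8

def explainable_rules(income, debt_ratio):
    """Surrogate rules that can be stated as reasons for a denial."""
    reasons = []
    if income < 25000:
        reasons.append("income below $25,000")
    if debt_ratio > 0.45:
        reasons.append("debt-to-income ratio above 45%")
    return (not reasons), reasons  # approved only if no reason fires

# Compare the surrogate rules against the black box on sample applicants.
applicants = [(60000, 0.2), (20000, 0.1), (80000, 0.6), (40000, 0.3)]
agreement = sum(
    black_box_score(inc, dr) == explainable_rules(inc, dr)[0]
    for inc, dr in applicants
)
print(f"rules agree with the model on {agreement}/{len(applicants)} applicants")
```

The point of the sketch is the second function: rather than "the formula said no," the denial comes with concrete, auditable reasons, and the agreement check shows how closely the simpler rules track the opaque model.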