RALEIGH – Artificial intelligence is increasingly deployed to improve efficiency and decision-making, and researchers at Red Hat have now released a library that applies techniques for explaining automated decision-making systems as part of the company's cloud-native business automation framework, Kogito.

The TrustyAI Explainability Toolkit, the subject of a VentureBeat feature story, is described in a publicly accessible PDF whose abstract explains how the open-source product, TrustyAI, can support trust in decision services and predictive models.

“Artificial intelligence (AI) is becoming increasingly more popular and can be found in workplaces and homes around the world,” the authors write. “However, how do we ensure trust in these systems? Regulation changes such as the GDPR mean that users have a right to understand how their data has been processed as well as saved. Therefore if, for example, you are denied a loan you have the right to ask why.”

Challenges remain, the authors say, because ensuring trust and communicating decisions effectively and transparently is difficult when the method behind a decision uses “black box” machine learning techniques such as neural networks.

That is why they have launched TrustyAI, which the authors describe as “a new initiative which looks into explainable artificial intelligence (XAI) solutions to address trustworthiness in ML as well as decision services landscapes.”

One key aspect is the “feature importance” chart, which ranks a model’s inputs by their influence on its output, making it easier to determine whether the model is biased.
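A feature-importance ranking of this kind can be sketched with permutation importance, a generic technique in which each input is shuffled in turn and the resulting drop in model accuracy is measured. Everything below, from the toy loan-scoring model to the feature names, is an illustrative assumption, not TrustyAI's actual implementation:

```python
import random

# Hypothetical loan-scoring model: the weights are illustrative, and
# "zipcode" is deliberately ignored, so an unbiased model should assign
# it zero importance.
def approve(income, debt, zipcode):
    return 2.0 * income - 1.5 * debt > 1.0

# Small synthetic applicant set (income, debt, zipcode); values arbitrary.
random.seed(0)
applicants = [(random.random(), random.random(), random.random())
              for _ in range(200)]
labels = [approve(*a) for a in applicants]

def accuracy(data):
    """Fraction of rows where the model agrees with the original labels."""
    return sum(approve(*a) == y for a, y in zip(data, labels)) / len(labels)

def permutation_importance(feature_idx):
    """Shuffle one feature column and measure the drop in accuracy."""
    shuffled = [row[feature_idx] for row in applicants]
    random.shuffle(shuffled)
    perturbed = [
        tuple(shuffled[i] if j == feature_idx else v
              for j, v in enumerate(row))
        for i, row in enumerate(applicants)
    ]
    return 1.0 - accuracy(perturbed)

names = ["income", "debt", "zipcode"]
ranking = sorted(
    ((names[i], permutation_importance(i)) for i in range(3)),
    key=lambda kv: kv[1],
    reverse=True,
)
for name, drop in ranking:
    print(f"{name}: accuracy drop {drop:.2f}")
```

Because the toy model never reads `zipcode`, its importance comes out as zero, which is exactly the kind of signal such a chart surfaces when checking for bias against an input the model should not be using.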

“Within TrustyAI, we will combine ML models and decision logic … to enrich automated decisions by including predictive analytics. By monitoring the outcome of decision making, we can audit systems to ensure they … meet regulations,” wrote Rebecca Whitworth, a member of the TrustyAI initiative at Red Hat, in a blog post last year introducing TrustyAI; the post is also quoted in the VentureBeat story. “We can also trace these results through the system to help with a global overview of the decisions and predictions made. TrustyAI [relies] on the combination of these two standards to ensure trusted automated decision making.”
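The combination Whitworth describes, a predictive model wrapped in explicit decision logic with an audit trail for traceability, might look like the following minimal sketch. The model, threshold, and audit record format are all hypothetical assumptions, not TrustyAI's API:

```python
from dataclasses import dataclass, field
from typing import List

def predict_default_risk(income: float, debt: float) -> float:
    """Toy predictive model (hypothetical): a risk score in [0, 1]."""
    score = 0.5 + 0.4 * (debt - income)
    return max(0.0, min(1.0, score))

@dataclass
class LoanDecisionService:
    """Wraps the prediction in an explicit, auditable decision rule and
    keeps a log so every outcome can be traced and reviewed later."""
    max_risk: float = 0.6
    audit_log: List[dict] = field(default_factory=list)

    def decide(self, applicant_id: str, income: float, debt: float) -> bool:
        risk = predict_default_risk(income, debt)
        approved = risk <= self.max_risk  # decision logic, not the model
        # Record inputs, score, outcome, and the rule applied,
        # supporting the kind of audit and tracing described above.
        self.audit_log.append({
            "applicant": applicant_id,
            "inputs": {"income": income, "debt": debt},
            "risk": risk,
            "approved": approved,
            "rule": f"risk <= {self.max_risk}",
        })
        return approved

service = LoanDecisionService()
service.decide("A-1", income=0.9, debt=0.2)  # low risk, approved
service.decide("A-2", income=0.2, debt=0.9)  # high risk, rejected
for entry in service.audit_log:
    print(entry["applicant"], entry["approved"])
```

Separating the predictive score from the rule that acts on it is what makes the outcome auditable: the log captures not just what was decided but which inputs and which rule produced the decision.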

Whitworth goes on to describe a hypothetical case study: a bank manager wants to manage current loans and approval rates in accordance with company policy, and to review, and possibly automate, communications telling customers whether and, importantly, why a loan was accepted or rejected, all in keeping with lending standards, best practices, and the legal rights afforded to consumers applying for credit.