RESEARCH TRIANGLE PARK – As artificial intelligence (AI) deployments increase around the world, IBM says it’s determined to ensure that they’re fair, secure and trustworthy.

To that end, it has donated to the LF AI Foundation, a Linux Foundation project, a trio of open-source toolkits designed to help build trusted AI, as first reported by ZDNet.

“Donation of these projects to LFAI will further the mission of creating responsible AI-powered technologies and enable the larger community to come forward and co-create these tools under the governance of Linux Foundation,” IBM said in a blog post, penned by Todd Moore, Sriram Raghavan and Aleksandra Mojsilovic.

Among the contributions: the AI Fairness 360 Toolkit, the Adversarial Robustness 360 Toolbox and the AI Explainability 360 Toolkit. The first lets developers and data scientists detect and mitigate unwanted bias in machine learning models and datasets, providing roughly 70 fairness metrics for detecting bias and 11 algorithms for mitigating it.
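To make the idea of a fairness metric concrete, here is a minimal sketch of one well-known measure of the kind such toolkits compute, "disparate impact," written in plain Python. The toy hiring data and the 0.8 threshold rule are illustrative assumptions, not part of the AI Fairness 360 API.

```python
# Sketch of a fairness metric: disparate impact compares the rate of
# favorable outcomes between an unprivileged and a privileged group.
# Toy data and threshold are illustrative, not AIF360's actual API.

def disparate_impact(labels, groups):
    """labels: 1 = favorable outcome; groups: 1 = privileged, 0 = unprivileged."""
    priv = [y for y, g in zip(labels, groups) if g == 1]
    unpriv = [y for y, g in zip(labels, groups) if g == 0]
    rate_priv = sum(priv) / len(priv)
    rate_unpriv = sum(unpriv) / len(unpriv)
    return rate_unpriv / rate_priv  # 1.0 means parity

# Toy hiring decisions for two groups.
labels = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(round(disparate_impact(labels, groups), 2))  # → 0.44
```

A value below 0.8 is a commonly used red flag for bias; mitigation algorithms in such toolkits then reweight or transform the data to push this ratio toward 1.0.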

The Adversarial Robustness 360 Toolbox is an open-source library that helps researchers and developers defend deep neural networks from adversarial attacks.
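For readers unfamiliar with adversarial attacks, the sketch below implements the fast gradient sign method (FGSM), a canonical attack of the kind such libraries defend against. The logistic model, its weights and the inputs are toy assumptions for illustration, not the toolbox's own API.

```python
import numpy as np

# Sketch of the fast gradient sign method (FGSM): nudge each input feature
# in the direction that increases the model's loss, flipping its prediction.
# Model and data are toy assumptions, not the ART library's API.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, eps):
    """Perturb x to raise the cross-entropy loss of p = sigmoid(w . x)."""
    p = sigmoid(w @ x)
    grad = (p - y) * w            # gradient of the loss with respect to x
    return x + eps * np.sign(grad)

w = np.array([2.0, -1.0])
x = np.array([1.0, 1.0])          # clean input, classified as positive
x_adv = fgsm(x, 1, w, eps=0.6)

print(sigmoid(w @ x) > 0.5)       # → True  (clean prediction)
print(sigmoid(w @ x_adv) > 0.5)   # → False (attack flipped the prediction)
```

A small, targeted perturbation is enough to flip the prediction, which is exactly the failure mode that adversarial-robustness defenses aim to detect and resist.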

Meanwhile, the AI Explainability 360 Toolkit provides a set of algorithms, code, guides, tutorials, and demos to support the interpretability and explainability of machine learning models.
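As a flavor of what a local explanation looks like, the sketch below attributes a linear model's score to its individual input features relative to a baseline. The feature names, weights and baseline are hypothetical; this is a simplified illustration of the style of attribution such toolkits support, not the AI Explainability 360 API.

```python
# Sketch of a local feature attribution for a linear model: each feature's
# contribution is its weight times its deviation from a baseline input.
# Names, weights and data are illustrative assumptions.

def explain(weights, baseline, x, names):
    """Per-feature contributions to score(x) - score(baseline)."""
    return {n: w * (xi - bi)
            for n, w, xi, bi in zip(names, weights, x, baseline)}

weights  = [0.5, -0.3, 0.2]
baseline = [0.0, 0.0, 0.0]        # reference point for the attribution
x        = [2.0, 1.0, 3.0]        # instance being explained
contrib = explain(weights, baseline, x, ["income", "debt", "tenure"])
print(contrib)                    # contributions sum to the score difference
```

For a linear model the contributions sum exactly to the change in score; explainability toolkits generalize this idea to non-linear models with more sophisticated algorithms.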

The LFAI’s Technical Advisory Committee voted earlier this month to host and incubate the projects, and IBM is currently working with the committee to formally move them under the foundation.

A Linux Foundation project, the LFAI provides a vendor-neutral space for promoting open-source artificial intelligence, machine learning and deep learning projects. It’s backed by major organizations including AT&T, Baidu, Ericsson, Nokia, Tencent and Huawei.