
TrustyAI Explainability Toolkit

2021-04-26 · Code Available

Rob Geada, Tommaso Teofili, Rui Vieira, Rebecca Whitworth, Daniele Zonca

Abstract

Artificial intelligence (AI) is becoming increasingly popular and can be found in workplaces and homes around the world. The decisions made by such "black box" systems are often opaque; that is, so complex as to be functionally impossible to understand. How do we ensure that these systems are behaving as desired? TrustyAI is an initiative that explores explainable artificial intelligence (XAI) solutions to address this problem of opacity in the context of both AI models and decision services. This paper presents the TrustyAI Explainability Toolkit, a Java and Python library that provides XAI explanations of decision services and predictive models for both enterprise and data science use cases. We describe the TrustyAI implementations of and extensions to techniques such as LIME, SHAP, and counterfactuals, which are benchmarked against existing implementations in a variety of experiments.
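To illustrate the kind of technique the toolkit implements, here is a minimal sketch of the core idea behind LIME (one of the methods named in the abstract): perturb an input, query the black-box model, and fit a locally weighted linear surrogate whose coefficients serve as feature importances. This is a generic illustration of the algorithm, not TrustyAI's API; the `black_box` function and all parameter values are hypothetical.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical black-box model: we only observe its predictions.
def black_box(X):
    return 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2]

def lime_explain(model, x, n_samples=500, scale=0.1, seed=0):
    """LIME-style local explanation: sample perturbations around x,
    weight them by proximity to x, and fit a weighted linear surrogate
    whose coefficients approximate each feature's local importance."""
    rng = np.random.default_rng(seed)
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))
    y = model(X)
    # Exponential proximity kernel: nearby samples dominate the fit.
    dists = np.linalg.norm(X - x, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2))
    surrogate = Ridge(alpha=1e-3).fit(X, y, sample_weight=weights)
    return surrogate.coef_

x = np.array([1.0, 1.0, 1.0])
importances = lime_explain(black_box, x)
# For this linear model the surrogate recovers coefficients near (3, -2, 0.5).
```

Because the toy model is linear, the surrogate recovers its coefficients almost exactly; for a genuinely nonlinear model the coefficients describe only the local behaviour around `x`.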
