The AI Explainability 360 toolkit, an LF AI Foundation incubation project, is an open-source library that supports the interpretability and explainability of datasets and machine learning models. The AI Explainability 360 Python package includes a comprehensive set of algorithms that cover different dimensions of explanations, along with proxy explainability metrics. Because no single approach to explainability works best in every setting, the toolkit offers a range of methods and is designed to translate algorithmic research from the lab into the actual practice of domains as wide-ranging as finance, human capital management, healthcare, and education. IBM moved AI Explainability 360 to LF AI in July 2020.
10 state-of-the-art explainability algorithms
Supports a growing list of explainability algorithms, including ProtoDash, Disentangled Inferred Prior VAE, Contrastive Explanations Method, Contrastive Explanations Method with Monotonic Attribute Functions, LIME, SHAP, Teaching AI to Explain its Decisions, Boolean Decision Rules via Column Generation (Light Edition), Generalized Linear Rule Models, and ProfWeight.
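To give a feel for one family of methods in the list, the sketch below illustrates the prototype-selection idea behind ProtoDash: greedily pick a small set of samples whose kernel mean best matches that of the full dataset. This is a simplified, self-contained NumPy illustration of the concept, not the toolkit's `ProtodashExplainer` implementation (which additionally learns non-negative importance weights for the prototypes); the function names and the RBF kernel choice here are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    # Pairwise RBF similarities between rows of A and rows of B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def greedy_prototypes(X, m, gamma=0.5):
    # Greedily choose m points whose (uniformly weighted) kernel mean
    # approximates the kernel mean of the whole dataset -- the core idea
    # behind prototype explainers such as ProtoDash (weight learning omitted).
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    mean_sim = K.mean(axis=1)  # average similarity of each point to the data
    chosen = []
    for _ in range(m):
        best, best_obj = None, -np.inf
        for j in range(n):
            if j in chosen:
                continue
            S = chosen + [j]
            # Objective: match the data (first term) while penalizing
            # redundancy among the selected prototypes (second term).
            obj = 2 * mean_sim[S].mean() - K[np.ix_(S, S)].mean()
            if obj > best_obj:
                best, best_obj = j, obj
        chosen.append(best)
    return chosen

# Usage: two well-separated clusters; two prototypes should cover both.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, size=(20, 2)),
               rng.normal(5, 0.3, size=(20, 2))])
protos = greedy_prototypes(X, m=2)
```

The redundancy penalty is what makes the second prototype land in the second cluster rather than next to the first pick.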
Metrics for explainability
Although it is ultimately the consumer who determines the quality of an explanation, the research community has proposed quantitative metrics as proxies for explainability. Two such metrics are provided: faithfulness and monotonicity.
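As a rough illustration of what such proxy metrics measure, the sketch below implements one common formulation of each: faithfulness as the correlation between a feature's attribution and the prediction drop when that feature is replaced by a baseline value, and monotonicity as whether the prediction only grows as features are added back in order of increasing attribution. This is a hedged, self-contained sketch of the general idea, not the toolkit's exact implementation; the helper names and the linear attribution used in the demo are assumptions for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def faithfulness(predict_proba, x, attributions, baseline):
    # Replace each feature with its baseline value and record the
    # resulting drop in the predicted probability of class 1.
    p0 = predict_proba(x[None])[0, 1]
    drops = []
    for i in range(len(x)):
        x_pert = x.copy()
        x_pert[i] = baseline[i]
        drops.append(p0 - predict_proba(x_pert[None])[0, 1])
    # Faithful attributions correlate with the observed drops.
    return np.corrcoef(attributions, drops)[0, 1]

def monotonicity(predict_proba, x, attributions, baseline):
    # Starting from the baseline, restore features in increasing
    # attribution order; the metric holds if the prediction never falls.
    order = np.argsort(attributions)
    x_cur = baseline.copy()
    preds = [predict_proba(x_cur[None])[0, 1]]
    for i in order:
        x_cur[i] = x[i]
        preds.append(predict_proba(x_cur[None])[0, 1])
    return bool(np.all(np.diff(preds) >= 0))

# Demo on a synthetic linear problem, where coef * (x - baseline)
# is a reasonable attribution (an assumption of this sketch).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + 2 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x, baseline = X[0], X.mean(axis=0)
attrs = model.coef_[0] * (x - baseline)
score = faithfulness(model.predict_proba, x, attrs, baseline)
mono = monotonicity(model.predict_proba, x, attrs, baseline)
```

Both functions treat the model as a black box through `predict_proba`, which is what makes them usable as post-hoc proxies for explanation quality.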
Industry-oriented tutorials
Packed with tutorials that demonstrate industrial use cases of the toolkit and offer a deeper, data scientist-oriented introduction. Examples include: Credit Card Approval, Medical Expenditure, Dermoscopy, Health and Nutrition, and Proactive Retention.
Please visit us on GitHub where our development happens. We invite you to join our community both as a user of AI Explainability 360 and as a contributor to its development. We look forward to your contributions!