IBM announced AI Explainability 360, an open-source toolkit of state-of-the-art algorithms that support the interpretability and explainability of machine learning models. IBM created AI Explainability 360 with algorithms for case-based reasoning, directly interpretable rules, post hoc local explanations, post hoc global explanations, and more. Because there are so many different explanation options, IBM gathered helpful resources in a single place:
- an interactive experience that provides a gentle introduction through a credit scoring application;
- several detailed tutorials that educate practitioners on how to inject explainability into other high-stakes applications such as clinical medicine, healthcare management, and human resources;
- documentation that guides the practitioner on choosing an appropriate explanation method.
The toolkit is engineered with a common interface across all of its explanation methods and is extensible, to accelerate innovation by the community advancing AI explainability. IBM is open-sourcing it to help create a community of practice for data scientists, policymakers, and the general public who need to understand how algorithmic decision making affects them. AI Explainability 360 differs from other open-source explainability offerings in the diversity of its methods, its focus on educating a variety of stakeholders, and its extensibility via a common framework. Moreover, it interoperates with AI Fairness 360 and Adversarial Robustness 360, two other open-source toolboxes from IBM Research released in 2018.
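The common-interface idea above can be sketched as follows. This is a minimal, hypothetical illustration of the pattern (a shared `fit`/`explain_instance` contract that any new method can plug into), not the toolkit's actual API; all class and method names here are assumptions for illustration.

```python
# Hypothetical sketch of a common explainer interface: every explanation
# method implements the same two entry points, so callers can swap methods
# and contributors can add new ones by subclassing. Not the AIX360 API.
from abc import ABC, abstractmethod


class Explainer(ABC):
    """Shared contract all explanation methods implement."""

    @abstractmethod
    def fit(self, X):
        """Learn whatever the method needs from reference data."""

    @abstractmethod
    def explain_instance(self, x):
        """Return a human-readable explanation for one instance."""


class RuleListExplainer(Explainer):
    """Toy directly-interpretable method: one threshold rule per feature."""

    def fit(self, X):
        # Learn a per-feature mean threshold from the reference data.
        n = len(X)
        self.thresholds = [sum(row[i] for row in X) / n
                           for i in range(len(X[0]))]
        return self

    def explain_instance(self, x):
        # Report which features exceed their learned threshold.
        return [f"feature {i} {'>' if v > t else '<='} {t:.2f}"
                for i, (v, t) in enumerate(zip(x, self.thresholds))]


# Because both steps go through the shared interface, a new explanation
# method only needs to subclass Explainer to fit into the same workflow.
explainer = RuleListExplainer().fit([[1.0, 4.0], [3.0, 2.0]])
print(explainer.explain_instance([2.5, 1.0]))
# → ['feature 0 > 2.00', 'feature 1 <= 3.00']
```

The design choice being illustrated: with one abstract base class, downstream code (tutorials, demos, benchmarks) is written once against the interface and works unchanged as new explanation methods are contributed.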
The initial release contains eight algorithms recently created by IBM Research, along with metrics from the community that serve as quantitative proxies for the quality of explanations. Beyond the initial release, IBM encourages the broader research community to contribute additional algorithms.
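To make the idea of a "quantitative proxy for explanation quality" concrete, here is a hedged sketch of one such proxy, faithfulness: importance scores assigned to features should correlate with the drop in the model's prediction when each feature is masked out. The function name, signature, and baseline convention below are illustrative assumptions, not the toolkit's implementation.

```python
# Sketch of a faithfulness-style metric (an assumption for illustration):
# correlate each feature's importance score with the prediction drop
# observed when that feature is replaced by a baseline value.
def faithfulness(predict, x, importances, baseline=0.0):
    base = predict(x)
    drops = []
    for i in range(len(x)):
        masked = list(x)
        masked[i] = baseline  # remove one feature's contribution
        drops.append(base - predict(masked))
    # Pearson correlation between importances and prediction drops.
    n = len(x)
    mi = sum(importances) / n
    md = sum(drops) / n
    cov = sum((a - mi) * (b - md) for a, b in zip(importances, drops))
    sd_i = sum((a - mi) ** 2 for a in importances) ** 0.5
    sd_d = sum((b - md) ** 2 for b in drops) ** 0.5
    return cov / (sd_i * sd_d)


# For a linear model, the coefficients are exactly the importances, so a
# faithful explanation scores 1.0 (perfect correlation).
weights = [0.5, 2.0, 1.0]
predict = lambda x: sum(w * v for w, v in zip(weights, x))
print(round(faithfulness(predict, [1.0, 1.0, 1.0], weights), 6))
# → 1.0
```

A higher score means the explanation's ranking of features agrees with what actually moves the model's output, which is why such metrics can stand in for human judgments when comparing explanation methods.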