INFORMATION

Time: 12:00 pm – 3:00 pm Pacific Time (Vancouver), February 3, 2021

Zoom Location: TBD

Online Resources:

Speakers:

DESCRIPTION

The goal of this tutorial is to provide a systematic view of the current knowledge relating explainability to several key outstanding concerns about the quality of ML models; in particular, robustness, privacy, and fairness. We will discuss the ways in which explainability can inform questions about these aspects of model quality, and how methods for improving them emerging from recent research in the AI, Security & Privacy, and Fairness communities can in turn lead to better outcomes for explainability. We aim to make these findings accessible to a general AI audience, including not only researchers who want to engage further with this direction, but also practitioners who stand to benefit from the results, and policy-makers who want to deepen their technical understanding of these issues.

SYLLABUS

Part I: Introduction (10 min)

Part II: Foundations of XAI (60 min with Q&A)

Break (10 min)

Part III: From Explanation to Model Quality (60 min with Q&A)

Break (10 min)

Part IV: From Model Quality to Explanations (30 min with Q&A)

AUDIENCE

The target audience of this tutorial is researchers, practitioners, and policy-makers who are interested in the role that explainability plays in applications of AI. We expect audience members to be familiar with supervised learning and to have a working knowledge of how optimization methods are used to train models. We do not assume familiarity with problems in privacy, fairness, or robustness.

TruLens

A library of attribution and interpretation methods for deep networks. To quickly try out the TruLens library, check out the following Colab notebooks:

More resources are available on our GitHub page: TruLens
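
To give a flavor of the attribution methods covered in the tutorial and implemented in the library, below is a minimal standalone sketch of integrated gradients in PyTorch. It is for illustration only: the function name, defaults, and zero baseline are our own choices for this sketch, and the snippet does not use the TruLens API.

import torch

def integrated_gradients(model, x, baseline=None, target=None, steps=50):
    # x: a single input, e.g. shape (C, H, W); model expects a batch dimension.
    if baseline is None:
        baseline = torch.zeros_like(x)  # a common, but not the only, baseline choice
    # Points along the straight-line path from the baseline to the input.
    alphas = torch.linspace(0.0, 1.0, steps).view(-1, *([1] * x.dim()))
    path = baseline.unsqueeze(0) + alphas * (x - baseline).unsqueeze(0)
    path.requires_grad_(True)
    logits = model(path)
    if target is None:
        target = logits[-1].argmax()  # explain the model's prediction on x itself
    # Gradient of the target logit at each point along the path.
    grads = torch.autograd.grad(logits[:, target].sum(), path)[0]
    # Riemann approximation of the path integral, scaled by (x - baseline).
    return (x - baseline) * grads.mean(dim=0)

# Example usage (hypothetical model and input):
# attrs = integrated_gradients(model, image)  # image: tensor of shape (3, 224, 224)

The returned tensor has the same shape as the input; each entry estimates how much that feature contributed to the target logit relative to the baseline.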