Workshop on Interpretable Machine Learning

Andrew Gordon Wilson sends along this conference announcement:

NIPS 2017 Symposium
Interpretable Machine Learning
Long Beach, California, USA
December 7, 2017

Call for Papers:

We invite researchers to submit their recent work on interpretable machine learning from a wide range of approaches, including (1) methods that are designed to be more interpretable from the start, such as rule-based methods, (2) methods that produce insight into existing ML models, and (3) perspectives either for or against interpretability in general. Topics of interest include:

– Deep learning
– Kernel, tensor, graph, or probabilistic methods
– Automatic scientific discovery
– Safe AI and AI Ethics
– Causality
– Social Science
– Human-computer interaction
– Quantifying or visualizing interpretability
– Symbolic regression

Authors are welcome to submit 2-4 page extended abstracts, in the NIPS style. References and supplementary material are not included in the page limit. Author names do not need to be anonymized. Accepted papers will have the option of inclusion in the proceedings. Certain papers will also be selected for spotlight talks. Email submissions to [email protected].

Key Dates:

Submission Deadline: 20 Oct 2017
Acceptance Notification: 23 Oct 2017
Symposium: 7 Dec 2017

Workshop Overview:

Complex machine learning models, such as deep neural networks, have recently achieved outstanding predictive performance in a wide range of applications, including visual object recognition, speech perception, language modeling, and information retrieval. There has since been an explosion of interest in interpreting the representations learned and decisions made by these models, with profound implications for research into explainable ML, causality, safe AI, social science, automatic scientific discovery, human-computer interaction (HCI), crowdsourcing, machine teaching, and AI ethics. This symposium is designed to broadly engage the machine learning community on the intersection of these topics—tying together many threads which are deeply related but often considered in isolation.

For example, we may build a complex model to predict levels of crime. Predictions on their own produce insights, but by interpreting the learned structure of the model, we can gain important new insights into the processes driving crime, enabling us to develop more effective public policy. Moreover, if we learn that the model is making good predictions by discovering how the geometry of clusters of crime events affects future activity, we can use this knowledge to design even more successful predictive models. Similarly, if we wish to make the AI systems deployed in self-driving cars safe, straightforward black-box models will not suffice, as we will need methods of understanding their rare but costly mistakes.
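As a concrete illustration of what “interpreting the learned structure” of a black-box predictor can look like in practice (this sketch is mine, not part of the announcement), here is a minimal example using permutation importance from scikit-learn. The crime-prediction framing is stood in for by synthetic data, and every variable name below is hypothetical.

# A minimal sketch of one model-agnostic probe: permutation importance.
# The data are synthetic stand-ins for, say, neighborhood-level predictors
# of crime counts; nothing here comes from the symposium announcement.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out performance:
# features whose permutation hurts the most are the ones the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: mean importance {result.importances_mean[i]:.3f}")

Permutation importance is only one of many such probes, but it illustrates the general idea: perturb an input, watch the held-out predictions degrade, and read off what the model is actually relying on.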

The symposium will feature invited talks and two panel discussions. One of the panels will have a moderated debate format in which arguments are presented on each side of key topics chosen prior to the symposium, with the opportunity to follow up each argument with questions. This format will encourage an interactive, lively, and rigorous discussion, working towards the shared goal of making intellectual progress on foundational questions. During the symposium, we will also feature the launch of a new Explainability in Machine Learning Challenge, involving the creation of new benchmarks for motivating the development of interpretable learning algorithms.

I’m interested in this topic! It relates to some of the ideas we’ve been talking about regarding Stan and Bayesian workflow.

4 thoughts on “Workshop on Interpretable Machine Learning”

  1. Not just “interpretable” but involving time travel.

    Acceptance Notification: 27 Oct 2017
    Symposium: 7 Dec 2016

    But it’s a very important topic. I hope the papers get posted somewhere easily accessible.
