Proseminar Wintersemester 2021/2022

Interpretable Machine Learning

Prof. Dr. Emmanuel Müller - Informatik LS9


Procedure

Students who are enrolled in this Proseminar will send their favorite topics (possibly with priorities) to simon.kluettermann(at)cs.tu-dortmund.de by 08.04.2022. We will assign topics based on your choices by 15.04.2022. If you are uncertain which topic to choose, we will meet once beforehand to answer your questions. The exact date depends on when we can get a room, but it will probably be in the first week of April.

After you are assigned a topic, you will also be assigned one of us as a supervisor to help you with any questions you might have. If you have more general questions, you can always write to chiara.balestra(at)cs.tu-dortmund.de or to simon.kluettermann(at)cs.tu-dortmund.de.

We will not offer a separate presentation course; please take the one offered by the faculty. The seminar will be held in English.

We will distribute the presentations over one to three days in the last week of July. Each presentation should be between 25 and 30 minutes long. You will then hand in a written report on your topic by mid-September (Friday, 16.09.2022). Finally, it is important to us that you learn to engage critically with any given topic. To practice this, you will be given the reports of two other students to critique by the end of the semester. You must participate in every part of this seminar in order to pass.

Goals and Criteria for a Successful Seminar

In this Proseminar you will learn how to work your way into a topic, research related literature, and answer questions about it. For this it is important not to rely solely on the chapter assigned to you, but to use different sources to verify all statements made. Furthermore, by listening to and engaging with the other presentations, you will gain a broad understanding of interpretable machine learning methods.


Content

Abstract from "Interpretable Machine Learning" by Christoph Molnar

Machine learning has great potential for improving products, processes and research. But computers usually do not explain their predictions which is a barrier to the adoption of machine learning. This book is about making machine learning models and their decisions interpretable.

After exploring the concepts of interpretability, you will learn about simple, interpretable models such as decision trees, decision rules and linear regression. The focus of the book is on model-agnostic methods for interpreting black box models such as feature importance and accumulated local effects, and explaining individual predictions with Shapley values and LIME. In addition, the book presents methods specific to deep neural networks.

All interpretation methods are explained in depth and discussed critically. How do they work under the hood? What are their strengths and weaknesses? How can their outputs be interpreted? This book will enable you to select and correctly apply the interpretation method that is most suitable for your machine learning project. Reading the book is recommended for machine learning practitioners, data scientists, statisticians, and anyone else interested in making machine learning models interpretable.
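To give a first impression of what such a model-agnostic method looks like in practice, here is a minimal sketch of permutation feature importance using scikit-learn. The dataset and model are illustrative assumptions on our part, not part of the book or a requirement of the seminar:

    # Minimal sketch of permutation feature importance (a model-agnostic
    # interpretation method covered in the book). Dataset and model are
    # illustrative assumptions, not prescribed by the seminar.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure how much the test score drops;
    # a large drop means the model relied heavily on that feature.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)
    for name, score in sorted(zip(X.columns, result.importances_mean),
                              key=lambda t: -t[1])[:5]:
        print(f"{name}: {score:.3f}")

Because the method only needs model predictions and a scoring function, the same code works for any classifier, which is exactly what "model-agnostic" means.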

Literature

This seminar is based on the book "Interpretable Machine Learning - A Guide for Making Black Box Models Explainable" by Christoph Molnar. The book is available for free at https://christophm.github.io/interpretable-ml-book/. Please note that, as this book was written by a single author, it probably contains some errors, so finding alternative sources is extremely important.