Sheng Zhou

PhD student
Personal website:
Office: 37.1.40

My thesis subject: "Explaining model decisions: towards automatic interpretation of learning models"

Until recently, the focus in predictive modelling has mainly been on improving prediction accuracy. Many successful models that scale to large amounts of heterogeneous data have been proposed in the literature, and widely used implementations of these models are available. Unfortunately, these models generally do not intrinsically come with an easy way to explain their predictions, and are often presented as black-box tools performing complex and non-intuitive operations on their inputs.

This can be an issue in many applications where the interpretation of the model decision may have greater added value than the decision itself. Examples include medical diagnosis, where the interpretation would consist in identifying which combination(s) of characteristics presented by an individual contributes most to the diagnosis. Similarly, in manufacturing, we may want to understand why an object leaves the production line with a defect. In business and marketing, we may want to relate a strategic decision to concrete elements, such as customer profiles or needs. The need for interpretability in model decisions arises everywhere, and in particular wherever prediction/decision models can fail and where taking the wrong decision comes at a high cost.

In this thesis, we propose to add interpretability to machine learning models without restriction on the type of model and without impacting the performance of the model being interpreted. This means not adding any constraint to the learning of the original model and not restricting its complexity in any way. Trained models, even transparent ones such as tree ensembles, are in general too complex to be analyzed directly by a human operator. The task is therefore to design interpretability models that can cope with these aspects and learn how to interpret the decisions issued by models trained in the context of massive data and large input feature spaces.
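To make the kind of post-hoc interpretation targeted here more concrete, below is a minimal sketch of one common baseline (not the method developed in the thesis): a shallow "surrogate" decision tree is trained to mimic the predictions of an unconstrained black-box model, so the black box itself is left untouched. The dataset, the model choices and the tree depth are illustrative assumptions only.

# Illustrative sketch of post-hoc, model-agnostic interpretation via a
# global surrogate model. All concrete choices below (dataset, models,
# tree depth) are assumptions made for this example only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

# Black-box model: trained with no interpretability constraint at all.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# Surrogate: learns to reproduce the black box's decisions, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = (surrogate.predict(X_test) == black_box.predict(X_test)).mean()
print(f"Surrogate fidelity to the black box: {fidelity:.2f}")

# The shallow tree can be read directly as an approximate explanation
# of how the black box reaches its decisions.
print(export_text(surrogate, feature_names=list(data.feature_names)))

The fidelity score makes the usual limitation of such baselines explicit: a simple surrogate only approximates the black box, which is precisely the kind of gap the thesis aims to address for complex models and large input feature spaces.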
