We Should Care about Explaining Even Linear Programs
  • David Steinmann
  • Matej Zečević
  • Devendra Singh Dhami
  • Kristian Kersting
David Steinmann
Technische Universität Darmstadt, Department of Computer Science

Corresponding Author: david.steinmann@tu-darmstadt.de

Matej Zečević
Technische Universität Darmstadt, Department of Computer Science
Devendra Singh Dhami
Hessian Center for AI (hessianAI)
Kristian Kersting
Technische Universität Darmstadt, Department of Computer Science

Abstract

There has been a recent push to make machine learning models more interpretable so that their performance can be trusted. Although successful, this push has primarily focused on deep learning methods, while simpler optimization methods have been essentially ignored. Consider linear programs (LPs), a workhorse of the sciences. Although LPs can be considered whitebox or clearbox models, contrary to common belief they are not easy to understand in terms of the relationships between inputs and outputs. As a linear program solver only provides the optimal solution to an optimization problem, further explanations are often helpful. We extend the attribution methods used for explaining neural networks to linear programs, thereby taking a first step towards what might be called explainable optimization. These attribution methods explain a model by providing relevance scores for its inputs, showing the influence of each input on the output. Alongside classical gradient-based attribution methods, we also propose a way to adapt perturbation-based attribution methods to LPs. Our evaluations on several different linear and integer problems show that attribution methods can generate helpful explanations for these models. In particular, we demonstrate that explanations can provide interesting insights into large, real-world linear programs.
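
To make the idea of perturbation-based attributions for an LP concrete, the snippet below is a minimal illustrative sketch, not the authors' method: it solves a toy LP with scipy.optimize.linprog, perturbs each right-hand-side bound in turn, and uses the resulting change in the optimal objective value as that input's relevance score. The toy problem, the choice of perturbing only the constraint bounds, and the step size eps are assumptions made purely for illustration.

```python
# Minimal sketch of perturbation-based attribution for a linear program.
# NOTE: illustrative only; not the attribution scheme proposed in the paper.
import numpy as np
from scipy.optimize import linprog

# Toy LP: minimize c^T x  subject to  A_ub x <= b_ub, x >= 0
c = np.array([-1.0, -2.0])            # i.e., maximize x1 + 2*x2
A_ub = np.array([[1.0, 1.0],
                 [1.0, 3.0]])
b_ub = np.array([4.0, 6.0])

def optimal_value(b):
    """Solve the LP for a given right-hand side and return the optimal objective."""
    res = linprog(c, A_ub=A_ub, b_ub=b, bounds=[(0, None)] * len(c))
    return res.fun

baseline = optimal_value(b_ub)

# Relevance score of each constraint bound: finite-difference sensitivity of
# the optimal value to a small perturbation of that bound.
eps = 1e-3
relevances = []
for i in range(len(b_ub)):
    b_pert = b_ub.copy()
    b_pert[i] += eps
    relevances.append((optimal_value(b_pert) - baseline) / eps)

print("optimal value:", baseline)
print("per-constraint relevance scores:", relevances)
```

For an LP, such finite-difference scores approximate the constraints' shadow prices (dual values), which gives this kind of attribution a direct optimization-theoretic reading.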