
Elucidating Linear Programs by Neural Encodings
  • Florian Peter Busch, Technische Universität Darmstadt (corresponding author: florian_peter.busch@tu-darmstadt.de)
  • Matej Zečević, Technische Universität Darmstadt
  • Kristian Kersting, Technische Universität Darmstadt
  • Devendra Singh Dhami, Hessian Center for Artificial Intelligence (hessian.AI)

Abstract

Linear Programs (LPs) are one of the major building blocks of AI and have driven recent strides in differentiable optimizers for learning systems. While efficient solvers exist even for high-dimensional LPs, explaining their solutions has received little attention so far, as explainable artificial intelligence (XAI) has mostly focused on deep learning models. LPs are mostly considered white-box and thus assumed to be simple to explain, but we argue that the relationships between their inputs and outputs are not easy to understand. To mitigate this lack of explainability, we show how to adapt attribution methods to LPs by encoding the LPs in a neural fashion. The encoding functions consider aspects such as the feasibility of the decision space, the cost attached to each input, and the distance to special points of interest. Using a variety of LPs, including a very large-scale LP with 10k dimensions, we demonstrate the usefulness of explanation methods applied to our neural LP encodings, although the attribution methods Saliency and LIME are indistinguishable at low perturbation levels. In essence, we demonstrate that LPs can and should be explained, which can be achieved by representing an LP as a neural network.
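
To make the idea concrete, below is a minimal sketch, not the authors' implementation, of how an LP can be encoded as a differentiable function that combines a cost term with a feasibility penalty, so that a gradient-based attribution such as Saliency becomes applicable. The toy LP data, the penalty weight, and the function names are illustrative assumptions.

```python
import torch

# Toy LP data (illustrative): max c^T x  s.t.  A x <= b,  x >= 0,
# with 2 constraints and 3 decision variables.
c = torch.tensor([3.0, 1.0, 2.0])                 # objective coefficients
A = torch.tensor([[1.0, 1.0, 3.0],
                  [2.0, 2.0, 5.0]])               # constraint matrix
b = torch.tensor([4.0, 10.0])                     # constraint bounds

def neural_lp_encoding(x, penalty=10.0):
    """Differentiable surrogate for the LP: the objective value minus a
    hinge penalty on constraint violations (the feasibility term)."""
    objective = c @ x
    violation = torch.relu(A @ x - b).sum() + torch.relu(-x).sum()
    return objective - penalty * violation

# Candidate input whose score we want to explain.
x = torch.tensor([2.0, 0.0, 1.0], requires_grad=True)
score = neural_lp_encoding(x)
score.backward()

# Saliency: magnitude of the gradient of the score w.r.t. each variable,
# i.e., how sensitive the encoded LP is to each input dimension.
saliency = x.grad.abs()
print("score:", score.item())
print("per-variable saliency:", saliency.tolist())
```

Because the encoding is an ordinary differentiable function, other perturbation-based explainers such as LIME can be applied to it in the same way as to any neural network.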