Abstract
Linear Programs (LPs) are one of the major building blocks of AI and
have driven recent strides in differentiable optimizers for learning
systems. While there exist efficient solvers for even high-dimensional
LPs, explaining their solutions has not received much attention yet, as
explainable artificial intelligence (XAI) has mostly focused on deep
learning models. LPs are mostly considered white-box models and are thus
assumed to be simple to explain, but we argue that the relationships
between their inputs and outputs are not easy to understand. To mitigate
this lack of explainability of LPs, we show how to adapt attribution
methods by encoding LPs in a neural fashion. The encoding functions
consider aspects such as feasibility of the decision space, the cost
attached to each input and the distance to special points of interest.
Using a variety of LPs, including a very large-scale LP with 10k
dimensions, we demonstrate the usefulness of explanation methods applied
to our neural LP encodings, although the attribution methods Saliency and
LIME are indistinguishable at low perturbation levels. In essence, we
demonstrate that LPs can and should be explained, which can be achieved
by representing an LP as a neural network.
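As an illustrative sketch only (the toy LP data, the penalty weight, and the hinge-penalty surrogate below are our own assumptions, not the paper's exact encoding functions), one way to realize such a neural LP encoding in PyTorch and derive a Saliency-style attribution from input gradients is:

# Sketch: encode a tiny LP  min c^T x  s.t.  A x <= b  as a differentiable
# function combining the objective cost with a feasibility penalty, then
# compute a saliency-style attribution via gradients w.r.t. the input.
import torch

# Hypothetical 2-variable LP data (assumed for illustration).
c = torch.tensor([1.0, 2.0])                 # cost vector
A = torch.tensor([[1.0, 1.0], [-1.0, 0.0]])  # constraint matrix
b = torch.tensor([4.0, 0.0])                 # constraint bounds

def neural_lp_encoding(x, penalty_weight=10.0):
    """Differentiable surrogate: objective cost plus a hinge penalty
    for violated constraints (feasibility of the decision space)."""
    cost = c @ x
    violation = torch.relu(A @ x - b).sum()
    return cost + penalty_weight * violation

# Candidate decision point whose encoding we want to explain.
x = torch.tensor([1.5, 2.0], requires_grad=True)
score = neural_lp_encoding(x)
score.backward()

# Saliency-style attribution: gradient magnitude per input dimension.
print("encoding value:", score.item())
print("saliency attribution:", x.grad.abs().tolist())

Because the encoding is an ordinary differentiable function, standard attribution methods such as Saliency or LIME can be applied to it without modification.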