Objective: The interpretation of electrocardiogram (ECG) signals is vital for the diagnosis of cardiac conditions. Traditional methods rely on expert knowledge, which is time-consuming, costly, and may miss subtle features. Deep learning has shown promise in ECG interpretation, yet model transparency, although desired in medical contexts, is often overlooked in the literature. Methods: We introduce an explainable AI method for ECG classification by partitioning the latent space of a variational autoencoder (VAE) into a label-specific and a non-label-specific subset. By optimizing both subsets for signal reconstruction and one subset additionally for prediction, while an adversarial network constrains the other from learning label-specific information, the latent space is disentangled in a supervised manner. This latent space is subsequently leveraged to create enhanced visualizations for ECG feature interpretation by means of attribute manipulation. As a proof of concept, we predict left ventricular function (LVF), a critical prognostic determinant in cardiac disease, from the ECG. Results: Our study demonstrates the effective segregation of LVF-specific information within a single dimension of the VAE latent space, without compromising classification performance. The proposed model outperforms state-of-the-art VAE methods in prediction (AUC 0.832 vs. 0.790, F1 0.688 vs. 0.640) and performs comparably to ground-truth LVF in predicting survival (concordance 0.72 vs. 0.72). Conclusion: The model facilitates the interpretation of LVF predictions by providing visual context to the ECG signals, offering a general explainable and predictive AI method. Significance: Our explainable AI model can potentially reduce the time and expertise required for ECG analysis.
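The supervised disentanglement objective described in the Methods can be sketched numerically. The following is a minimal illustration, not the paper's implementation: it assumes a single label-specific latent dimension (index 0), stand-in encoder/decoder outputs, and a hypothetical trade-off weight `lam`; the encoder's combined loss rewards reconstruction and prediction from the label-specific dimension while penalizing the adversary's ability to predict the label from the remaining dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: batch of 4 ECG windows, 8 latent dims,
# dimension 0 reserved as the label-specific subset.
z = rng.normal(size=(4, 8))                  # encoder latent codes
x = rng.normal(size=(4, 16))                 # input signals
x_hat = x + 0.1 * rng.normal(size=(4, 16))   # decoder reconstructions
y = np.array([0.0, 1.0, 1.0, 0.0])           # binary LVF labels

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def bce(y_true, p):
    # Binary cross-entropy, averaged over the batch.
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# 1) Reconstruction term: both latent subsets contribute via the decoder.
recon_loss = np.mean((x - x_hat) ** 2)

# 2) Prediction term: classify LVF from the label-specific dimension only.
p_label = sigmoid(z[:, 0])
pred_loss = bce(y, p_label)

# 3) Adversarial term: an adversary (here a fixed random linear probe,
# purely for illustration) predicts y from the non-label-specific dims.
# The encoder is trained to *increase* this loss so those dimensions
# carry no label information.
w_adv = rng.normal(size=7)
p_adv = sigmoid(z[:, 1:] @ w_adv)
adv_loss = bce(y, p_adv)

lam = 0.5  # hypothetical trade-off weight
encoder_loss = recon_loss + pred_loss - lam * adv_loss
```

In a full model the adversary is itself trained to minimize `adv_loss` while the encoder maximizes it, yielding the supervised disentanglement of the latent space; a KL regularizer on `z` (omitted here) completes the standard VAE objective.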