Talk #D1.03

21.05.2024, 11:45 – 12:15





MEGAN: Multi-explanation Graph Attention Network

Pascal Friederich,a,b Jonas Teufel,a Luca Torresi,a Patrick Reiser,a,b



Most current explainable AI methods are post-hoc methods that analyze trained models and only generate importance annotations, which often leads to an accuracy-explainability trade-off and limits interpretability. Here, we propose the multi-explanation graph attention network (MEGAN) [1]. Unlike existing graph explainability methods, our network can produce node and edge attributional explanations along multiple channels, the number of which is independent of the task specification. This proves crucial for improving the interpretability of graph regression predictions, as explanations can be split into positive and negative evidence with respect to a reference value. Additionally, our attention-based network is fully differentiable, so explanations can be actively trained in an explanation-supervised manner. We first validate our model on a synthetic graph regression dataset with known ground-truth explanations. Our network outperforms existing baseline explainability methods in both the single- and the multi-explanation case, achieving near-perfect explanation accuracy under explanation supervision. Finally, we demonstrate our model's capabilities on multiple real-world datasets, e.g., molecular solubility prediction (see Fig. 1). We find that our model produces sparse, high-fidelity explanations consistent with human intuition about these tasks.
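To make the multi-channel idea concrete, the following is a minimal sketch of a graph attention layer with several parallel explanation channels, written in plain PyTorch purely for illustration. It is not the authors' implementation, and all names (MultiChannelAttention, num_channels, true_masks, etc.) are hypothetical; the per-channel attention weights play the role of the edge explanations described above.

import torch
import torch.nn as nn

class MultiChannelAttention(nn.Module):
    """One message-passing layer with K parallel attention channels.

    Each channel produces its own edge attention weights, which can be
    read out as that channel's edge explanation (e.g. one channel for
    positive and one for negative evidence in a regression task).
    """

    def __init__(self, in_dim: int, out_dim: int, num_channels: int):
        super().__init__()
        self.num_channels = num_channels
        self.lin = nn.Linear(in_dim, out_dim)
        # One attention scorer per explanation channel.
        self.att = nn.ModuleList(
            [nn.Linear(2 * out_dim, 1) for _ in range(num_channels)]
        )

    def forward(self, x, edge_index):
        # x: (N, in_dim) node features; edge_index: (2, E) source/target indices.
        h = self.lin(x)
        src, dst = edge_index
        pair = torch.cat([h[src], h[dst]], dim=-1)   # (E, 2*out_dim)
        # Per-channel edge attentions in [0, 1] -- the edge explanations.
        alpha = torch.stack(
            [torch.sigmoid(a(pair)).squeeze(-1) for a in self.att], dim=0
        )                                            # (K, E)
        # Aggregate messages per channel: attention-weighted sum over incoming edges.
        out = h.new_zeros(self.num_channels, x.size(0), h.size(-1))
        for k in range(self.num_channels):
            out[k].index_add_(0, dst, alpha[k].unsqueeze(-1) * h[src])
        return out.mean(dim=0), alpha  # fused node features + explanations

# Toy usage: 4 nodes, 3 directed edges, 2 explanation channels.
x = torch.randn(4, 8)
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
layer = MultiChannelAttention(8, 16, num_channels=2)
h, edge_expl = layer(x, edge_index)  # edge_expl: (2, 3)

# Because the layer is fully differentiable, the explanations themselves can
# be supervised when ground-truth masks exist, e.g. with a BCE term such as
# torch.nn.functional.binary_cross_entropy(edge_expl, true_masks).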


Figure 1. Example explanations generated by MEGAN and GNNExplainer for the prediction of water solubility. Explanations are represented as bold highlights of the corresponding graph elements. (a) Examples of molecules dominated by large carbon structures, which are known to have a negative influence on water solubility. (b) Examples of molecules containing oxygen functional groups, which are known to have a positive influence on water solubility. (c) Examples of molecules containing nitrogen groups, which are also known to have a positive influence.


  1. J. Teufel, L. Torresi, P. Reiser, P. Friederich, MEGAN: Multi-Explanation Graph Attention Network, World Conference on Explainable Artificial Intelligence 2023, 338–360.





Prof. Pascal Friederich


  •   Karlsruher Institut für Technologie (KIT), Germany