# torch_geometric.explain.metric.unfaithfulness

unfaithfulness(explainer: Explainer, explanation: Explanation, top_k: Optional[int] = None) → float [source]

Evaluates how faithful an Explanation is to an underlying GNN predictor, as described in the “Evaluating Explainability for Graph Neural Networks” paper.

In particular, the graph explanation unfaithfulness metric is defined as

$\textrm{GEF}(y, \hat{y}) = 1 - \exp(- \textrm{KL}(y || \hat{y}))$

where $y$ refers to the prediction probability vector obtained from the original graph, and $\hat{y}$ refers to the prediction probability vector obtained from the masked subgraph. The Kullback-Leibler (KL) divergence quantifies the distance between the two probability distributions.
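As a rough numeric illustration of the formula (a standalone sketch, not the library's implementation), the score can be computed from two probability vectors; the helper name `gef` and the epsilon smoothing are assumptions for this example:

```python
import math

def gef(y: list, y_hat: list, eps: float = 1e-12) -> float:
    """Graph explanation unfaithfulness: 1 - exp(-KL(y || y_hat))."""
    # KL(y || y_hat) = sum_i y_i * log(y_i / y_hat_i), smoothed with eps
    kl = sum(p * math.log((p + eps) / (q + eps)) for p, q in zip(y, y_hat))
    return 1.0 - math.exp(-kl)

# An explanation whose masked subgraph reproduces the original
# prediction is perfectly faithful:
print(gef([0.7, 0.2, 0.1], [0.7, 0.2, 0.1]))  # 0.0
# The more the masked prediction diverges, the closer the score gets to 1:
print(gef([0.7, 0.2, 0.1], [0.1, 0.2, 0.7]))
```

A score of 0 therefore indicates a perfectly faithful explanation, while values approaching 1 indicate that the explanation fails to preserve the model's prediction.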

Parameters
• explainer (Explainer) – The explainer to evaluate.

• explanation (Explanation) – The explanation to evaluate.

• top_k (int, optional) – If set, will only keep the original values of the top-$k$ node features identified by an explanation. If set to None, will use explanation.node_mask as it is for masking node features. (default: None)
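The top-$k$ selection described above can be sketched as follows. This is a hedged illustration only: the per-feature importance aggregation (summing the soft mask over nodes) is an assumption made for this example and may differ from the library's exact implementation:

```python
# A soft node-feature mask: rows are nodes, columns are features.
node_mask = [[0.9, 0.1, 0.5],
             [0.2, 0.8, 0.3]]
top_k = 2

# Assumed aggregation: total importance of each feature across all nodes.
num_feats = len(node_mask[0])
importance = [sum(row[j] for row in node_mask) for j in range(num_feats)]

# Keep the indices of the top-k most important features.
keep = sorted(range(num_feats), key=lambda j: importance[j], reverse=True)[:top_k]

# Build a hard mask: selected features keep their original values, the
# rest are zeroed out before re-running the model.
hard_mask = [[1.0 if j in keep else 0.0 for j in range(num_feats)]
             for row in node_mask]
print(hard_mask)
```

With `top_k=None`, no such hardening happens and the soft `explanation.node_mask` is applied to the node features directly.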