- class AttentionExplainer(reduce: str = 'max')
An explainer that uses the attention coefficients produced by an attention-based GNN (e.g.,
TransformerConv) as edge explanations. Attention scores across layers and heads are aggregated according to the reduce argument.
reduce (str, optional) – The method used to reduce the attention scores across layers and heads. (default: 'max')
- forward(model: Module, x: Tensor, edge_index: Tensor, *, target: Tensor, index: Optional[Union[int, Tensor]] = None, **kwargs) → Explanation
Computes the explanation.
model (torch.nn.Module) – The model to explain.
x (torch.Tensor) – The input node features.
edge_index (torch.Tensor) – The input edge indices.
target (torch.Tensor) – The target of the model.
index (int or torch.Tensor, optional) – The index of the model output to explain. Can be a single index or a tensor of indices. (default: None)
**kwargs (optional) – Additional keyword arguments passed to model.