torch_geometric.explain.algorithm.AttentionExplainer

class AttentionExplainer(reduce: str = 'max')[source]

Bases: ExplainerAlgorithm

An explainer that uses the attention coefficients produced by an attention-based GNN (e.g., GATConv, GATv2Conv, or TransformerConv) as edge explanations. Attention scores across layers and heads are aggregated according to the reduce argument.

Parameters:

reduce (str, optional) – The method to reduce the attention scores across layers and heads. (default: "max")
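
A minimal end-to-end sketch of using this explainer through the torch_geometric.explain.Explainer interface. The two-layer GAT model, the random node features, and the configuration values below are illustrative placeholders, not part of this API:

    import torch
    import torch.nn.functional as F

    from torch_geometric.explain import Explainer, AttentionExplainer
    from torch_geometric.nn import GATConv


    class GAT(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv1 = GATConv(16, 8, heads=4)
            self.conv2 = GATConv(8 * 4, 7, heads=1)

        def forward(self, x, edge_index):
            x = F.elu(self.conv1(x, edge_index))
            return self.conv2(x, edge_index)


    # Toy input: 10 nodes with 16 features each and 40 random edges.
    x = torch.randn(10, 16)
    edge_index = torch.randint(0, 10, (2, 40))

    explainer = Explainer(
        model=GAT(),
        algorithm=AttentionExplainer(reduce='max'),
        explanation_type='model',
        edge_mask_type='object',
        model_config=dict(
            mode='multiclass_classification',
            task_level='node',
            return_type='raw',
        ),
    )

    # Explain the model output for node 0; the resulting edge_mask holds
    # the aggregated attention coefficients, one score per edge.
    explanation = explainer(x, edge_index, index=0)
    print(explanation.edge_mask)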

forward(model: Module, x: Tensor, edge_index: Tensor, *, target: Tensor, index: Optional[Union[int, Tensor]] = None, **kwargs) → Explanation[source]

Computes the explanation.

Parameters:
  • model (torch.nn.Module) – The model to explain.

  • x (Union[torch.Tensor, Dict[NodeType, torch.Tensor]]) – The input node features of a homogeneous or heterogeneous graph.

  • edge_index (Union[torch.Tensor, Dict[NodeType, torch.Tensor]]) – The input edge indices of a homogeneous or heterogeneous graph.

  • target (torch.Tensor) – The target of the model output.

  • index (Union[int, Tensor], optional) – The index of the model output to explain. Can be a single index or a tensor of indices. (default: None)

  • **kwargs (optional) – Additional keyword arguments passed to model.
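
In typical workflows forward() is called indirectly through torch_geometric.explain.Explainer rather than by hand. The following is a rough sketch of a direct call; the single-layer model, the toy tensors, and the target value (the model's own prediction, used here only as a stand-in) are assumptions, not part of this API:

    import torch

    from torch_geometric.explain import AttentionExplainer
    from torch_geometric.nn import GATConv


    class Net(torch.nn.Module):
        def __init__(self):
            super().__init__()
            self.conv = GATConv(16, 7, heads=2, concat=False)

        def forward(self, x, edge_index):
            return self.conv(x, edge_index)


    model = Net()
    x = torch.randn(10, 16)
    edge_index = torch.randint(0, 10, (2, 40))

    algorithm = AttentionExplainer(reduce='mean')

    # `target` is a required keyword argument; the model's own prediction
    # is passed here as a placeholder.
    target = model(x, edge_index).argmax(dim=-1)

    # Restrict the explanation to the model outputs of nodes 0 and 3 by
    # passing a tensor of indices.
    explanation = algorithm.forward(
        model, x, edge_index,
        target=target,
        index=torch.tensor([0, 3]),
    )
    print(explanation.edge_mask.size())  # torch.Size([40]): one score per edge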

supports() → bool[source]

Checks if the explainer supports the user-defined settings provided in self.explainer_config and self.model_config.