class PANPooling(in_channels: int, ratio: float = 0.5, min_score: Optional[float] = None, multiplier: float = 1.0, nonlinearity: Union[str, Callable] = 'tanh')

Bases: Module

The path integral based pooling operator from the “Path Integral Based Convolution and Pooling for Graph Neural Networks” paper. PAN pooling performs top-\(k\) pooling, where global node importance is computed from the node features and the MET matrix:

\[{\rm score} = \beta_1 \mathbf{X} \cdot \mathbf{p} + \beta_2 {\rm deg}(\mathbf{M})\]
Parameters:

  • in_channels (int) – Size of each input sample.

  • ratio (float) – Graph pooling ratio, which is used to compute \(k = \lceil \mathrm{ratio} \cdot N \rceil\). This value is ignored if min_score is not None. (default: 0.5)

  • min_score (float, optional) – Minimal node score \(\tilde{\alpha}\) which is used to compute indices of pooled nodes \(\mathbf{i} = \mathbf{y}_i > \tilde{\alpha}\). When this value is not None, the ratio argument is ignored. (default: None)

  • multiplier (float, optional) – Coefficient by which features get multiplied after pooling. This can be useful for large graphs and when min_score is used. (default: 1.0)

  • nonlinearity (str or callable, optional) – The non-linearity to use. (default: "tanh")
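The scoring and selection step described above can be sketched in plain Python (a simplified, dependency-free illustration, not the torch_geometric implementation; the names beta1, beta2, and p mirror the symbols \(\beta_1\), \(\beta_2\), and \(\mathbf{p}\) in the score formula):

```python
import math

def pan_pool_scores(X, p, deg_M, beta1=1.0, beta2=1.0, ratio=0.5):
    """Sketch of PAN pooling's scoring step.

    X      -- N x F node feature matrix (list of lists)
    p      -- learnable projection vector of length F
    deg_M  -- per-node degree of the MET matrix M
    Returns the scores and the (sorted) indices of the kept nodes.
    """
    n = len(X)
    # score_i = beta1 * <x_i, p> + beta2 * deg(M)_i
    scores = [beta1 * sum(xi * pi for xi, pi in zip(x, p)) + beta2 * d
              for x, d in zip(X, deg_M)]
    # keep the top k = ceil(ratio * N) nodes by score
    k = math.ceil(ratio * n)
    perm = sorted(range(n), key=lambda i: scores[i], reverse=True)[:k]
    return scores, sorted(perm)
```

With ratio = 0.5 on a four-node graph, \(k = \lceil 0.5 \cdot 4 \rceil = 2\), so the two highest-scoring nodes survive pooling.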


reset_parameters()

Resets all learnable parameters of the module.

forward(x: Tensor, M: SparseTensor, batch: Optional[Tensor] = None) → Tuple[Tensor, Tensor, Tensor, Tensor, Tensor, Tensor]

Parameters:
  • x (torch.Tensor) – The node feature matrix.

  • M (SparseTensor) – The MET matrix \(\mathbf{M}\).

  • batch (torch.Tensor, optional) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each node to a specific example. (default: None)
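The role of the batch vector can be sketched as follows: top-\(k\) selection is applied per example, so each graph \(b\) keeps \(k = \lceil \mathrm{ratio} \cdot N_b \rceil\) of its own nodes rather than competing across the whole batch. This is a simplified, dependency-free illustration of that grouping, not the torch_geometric implementation:

```python
import math
from collections import defaultdict

def topk_per_example(scores, batch, ratio=0.5):
    """Keep the top ceil(ratio * N_b) nodes within each example b.

    scores -- per-node importance scores
    batch  -- batch vector assigning each node to an example id
    Returns the sorted global indices of the kept nodes.
    """
    groups = defaultdict(list)
    for i, b in enumerate(batch):
        groups[b].append(i)
    keep = []
    for idx in groups.values():
        k = math.ceil(ratio * len(idx))
        keep.extend(sorted(idx, key=lambda i: scores[i], reverse=True)[:k])
    return sorted(keep)
```

When batch is None, all nodes belong to a single example and the selection reduces to plain global top-\(k\).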