# torch_geometric.nn.pool.SAGPooling

class SAGPooling(in_channels: int, ratio: Union[float, int] = 0.5, GNN: torch.nn.Module = GraphConv, min_score: Optional[float] = None, multiplier: float = 1.0, nonlinearity: Union[str, Callable] = 'tanh', **kwargs)

Bases: Module

The self-attention pooling operator from the “Self-Attention Graph Pooling” and “Understanding Attention and Generalization in Graph Neural Networks” papers.

If min_score $\tilde{\alpha}$ is None, computes:

$$
\begin{aligned}
\mathbf{y} &= \textrm{GNN}(\mathbf{X}, \mathbf{A})\\
\mathbf{i} &= \mathrm{top}_k(\mathbf{y})\\
\mathbf{X}^{\prime} &= (\mathbf{X} \odot \mathrm{tanh}(\mathbf{y}))_{\mathbf{i}}\\
\mathbf{A}^{\prime} &= \mathbf{A}_{\mathbf{i},\mathbf{i}}
\end{aligned}
$$
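The four equations above can be sketched in plain PyTorch. As an illustrative assumption, the GNN score is replaced here by a fixed linear projection (SAGPooling itself uses a learnable GraphConv layer by default):

```python
import torch

torch.manual_seed(0)

num_nodes, channels, ratio = 6, 4, 0.5
x = torch.randn(num_nodes, channels)   # node feature matrix X
edge_index = torch.tensor([[0, 1, 2, 3, 4],
                           [1, 2, 3, 4, 5]])  # edges A (COO format)

# Stand-in for GNN(X, A): a fixed linear projection (assumption for
# illustration only; the real operator uses a graph conv layer).
proj = torch.randn(channels)

y = x @ proj                                     # y = GNN(X, A)
k = int(ratio * num_nodes)
i = torch.topk(y, k).indices                     # i = top_k(y)
x_pooled = (x * torch.tanh(y).unsqueeze(-1))[i]  # X' = (X ⊙ tanh(y))_i

# A' = A_{i,i}: keep only edges whose endpoints both survive pooling.
mask = torch.isin(edge_index, i).all(dim=0)
edge_index_pooled = edge_index[:, mask]

print(x_pooled.shape)  # torch.Size([3, 4])
```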

If min_score $\tilde{\alpha}$ is a value in [0, 1], computes:

$$
\begin{aligned}
\mathbf{y} &= \mathrm{softmax}(\textrm{GNN}(\mathbf{X},\mathbf{A}))\\
\mathbf{i} &= \mathbf{y}_i > \tilde{\alpha}\\
\mathbf{X}^{\prime} &= (\mathbf{X} \odot \mathbf{y})_{\mathbf{i}}\\
\mathbf{A}^{\prime} &= \mathbf{A}_{\mathbf{i},\mathbf{i}}.
\end{aligned}
$$

Projection scores are learned by a graph neural network layer.

Parameters

- **in_channels** (int) – Size of each input sample.
- **ratio** (float or int) – The graph pooling ratio, used to compute $k = \lceil \mathrm{ratio} \cdot N \rceil$, or the value of $k$ itself if it is an int. This value is ignored if min_score is not None. (default: 0.5)
- **GNN** (torch.nn.Module, optional) – A graph neural network layer for computing projection scores. (default: GraphConv)
- **min_score** (float, optional) – Minimal node score $\tilde{\alpha}$, which defines the threshold for node selection. When this value is not None, it takes precedence over ratio. (default: None)
- **multiplier** (float, optional) – Coefficient by which features get multiplied after pooling. (default: 1.0)
- **nonlinearity** (str or callable, optional) – The non-linearity to use when computing attention scores. (default: 'tanh')
- **kwargs** (optional) – Additional parameters for initializing the GNN layer.
reset_parameters()

Resets all learnable parameters of the module.

forward(x: Tensor, edge_index: Tensor, edge_attr: Optional[Tensor] = None, batch: Optional[Tensor] = None, attn: Optional[Tensor] = None) → Tuple[Tensor, Tensor, Optional[Tensor], Optional[Tensor], Tensor, Tensor]

Parameters

- **x** (torch.Tensor) – The node feature matrix.
- **edge_index** (torch.Tensor) – The edge indices.
- **edge_attr** (torch.Tensor, optional) – The edge features. (default: None)
- **batch** (torch.Tensor, optional) – The batch vector, which assigns each node to a specific graph. (default: None)
- **attn** (torch.Tensor, optional) – Optional node-level matrix to use for computing attention scores instead of x. (default: None)

Returns the tuple (x, edge_index, edge_attr, batch, perm, score) holding the pooled node features, edge indices, edge features, and batch vector, together with the indices of the kept nodes and their attention scores.