torch_geometric.nn.pool.EdgePooling
- class EdgePooling(in_channels: int, edge_score_method: Optional[Callable] = None, dropout: float = 0.0, add_to_edge_score: float = 0.5)[source]
Bases: Module
The edge pooling operator from the “Towards Graph Pooling by Edge Contraction” and “Edge Contraction Pooling for Graph Neural Networks” papers.
In short, a score is computed for each edge. Edges are contracted iteratively according to that score unless one of their nodes has already been part of a contracted edge.
To duplicate the configuration from the “Towards Graph Pooling by Edge Contraction” paper, use either EdgePooling.compute_edge_score_softmax() or EdgePooling.compute_edge_score_tanh(), and set add_to_edge_score to 0.0.
To duplicate the configuration from the “Edge Contraction Pooling for Graph Neural Networks” paper, set dropout to 0.2.
- Parameters:
in_channels (int) – Size of each input sample.
edge_score_method (callable, optional) – The function to apply to compute the edge score from raw edge scores. By default, this is the softmax over all incoming edges for each node. This function takes in a raw_edge_score tensor of shape [num_edges], an edge_index tensor and the number of nodes num_nodes, and produces a new tensor of the same size as raw_edge_score describing normalized edge scores. Included functions are EdgePooling.compute_edge_score_softmax(), EdgePooling.compute_edge_score_tanh(), and EdgePooling.compute_edge_score_sigmoid(). (default: EdgePooling.compute_edge_score_softmax())
dropout (float, optional) – The probability with which to drop edge scores during training. (default: 0.0)
add_to_edge_score (float, optional) – A value to be added to each computed edge score. Adding this greatly helps with unpooling stability. (default: 0.5)
- static compute_edge_score_softmax(raw_edge_score: Tensor, edge_index: Tensor, num_nodes: int) → Tensor [source]
Normalizes edge scores via softmax application.
- Return type: Tensor
- static compute_edge_score_tanh(raw_edge_score: Tensor, edge_index: Optional[Tensor] = None, num_nodes: Optional[int] = None) → Tensor [source]
Normalizes edge scores via hyperbolic tangent application.
- Return type: Tensor
- static compute_edge_score_sigmoid(raw_edge_score: Tensor, edge_index: Optional[Tensor] = None, num_nodes: Optional[int] = None) → Tensor [source]
Normalizes edge scores via sigmoid application.
- Return type: Tensor
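To illustrate the default softmax normalization, here is a rough plain-PyTorch sketch of grouping raw scores by each edge's target node; edge_score_softmax is a hypothetical helper for illustration, not part of the library, and the actual implementation may differ:

```python
import torch

def edge_score_softmax(raw_edge_score, edge_index, num_nodes):
    # Sketch: softmax of raw scores over all incoming edges of each
    # target node (edge_index[1]).
    tgt = edge_index[1]
    # Per-node maximum, for numerical stability.
    node_max = torch.full((num_nodes,), float("-inf"))
    node_max = node_max.scatter_reduce(0, tgt, raw_edge_score, reduce="amax")
    exp = (raw_edge_score - node_max[tgt]).exp()
    # Per-node normalizing constant.
    denom = torch.zeros(num_nodes).scatter_add(0, tgt, exp)
    return exp / denom[tgt]

# Two edges into node 1 with equal raw scores split the mass evenly;
# the single edge into node 2 gets full weight.
scores = edge_score_softmax(
    torch.zeros(3),
    torch.tensor([[0, 2, 2], [1, 1, 2]]),
    num_nodes=3,
)
```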
- forward(x: Tensor, edge_index: Tensor, batch: Tensor) → Tuple[Tensor, Tensor, Tensor, UnpoolInfo] [source]
Forward pass.
- Parameters:
x (torch.Tensor) – The node features.
edge_index (torch.Tensor) – The edge indices.
batch (torch.Tensor) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each node to a specific example.
- Return types:
x (torch.Tensor) - The pooled node features.
edge_index (torch.Tensor) - The coarsened edge indices.
batch (torch.Tensor) - The coarsened batch vector.
unpool_info (UnpoolInfo) - Information that is consumed by EdgePooling.unpool() for unpooling.
- unpool(x: Tensor, unpool_info: UnpoolInfo) → Tuple[Tensor, Tensor, Tensor] [source]
Unpools a previous edge pooling step.
For unpooling, x should have the same shape as the tensor produced by this layer’s forward() function. It will then produce an unpooled x in addition to edge_index and batch.
- Parameters:
x (torch.Tensor) – The node features.
unpool_info (UnpoolInfo) – Information that has been produced by EdgePooling.forward().
- Return types:
x (torch.Tensor) - The unpooled node features.
edge_index (torch.Tensor) - The new edge indices.
batch (torch.Tensor) - The new batch vector.