torch_geometric.nn.conv.EGConv

class EGConv(in_channels: int, out_channels: int, aggregators: List[str] = ['symnorm'], num_heads: int = 8, num_bases: int = 4, cached: bool = False, add_self_loops: bool = True, bias: bool = True, **kwargs)[source]

Bases: MessagePassing

The Efficient Graph Convolution from the “Adaptive Filters and Aggregator Fusion for Efficient Graph Convolutions” paper.

Its node-wise formulation is given by:

\[\mathbf{x}_i^{\prime} = {\LARGE ||}_{h=1}^H \sum_{\oplus \in \mathcal{A}} \sum_{b = 1}^B w_{i, h, \oplus, b} \; \underset{j \in \mathcal{N}(i) \cup \{i\}}{\bigoplus} \mathbf{W}_b \mathbf{x}_{j}\]

with \(\mathbf{W}_b\) denoting a basis weight, \(\oplus\) denoting an aggregator, and \(w\) denoting per-vertex weighting coefficients across different heads, bases and aggregators.

EGC retains \(\mathcal{O}(|\mathcal{V}|)\) memory usage, making it a sensible alternative to GCNConv, SAGEConv or GINConv.

Note

For an example of using EGConv, see examples/egc.py.
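
A minimal, hedged usage sketch (the toy graph and hyper-parameters below are illustrative and not taken from examples/egc.py):

```python
import torch
from torch_geometric.nn import EGConv

# Toy graph: 4 nodes with 16-dimensional features; undirected edges are
# listed in both directions.
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])

# Combine several aggregators; out_channels must be divisible by num_heads.
conv = EGConv(in_channels=16, out_channels=32,
              aggregators=['symnorm', 'mean', 'max'],
              num_heads=4, num_bases=4)

out = conv(x, edge_index)
print(out.shape)  # torch.Size([4, 32])
```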

Parameters:
  • in_channels (int) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method.

  • out_channels (int) – Size of each output sample.

  • aggregators (List[str], optional) – Aggregators to be used. Supported aggregators are "sum", "mean", "symnorm", "max", "min", "std", "var". Combining multiple aggregators can improve performance. (default: ["symnorm"])

  • num_heads (int, optional) – Number of heads \(H\) to use. Must have out_channels % num_heads == 0. It is recommended to set num_heads >= num_bases. (default: 8)

  • num_bases (int, optional) – Number of basis weights \(B\) to use. (default: 4)

  • cached (bool, optional) – If set to True, the layer will cache the self-loop-augmented edge index computed on the first execution, along with the symmetric normalization of the edge weights if the "symnorm" aggregator is used. This parameter should only be set to True in transductive learning scenarios, i.e. when the same graph is reused on every forward pass (see the sketch after this parameter list). (default: False)

  • add_self_loops (bool, optional) – If set to False, will not add self-loops to the input graph. (default: True)

  • bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
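
Since cached=True only pays off when the same graph is reused on every call (see the cached parameter above), a hedged sketch of a full-batch transductive model is shown below; EGCNet and its argument names are placeholders rather than part of this API:

```python
import torch
from torch_geometric.nn import EGConv

class EGCNet(torch.nn.Module):
    # Hypothetical transductive model: hidden_channels must be divisible by
    # num_heads (8 by default). cached=True reuses the self-loop edge index
    # and the "symnorm" normalization across epochs on a fixed graph.
    def __init__(self, in_channels: int, hidden_channels: int, num_classes: int):
        super().__init__()
        self.conv1 = EGConv(in_channels, hidden_channels, cached=True)
        self.conv2 = EGConv(hidden_channels, hidden_channels, cached=True)
        self.lin = torch.nn.Linear(hidden_channels, num_classes)

    def forward(self, x, edge_index):
        x = self.conv1(x, edge_index).relu()
        x = self.conv2(x, edge_index).relu()
        return self.lin(x)
```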

Shapes:
  • input: node features \((|\mathcal{V}|, F_{in})\), edge indices \((2, |\mathcal{E}|)\)

  • output: node features \((|\mathcal{V}|, F_{out})\)

forward(x: Tensor, edge_index: Union[Tensor, SparseTensor]) → Tensor[source]

Runs the forward pass of the module.
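
forward also accepts a torch_sparse.SparseTensor in place of the edge_index tensor. A hedged sketch, assuming torch_sparse is installed (by PyG convention the sparse adjacency is passed transposed, which is a no-op for the undirected toy graph below):

```python
import torch
from torch_sparse import SparseTensor
from torch_geometric.nn import EGConv

x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])

# Build the (transposed) sparse adjacency of the 4-node graph.
adj_t = SparseTensor.from_edge_index(edge_index, sparse_sizes=(4, 4)).t()

conv = EGConv(16, 32)
out = conv(x, adj_t)
print(out.shape)  # torch.Size([4, 32])
```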

reset_parameters()[source]

Resets all learnable parameters of the module.