class GroupAddRev(conv: Union[Module, ModuleList], split_dim: int = -1, num_groups: Optional[int] = None, disable: bool = False, num_bwd_passes: int = 1)[source]

Bases: InvertibleModule

The Grouped Reversible GNN module from the “Training Graph Neural Networks with 1000 Layers” paper. This module enables training of arbitrarily deep GNNs with a memory complexity independent of the number of layers.

It does so by partitioning input node features \(\mathbf{X}\) into \(C\) groups across the feature dimension. Then, a grouped reversible GNN block \(f_{\theta(i)}\) operates on a group of inputs and produces a group of outputs:

\[
\begin{aligned}
\mathbf{X}^{\prime}_0 &= \sum_{i=2}^{C} \mathbf{X}_i \\
\mathbf{X}^{\prime}_i &= f_{\theta(i)} \left( \mathbf{X}^{\prime}_{i-1}, \mathbf{A} \right) + \mathbf{X}_i
\end{aligned}
\]

for all \(i \in \{ 1, \ldots, C \}\).
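Because each output group is an additive update, every input group can be recovered exactly as \(\mathbf{X}_i = \mathbf{X}^{\prime}_i - f_{\theta(i)}(\mathbf{X}^{\prime}_{i-1}, \mathbf{A})\), so intermediate activations never need to be stored. The recurrence and its exact inversion can be sketched in plain Python (scalar "groups" and a stand-in `f` that ignores \(\mathbf{A}\) are simplifications for illustration, not the library's implementation):

```python
from typing import Callable, List

def rev_forward(xs: List[float], f: Callable[[float], float]) -> List[float]:
    """Grouped reversible forward pass over C groups (1-indexed in the paper)."""
    y_prev = sum(xs[1:])            # X'_0 = sum of groups 2..C
    ys = []
    for x in xs:                    # i = 1..C
        y = f(y_prev) + x           # X'_i = f(X'_{i-1}) + X_i
        ys.append(y)
        y_prev = y
    return ys

def rev_inverse(ys: List[float], f: Callable[[float], float]) -> List[float]:
    """Recover the inputs from the outputs alone -- no stored activations."""
    C = len(ys)
    xs = [0.0] * C
    # Groups C..2 follow directly: X_i = X'_i - f(X'_{i-1}).
    for i in range(C - 1, 0, -1):
        xs[i] = ys[i] - f(ys[i - 1])
    # Rebuild X'_0 from the recovered groups, then recover X_1.
    y0 = sum(xs[1:])
    xs[0] = ys[0] - f(y0)
    return xs
```

Round-tripping `rev_inverse(rev_forward(xs, f), f)` reproduces the original groups, which is what lets the backward pass recompute activations on the fly instead of caching them.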


For an example of using GroupAddRev, see examples/

  • conv (torch.nn.Module or torch.nn.ModuleList) – A seed GNN. The input and output feature dimensions need to match.

  • split_dim (int, optional) – The dimension across which to split groups. (default: -1)

  • num_groups (int, optional) – The number of groups \(C\). (default: None)

  • disable (bool, optional) – If set to True, will disable the usage of InvertibleFunction and will execute the module without memory savings. (default: False)

  • num_bwd_passes (int, optional) – Number of backward passes to retain a link with the output. After the last backward pass the output is discarded and memory is freed. (default: 1)


reset_parameters()[source]

Resets all learnable parameters of the module.