torch_geometric.nn.conv.ResGatedGraphConv

class ResGatedGraphConv(in_channels: Union[int, Tuple[int, int]], out_channels: int, act: Optional[Callable] = Sigmoid(), edge_dim: Optional[int] = None, root_weight: bool = True, bias: bool = True, **kwargs)[source]

Bases: MessagePassing

The residual gated graph convolutional operator from the “Residual Gated Graph ConvNets” paper.

\[\mathbf{x}^{\prime}_i = \mathbf{W}_1 \mathbf{x}_i + \sum_{j \in \mathcal{N}(i)} \eta_{i,j} \odot \mathbf{W}_2 \mathbf{x}_j\]

where the gate \(\eta_{i,j}\) is defined as

\[\eta_{i,j} = \sigma(\mathbf{W}_3 \mathbf{x}_i + \mathbf{W}_4 \mathbf{x}_j)\]

with \(\sigma\) denoting the sigmoid function.
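
A minimal usage sketch (the graph, node features, and channel sizes below are purely illustrative):

    import torch
    from torch_geometric.nn import ResGatedGraphConv

    # Toy graph: 4 nodes with 16-dimensional features and 4 directed edges.
    x = torch.randn(4, 16)
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 0, 3, 2]])

    conv = ResGatedGraphConv(in_channels=16, out_channels=32)
    out = conv(x, edge_index)  # updated node features of shape [4, 32]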

Parameters:
  • in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.

  • out_channels (int) – Size of each output sample.

  • act (callable, optional) – Gating function \(\sigma\). (default: torch.nn.Sigmoid())

  • edge_dim (int, optional) – Edge feature dimensionality (in case there are any). (default: None)

  • root_weight (bool, optional) – If set to False, the layer will not add transformed root node features to the output. (default: True)

  • bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)

  • **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.

Shapes:
  • inputs: node features \((|\mathcal{V}|, F_{in})\) or \(((|\mathcal{V_s}|, F_{s}), (|\mathcal{V_t}|, F_{t}))\) if bipartite, edge indices \((2, |\mathcal{E}|)\)

  • outputs: node features \((|\mathcal{V}|, F_{out})\) or \((|\mathcal{V_t}|, F_{out})\) if bipartite (see the bipartite sketch below)
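
To make the bipartite shapes concrete, here is a sketch that passes a tuple of input sizes and a tuple of node feature matrices; the node counts and feature dimensions are made up for illustration:

    import torch
    from torch_geometric.nn import ResGatedGraphConv

    # Bipartite message passing: 4 source nodes (8 features), 2 target nodes (16 features).
    x_s = torch.randn(4, 8)
    x_t = torch.randn(2, 16)
    edge_index = torch.tensor([[0, 1, 2, 3],   # source node indices
                               [0, 0, 1, 1]])  # target node indices

    conv = ResGatedGraphConv(in_channels=(8, 16), out_channels=32)
    out = conv((x_s, x_t), edge_index)  # output of shape [2, 32], one row per target node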

forward(x: Union[Tensor, Tuple[Tensor, Tensor]], edge_index: Union[Tensor, SparseTensor], edge_attr: Optional[Tensor] = None) → Tensor[source]

Runs the forward pass of the module.
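
If edge_dim is set at construction time, per-edge features can be supplied via edge_attr; the sketch below assumes 8-dimensional edge features on the same toy graph as above:

    import torch
    from torch_geometric.nn import ResGatedGraphConv

    x = torch.randn(4, 16)
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 0, 3, 2]])
    edge_attr = torch.randn(edge_index.size(1), 8)  # one 8-dimensional feature per edge

    conv = ResGatedGraphConv(16, 32, edge_dim=8)
    out = conv(x, edge_index, edge_attr)  # node features of shape [4, 32]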

reset_parameters()[source]

Resets all learnable parameters of the module.