torch_geometric.nn.conv.AntiSymmetricConv

class AntiSymmetricConv(in_channels: int, phi: Optional[MessagePassing] = None, num_iters: int = 1, epsilon: float = 0.1, gamma: float = 0.1, act: Optional[Union[str, Callable]] = 'tanh', act_kwargs: Optional[Dict[str, Any]] = None, bias: bool = True)[source]

Bases: Module

The anti-symmetric graph convolutional operator from the “Anti-Symmetric DGN: a stable architecture for Deep Graph Networks” paper.

\[\mathbf{x}^{\prime}_i = \mathbf{x}_i + \epsilon \cdot \sigma \left( (\mathbf{W}-\mathbf{W}^T-\gamma \mathbf{I}) \mathbf{x}_i + \Phi(\mathbf{X}, \mathcal{N}_i) + \mathbf{b}\right),\]

where \(\Phi(\mathbf{X}, \mathcal{N}_i)\) denotes a MessagePassing layer that aggregates information from the neighborhood \(\mathcal{N}_i\) of node \(i\).
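
The anti-symmetric parametrization is what makes the update stable: for any weight matrix \(\mathbf{W}\), the matrix \(\mathbf{W}-\mathbf{W}^T\) is skew-symmetric and therefore has purely imaginary eigenvalues, so subtracting \(\gamma \mathbf{I}\) pins the real part of every eigenvalue at \(-\gamma\). A minimal sketch of this property (the matrix size is arbitrary):

    import torch

    W = torch.randn(8, 8)
    gamma = 0.1

    # W - W^T is skew-symmetric, so its eigenvalues are purely imaginary;
    # subtracting gamma * I shifts every real part to exactly -gamma.
    M = W - W.T - gamma * torch.eye(8)
    eigvals = torch.linalg.eigvals(M)
    print(eigvals.real)  # all entries are (numerically) -gamma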

Parameters:
  • in_channels (int) – Size of each input sample.

  • phi (MessagePassing, optional) – The message passing module \(\Phi\). If set to None, will use a GCNConv layer as default. (default: None)

  • num_iters (int, optional) – The number of times the anti-symmetric deep graph network operator is called. (default: 1)

  • epsilon (float, optional) – The discretization step size \(\epsilon\). (default: 0.1)

  • gamma (float, optional) – The strength of the diffusion \(\gamma\). It regulates the stability of the method. (default: 0.1)

  • act (str or Callable, optional) – The non-linear activation function \(\sigma\), e.g., "tanh" or "relu". (default: "tanh")

  • act_kwargs (Dict[str, Any], optional) – Arguments passed to the respective activation function defined by act. (default: None)

  • bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
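
A minimal usage sketch; the tensor sizes and hyperparameter values below are illustrative only:

    import torch
    from torch_geometric.nn import AntiSymmetricConv, GCNConv

    x = torch.randn(4, 16)                     # 4 nodes, 16 features each
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 2, 3, 0]])  # 4 directed edges

    # Default configuration: phi is a GCNConv layer.
    conv = AntiSymmetricConv(in_channels=16, num_iters=3)
    out = conv(x, edge_index)                  # shape: [4, 16]

    # Custom message passing module and activation keyword arguments.
    conv = AntiSymmetricConv(
        in_channels=16,
        phi=GCNConv(16, 16, bias=False),
        num_iters=5,
        epsilon=0.05,
        gamma=0.2,
        act='leaky_relu',
        act_kwargs={'negative_slope': 0.2},
    )
    out = conv(x, edge_index)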

Shapes:
  • input: node features \((|\mathcal{V}|, F_{in})\), edge indices \((2, |\mathcal{E}|)\), edge weights \((|\mathcal{E}|)\) (optional)

  • output: node features \((|\mathcal{V}|, F_{in})\)

forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], *args, **kwargs) → Tensor[source]

Runs the forward pass of the module. Any additional positional or keyword arguments are forwarded to the message passing module \(\Phi\).
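
A sketch of a forward call with the optional edge weights; passing them positionally assumes the default GCNConv as \(\Phi\), which accepts an edge_weight argument:

    import torch
    from torch_geometric.nn import AntiSymmetricConv

    x = torch.randn(4, 16)
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 2, 3, 0]])
    edge_weight = torch.rand(edge_index.size(1))  # one weight per edge

    conv = AntiSymmetricConv(in_channels=16)
    out = conv(x, edge_index, edge_weight)  # extra args are passed to phi
    assert out.shape == x.shape             # output keeps shape (|V|, F_in)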

reset_parameters()[source]

Resets all learnable parameters of the module.