torch_geometric.nn.models.LINKX

class LINKX(num_nodes: int, in_channels: int, hidden_channels: int, out_channels: int, num_layers: int, num_edge_layers: int = 1, num_node_layers: int = 1, dropout: float = 0.0)[source]

Bases: Module

The LINKX model from the “Large Scale Learning on Non-Homophilous Graphs: New Benchmarks and Strong Simple Methods” paper.

\[
\begin{aligned}
\mathbf{H}_{\mathbf{A}} &= \textrm{MLP}_{\mathbf{A}}(\mathbf{A}) \\
\mathbf{H}_{\mathbf{X}} &= \textrm{MLP}_{\mathbf{X}}(\mathbf{X}) \\
\mathbf{Y} &= \textrm{MLP}_{f} \left( \sigma \left( \mathbf{W} [\mathbf{H}_{\mathbf{A}}, \mathbf{H}_{\mathbf{X}}] + \mathbf{H}_{\mathbf{A}} + \mathbf{H}_{\mathbf{X}} \right) \right)
\end{aligned}
\]

Note

For an example of using LINKX, see examples/linkx.py.
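As a rough illustration of the equations above, the following sketch implements them in plain PyTorch, assuming single-layer MLPs, a dense adjacency matrix, and ReLU for \(\sigma\). It is a simplified sketch, not the library implementation:

    import torch
    import torch.nn.functional as F

    # Placeholder sizes chosen only for illustration.
    num_nodes, in_channels, hidden_channels, out_channels = 4, 16, 32, 3

    A = torch.randint(0, 2, (num_nodes, num_nodes)).float()  # dense adjacency A
    X = torch.randn(num_nodes, in_channels)                   # node features X

    mlp_A = torch.nn.Linear(num_nodes, hidden_channels)        # MLP_A
    mlp_X = torch.nn.Linear(in_channels, hidden_channels)      # MLP_X
    W = torch.nn.Linear(2 * hidden_channels, hidden_channels)  # W acting on [H_A, H_X]
    mlp_f = torch.nn.Linear(hidden_channels, out_channels)     # MLP_f

    H_A = mlp_A(A)
    H_X = mlp_X(X)
    Y = mlp_f(F.relu(W(torch.cat([H_A, H_X], dim=-1)) + H_A + H_X))
    print(Y.shape)  # torch.Size([4, 3])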

Parameters:
  • num_nodes (int) – The number of nodes in the graph.

  • in_channels (int) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method.

  • hidden_channels (int) – Size of each hidden sample.

  • out_channels (int) – Size of each output sample.

  • num_layers (int) – Number of layers of \(\textrm{MLP}_{f}\).

  • num_edge_layers (int, optional) – Number of layers of \(\textrm{MLP}_{\mathbf{A}}\). (default: 1)

  • num_node_layers (int, optional) – Number of layers of \(\textrm{MLP}_{\mathbf{X}}\). (default: 1)

  • dropout (float, optional) – Dropout probability of each hidden embedding. (default: 0.0)
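A hypothetical instantiation, with placeholder sizes chosen only for illustration; the arguments correspond one-to-one to the parameters listed above:

    from torch_geometric.nn.models import LINKX

    model = LINKX(
        num_nodes=1000,      # nodes in the graph
        in_channels=128,     # node feature size
        hidden_channels=64,
        out_channels=10,     # e.g., number of classes
        num_layers=2,        # depth of MLP_f
        num_edge_layers=1,   # depth of MLP_A
        num_node_layers=1,   # depth of MLP_X
        dropout=0.5,
    )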

forward(x: Optional[Tensor], edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
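For example, a forward pass on random placeholder data might look as follows (sizes and edge count are illustrative):

    import torch
    from torch_geometric.nn.models import LINKX

    num_nodes, in_channels, out_channels = 1000, 128, 10
    model = LINKX(num_nodes=num_nodes, in_channels=in_channels,
                  hidden_channels=64, out_channels=out_channels, num_layers=2)

    x = torch.randn(num_nodes, in_channels)              # node features (may also be None)
    edge_index = torch.randint(0, num_nodes, (2, 5000))  # COO connectivity
    out = model(x, edge_index)                           # shape [num_nodes, out_channels]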
reset_parameters()[source]

Resets all learnable parameters of the module.