torch_geometric.nn.models.PMLP

class PMLP(in_channels: int, hidden_channels: int, out_channels: int, num_layers: int, dropout: float = 0.0, norm: bool = True, bias: bool = True)[source]

Bases: Module

The P(ropagational)MLP model from the “Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs” paper. PMLP is identical to a standard MLP during training, but adopts a GNN architecture (with message passing over edge_index) during testing.

Parameters:
  • in_channels (int) – Size of each input sample.

  • hidden_channels (int) – Size of each hidden sample.

  • out_channels (int) – Size of each output sample.

  • num_layers (int) – The number of layers.

  • dropout (float, optional) – Dropout probability of each hidden embedding. (default: 0.0)

  • norm (bool, optional) – If set to False, will not apply batch normalization. (default: True)

  • bias (bool, optional) – If set to False, the module will not learn additive biases. (default: True)

forward(x: Tensor, edge_index: Optional[Tensor] = None) → Tensor[source]
reset_parameters()[source]

Resets all learnable parameters of the module.