torch_geometric.nn.models.MLP

class MLP(channel_list: Optional[Union[int, List[int]]] = None, *, in_channels: Optional[int] = None, hidden_channels: Optional[int] = None, out_channels: Optional[int] = None, num_layers: Optional[int] = None, dropout: Union[float, List[float]] = 0.0, act: Optional[Union[str, Callable]] = 'relu', act_first: bool = False, act_kwargs: Optional[Dict[str, Any]] = None, norm: Optional[Union[str, Callable]] = 'batch_norm', norm_kwargs: Optional[Dict[str, Any]] = None, plain_last: bool = True, bias: Union[bool, List[bool]] = True, **kwargs)[source]

Bases: Module

A Multi-Layer Perceptron (MLP) model.

There exist two ways to instantiate an MLP:

  1. By specifying explicit channel sizes, e.g.,

    mlp = MLP([16, 32, 64, 128])
    

    creates a three-layer MLP with differently sized hidden layers.

  2. By specifying fixed hidden channel sizes over a number of layers, e.g.,

    mlp = MLP(in_channels=16, hidden_channels=32,
              out_channels=128, num_layers=3)
    

    creates a three-layer MLP with equally sized hidden layers.
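Both variants are used the same way afterwards. As a minimal sketch (input shapes chosen purely for illustration), a forward pass looks like:

    import torch
    from torch_geometric.nn import MLP

    mlp = MLP([16, 32, 64, 128])   # three layers: 16 -> 32 -> 64 -> 128
    x = torch.randn(10, 16)        # 10 samples with 16 features each
    out = mlp(x)                   # shape: [10, 128]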

Parameters:
  • channel_list (List[int] or int, optional) – List of input, intermediate and output channels such that len(channel_list) - 1 denotes the number of layers of the MLP. (default: None)

  • in_channels (int, optional) – Size of each input sample. Will override channel_list. (default: None)

  • hidden_channels (int, optional) – Size of each hidden sample. Will override channel_list. (default: None)

  • out_channels (int, optional) – Size of each output sample. Will override channel_list. (default: None)

  • num_layers (int, optional) – The number of layers. Will override channel_list. (default: None)

  • dropout (float or List[float], optional) – Dropout probability of each hidden embedding. If a list is provided, sets the dropout value per layer. (default: 0.0)

  • act (str or Callable, optional) – The non-linear activation function to use. (default: "relu")

  • act_first (bool, optional) – If set to True, activation is applied before normalization. (default: False)

  • act_kwargs (Dict[str, Any], optional) – Arguments passed to the respective activation function defined by act. (default: None)

  • norm (str or Callable, optional) – The normalization function to use. (default: "batch_norm")

  • norm_kwargs (Dict[str, Any], optional) – Arguments passed to the respective normalization function defined by norm. (default: None)

  • plain_last (bool, optional) – If set to False, will apply non-linearity, batch normalization and dropout to the last layer as well. (default: True)

  • bias (bool or List[bool], optional) – If set to False, the module will not learn additive biases. If a list is provided, sets the bias per layer. (default: True)

  • **kwargs (optional) – Additional deprecated arguments of the MLP layer.
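As a minimal sketch of how several of these options combine (the specific values below are illustrative, not defaults), one might set per-layer dropout, a custom activation, and no normalization:

    from torch_geometric.nn import MLP

    mlp = MLP(
        in_channels=16,
        hidden_channels=32,
        out_channels=128,
        num_layers=3,
        dropout=[0.2, 0.5, 0.0],             # one dropout value per layer
        act='leaky_relu',
        act_kwargs={'negative_slope': 0.1},  # forwarded to the activation
        norm=None,                           # disable normalization
        plain_last=True,                     # keep the last layer plain
    )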

forward(x: Tensor, batch: Optional[Tensor] = None, batch_size: Optional[int] = None, return_emb: Optional[bool] = None) → Tensor[source]

Forward pass.

Parameters:
  • x (torch.Tensor) – The source tensor.

  • batch (torch.Tensor, optional) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each element to a specific example. Only needs to be passed in case the underlying normalization layers require the batch information. (default: None)

  • batch_size (int, optional) – The number of examples \(B\). Automatically calculated if not given. Only needs to be passed in case the underlying normalization layers require the batch information. (default: None)

  • return_emb (bool, optional) – If set to True, will additionally return the embeddings before execution of the final output layer, i.e., the call returns a tuple of output and embeddings. (default: False)
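A minimal sketch of the forward pass (shapes are illustrative; the tuple return assumes return_emb behaves as documented above):

    import torch
    from torch_geometric.nn import MLP

    mlp = MLP([16, 32, 64], norm=None)
    x = torch.randn(10, 16)

    out = mlp(x)                        # shape: [10, 64]
    # With return_emb=True, the pre-output embeddings are returned as well:
    out, emb = mlp(x, return_emb=True)  # emb shape: [10, 32]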

reset_parameters()[source]

Resets all learnable parameters of the module.
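For example, to re-initialize an existing model in place:

    mlp = MLP([16, 32, 64])
    mlp.reset_parameters()  # re-initializes all linear and normalization layers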

property in_channels: int

Size of each input sample.

property out_channels: int

Size of each output sample.

property num_layers: int

The number of layers.
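These properties mirror the construction arguments; a short sketch:

    from torch_geometric.nn import MLP

    mlp = MLP([16, 32, 64, 128])
    mlp.in_channels   # 16
    mlp.out_channels  # 128
    mlp.num_layers    # 3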