class AttentiveFP(in_channels: int, hidden_channels: int, out_channels: int, edge_dim: int, num_layers: int, num_timesteps: int, dropout: float = 0.0)[source]

Bases: Module

The Attentive FP model for molecular representation learning from the “Pushing the Boundaries of Molecular Representation for Drug Discovery with the Graph Attention Mechanism” paper, based on graph attention mechanisms.

Parameters:

  • in_channels (int) – Size of each input sample.

  • hidden_channels (int) – Hidden node feature dimensionality.

  • out_channels (int) – Size of each output sample.

  • edge_dim (int) – Edge feature dimensionality.

  • num_layers (int) – Number of GNN layers.

  • num_timesteps (int) – Number of iterative refinement steps for global readout.

  • dropout (float, optional) – Dropout probability. (default: 0.0)

forward(x: Tensor, edge_index: Tensor, edge_attr: Tensor, batch: Tensor) → Tensor[source]

Runs the forward pass, returning one graph-level embedding of size out_channels per molecule in the batch.

reset_parameters()[source]

Resets all learnable parameters of the module.