torch_geometric.nn.models.RECT_L

class RECT_L(in_channels: int, hidden_channels: int, normalize: bool = True, dropout: float = 0.0)[source]

Bases: Module

The RECT model, i.e. its supervised RECT-L part, from the “Network Embedding with Completely-imbalanced Labels” paper. In particular, a GCN model is trained to reconstruct semantic class knowledge.

Note

For an example of using RECT, see examples/rect.py.

Parameters:
  • in_channels (int) – Size of each input sample.

  • hidden_channels (int) – Size of each hidden sample.

  • normalize (bool, optional) – Whether to add self-loops and compute symmetric normalization coefficients on-the-fly. (default: True)

  • dropout (float, optional) – The dropout probability. (default: 0.0)
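
A minimal usage sketch, assuming toy node features x and a COO-format edge_index; the sizes and variable names below are illustrative, not part of the API:

    import torch
    from torch_geometric.nn.models import RECT_L

    # Toy graph: 4 nodes with 16-dimensional features and 4 directed edges.
    x = torch.randn(4, 16)
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 0, 3, 2]])

    model = RECT_L(in_channels=16, hidden_channels=32)
    out = model(x, edge_index)  # reconstructed semantic features, shape [4, 16]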

forward(x: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None) → Tensor[source]
reset_parameters()[source]

Resets all learnable parameters of the module.

get_semantic_labels(x: Tensor, y: Tensor, mask: Tensor) → Tensor[source]

Replaces the original labels with their class centers.
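
A hedged training sketch, following the pattern of examples/rect.py: the labels of the masked nodes are first replaced by their class-center features, and the model is then trained to reconstruct those centers. The mask, loss choice, and toy sizes below are assumptions for illustration:

    import torch
    import torch.nn.functional as F
    from torch_geometric.nn.models import RECT_L

    x = torch.randn(4, 16)
    edge_index = torch.tensor([[0, 1, 2, 3],
                               [1, 0, 3, 2]])
    y = torch.tensor([0, 1, 0, 1])                        # original class labels
    train_mask = torch.tensor([True, True, True, False])  # labeled nodes

    model = RECT_L(in_channels=16, hidden_channels=32)

    # Semantic targets: each labeled node's label becomes its class-center feature vector.
    target = model.get_semantic_labels(x, y, train_mask)  # shape [3, 16]

    # Train the model to reconstruct the semantic targets (assumed loss choice).
    loss = F.mse_loss(model(x, edge_index)[train_mask], target)
    loss.backward()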