torch_geometric.nn.kge.KGEModel

class KGEModel(num_nodes: int, num_relations: int, hidden_channels: int, sparse: bool = False)[source]

Bases: Module

An abstract base class for implementing custom KGE models.

Parameters:
  • num_nodes (int) – The number of nodes/entities in the graph.

  • num_relations (int) – The number of relations in the graph.

  • hidden_channels (int) – The hidden embedding size.

  • sparse (bool, optional) – If set to True, gradients w.r.t. the embedding matrices will be sparse. (default: False)
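
KGEModel itself is abstract and is not instantiated directly. A minimal usage sketch, assuming a concrete subclass such as TransE from torch_geometric.nn and purely hypothetical graph sizes:

    import torch
    from torch_geometric.nn import TransE  # concrete KGEModel subclass, assumed available

    # Hypothetical sizes, for illustration only:
    model = TransE(num_nodes=1000, num_relations=10, hidden_channels=50)

    # Random triplet indices standing in for a real training split:
    head_index = torch.randint(0, 1000, (128,))
    rel_type = torch.randint(0, 10, (128,))
    tail_index = torch.randint(0, 1000, (128,))

The index tensors above are reused in the method sketches below.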

reset_parameters()[source]

Resets all learnable parameters of the module.

forward(head_index: Tensor, rel_type: Tensor, tail_index: Tensor) → Tensor[source]

Returns the score for the given triplet.

Parameters:
  • head_index (torch.Tensor) – The head indices.

  • rel_type (torch.Tensor) – The relation type.

  • tail_index (torch.Tensor) – The tail indices.

Return type:

Tensor
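
As a sketch, reusing the model and index tensors from the class-level example above, calling the module computes one score per triplet:

    # One score per (head, relation, tail) triplet:
    scores = model(head_index, rel_type, tail_index)  # shape: [128]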

loss(head_index: Tensor, rel_type: Tensor, tail_index: Tensor) → Tensor[source]

Returns the loss value for the given triplet.

Parameters:
  • head_index (torch.Tensor) – The head indices.

  • rel_type (torch.Tensor) – The relation type.

  • tail_index (torch.Tensor) – The tail indices.

Return type:

Tensor
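
A sketch of a single training step, again reusing the model and index tensors from above; the optimizer choice is an assumption, not part of KGEModel:

    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

    optimizer.zero_grad()
    loss = model.loss(head_index, rel_type, tail_index)  # scalar loss for the batch
    loss.backward()
    optimizer.step()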

loader(head_index: Tensor, rel_type: Tensor, tail_index: Tensor, **kwargs) → Tensor[source]

Returns a mini-batch loader that samples a subset of triplets.

Parameters:
  • head_index (torch.Tensor) – The head indices.

  • rel_type (torch.Tensor) – The relation type.

  • tail_index (torch.Tensor) – The tail indices.

  • **kwargs (optional) – Additional keyword arguments forwarded to the returned mini-batch loader (e.g., batch_size or shuffle).

Return type:

Tensor
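
A mini-batched training sketch, assuming **kwargs accepts standard torch.utils.data.DataLoader options such as batch_size and shuffle, and reusing the model and optimizer from the sketches above:

    loader = model.loader(head_index, rel_type, tail_index,
                          batch_size=64, shuffle=True)

    for h, r, t in loader:  # each mini-batch is a (head, rel, tail) tuple of index tensors
        optimizer.zero_grad()
        loss = model.loss(h, r, t)
        loss.backward()
        optimizer.step()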

test(head_index: Tensor, rel_type: Tensor, tail_index: Tensor, batch_size: int, k: int = 10, log: bool = True) → Tuple[float, float, float][source]

Evaluates the model quality by computing Mean Rank, MRR and Hits @ \(k\) across all possible tail entities.

Parameters:
  • head_index (torch.Tensor) – The head indices.

  • rel_type (torch.Tensor) – The relation type.

  • tail_index (torch.Tensor) – The tail indices.

  • batch_size (int) – The batch size to use for evaluating.

  • k (int, optional) – The \(k\) in Hits @ \(k\). (default: 10)

  • log (bool, optional) – If set to False, will not print a progress bar to the console. (default: True)

Return type:

Tuple[float, float, float]
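
An evaluation sketch, reusing the model from above; in practice the index tensors would come from a held-out test split rather than the random tensors used here:

    mean_rank, mrr, hits_at_10 = model.test(
        head_index, rel_type, tail_index,
        batch_size=64,
        k=10,
    )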

random_sample(head_index: Tensor, rel_type: Tensor, tail_index: Tensor) → Tuple[Tensor, Tensor, Tensor][source]

Randomly samples negative triplets by either replacing the head or the tail (but not both).

Parameters:
  • head_index (torch.Tensor) – The head indices.

  • rel_type (torch.Tensor) – The relation type.

  • tail_index (torch.Tensor) – The tail indices.

Return type:

Tuple[Tensor, Tensor, Tensor]
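
A sketch of drawing negative triplets for the batch defined in the class-level example above:

    # Corrupts either the head or the tail of each positive triplet:
    neg_head, neg_rel, neg_tail = model.random_sample(head_index, rel_type, tail_index)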