torch_geometric.nn.conv.RGCNConv
- class RGCNConv(in_channels: Union[int, Tuple[int, int]], out_channels: int, num_relations: int, num_bases: Optional[int] = None, num_blocks: Optional[int] = None, aggr: str = 'mean', root_weight: bool = True, is_sorted: bool = False, bias: bool = True, **kwargs)[source]
Bases: MessagePassing
The relational graph convolutional operator from the “Modeling Relational Data with Graph Convolutional Networks” paper.
\[\mathbf{x}^{\prime}_i = \mathbf{\Theta}_{\textrm{root}} \cdot \mathbf{x}_i + \sum_{r \in \mathcal{R}} \sum_{j \in \mathcal{N}_r(i)} \frac{1}{|\mathcal{N}_r(i)|} \mathbf{\Theta}_r \cdot \mathbf{x}_j,\]

where \(\mathcal{R}\) denotes the set of relations, i.e. edge types. The edge type needs to be a one-dimensional torch.long tensor which stores a relation identifier \(\in \{ 0, \ldots, |\mathcal{R}| - 1\}\) for each edge.

Note

This implementation is as memory-efficient as possible by iterating over each individual relation type. It may therefore result in low GPU utilization when the graph has a large number of relations. As an alternative, FastRGCNConv does not iterate over each individual type, but may consume a large amount of memory to compensate. We advise checking out both implementations to see which one fits your needs.

Note

RGCNConv can use dynamic shapes, which means that the shape of the interim tensors can be determined at runtime. If your device does not support dynamic shapes, use FastRGCNConv instead.
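The per-relation iteration that the note above describes can be sketched with plain torch. All tensors and sizes below are hypothetical illustrations, not part of the library's API:

```python
import torch

# Toy sketch of the R-GCN update rule, iterating per relation type.
num_nodes, num_relations, in_dim, out_dim = 4, 2, 3, 2
x = torch.randn(num_nodes, in_dim)
edge_index = torch.tensor([[0, 1, 2, 3],    # source nodes j
                           [1, 2, 3, 0]])   # target nodes i
edge_type = torch.tensor([0, 0, 1, 1])      # relation id per edge

theta_root = torch.randn(in_dim, out_dim)
theta_rel = torch.randn(num_relations, in_dim, out_dim)

# x'_i = Theta_root x_i + sum_r sum_{j in N_r(i)} (1/|N_r(i)|) Theta_r x_j
out = x @ theta_root
for r in range(num_relations):              # one pass per relation type
    mask = edge_type == r
    src, dst = edge_index[0, mask], edge_index[1, mask]
    msg = x[src] @ theta_rel[r]             # transform neighbor features
    agg = torch.zeros(num_nodes, out_dim).index_add_(0, dst, msg)
    deg = torch.zeros(num_nodes).index_add_(0, dst, torch.ones(dst.numel()))
    out = out + agg / deg.clamp(min=1).unsqueeze(-1)   # mean over N_r(i)
```

Only the edges of one relation are materialized at a time, which is what keeps the memory footprint small at the cost of a Python-level loop over relations.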
- Parameters:
  - in_channels (int or tuple) – Size of each input sample. A tuple corresponds to the sizes of source and target dimensionalities. In case no input features are given, this argument should correspond to the number of nodes in your graph.
  - out_channels (int) – Size of each output sample.
  - num_relations (int) – Number of relations.
  - num_bases (int, optional) – If set, this layer will use the basis-decomposition regularization scheme, where num_bases denotes the number of bases to use. (default: None)
  - num_blocks (int, optional) – If set, this layer will use the block-diagonal-decomposition regularization scheme, where num_blocks denotes the number of blocks to use. (default: None)
  - aggr (str, optional) – The aggregation scheme to use ("add", "mean", "max"). (default: "mean")
  - root_weight (bool, optional) – If set to False, the layer will not add transformed root node features to the output. (default: True)
  - is_sorted (bool, optional) – If set to True, assumes that edge_index is sorted by edge_type. This avoids internal re-sorting of the data and can improve runtime and memory efficiency. (default: False)
  - bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
  - **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
- forward(x: Union[Tensor, None, Tuple[Optional[Tensor], Tensor]], edge_index: Union[Tensor, SparseTensor], edge_type: Optional[Tensor] = None)[source]
Runs the forward pass of the module.
- Parameters:
  - x (torch.Tensor or tuple, optional) – The input node features. Can be either a [num_nodes, in_channels] node feature matrix, or an optional one-dimensional node index tensor (in which case input features are treated as trainable node embeddings). Furthermore, x can be of type tuple denoting source and destination node features.
  - edge_index (torch.Tensor or SparseTensor) – The edge indices.
  - edge_type (torch.Tensor, optional) – The one-dimensional relation type/index for each edge in edge_index. Should only be None in case edge_index is of type torch_sparse.SparseTensor. (default: None)