torch_geometric.transforms

BaseTransform

An abstract base class for writing transforms.

Compose

Composes several transforms together.

ToDevice

Performs tensor device conversion, either for all attributes of the Data object or only the ones given by attrs.

ToSparseTensor

Converts the edge_index attributes of a homogeneous or heterogeneous data object into a (transposed) torch_sparse.SparseTensor type with key adj_t.

ToUndirected

Converts a homogeneous or heterogeneous graph to an undirected graph such that \((j,i) \in \mathcal{E}\) for every edge \((i,j) \in \mathcal{E}\).

Constant

Adds a constant value to each node feature x.

Distance

Saves the Euclidean distance of linked nodes in its edge attributes.

Cartesian

Saves the relative Cartesian coordinates of linked nodes in its edge attributes.

LocalCartesian

Saves the relative Cartesian coordinates of linked nodes in its edge attributes.

Polar

Saves the polar coordinates of linked nodes in its edge attributes.

Spherical

Saves the spherical coordinates of linked nodes in its edge attributes.

PointPairFeatures

Computes the rotation-invariant Point Pair Features

OneHotDegree

Adds the node degree as one hot encodings to the node features.

TargetIndegree

Saves the globally normalized degree of target nodes

LocalDegreeProfile

Appends the Local Degree Profile (LDP) from the “A Simple yet Effective Baseline for Non-attribute Graph Classification” paper

Center

Centers node positions pos around the origin.

NormalizeRotation

Rotates all points according to the eigenvectors of the point cloud.

NormalizeScale

Centers and normalizes node positions to the interval \((-1, 1)\).

RandomTranslate

Translates node positions by randomly sampled translation values within a given interval.

RandomFlip

Flips node positions along a given axis randomly with a given probability.

LinearTransformation

Transforms node positions pos with a square transformation matrix computed offline.

RandomScale

Scales node positions by a randomly sampled factor \(s\) within a given interval, e.g., resulting in the transformation matrix

RandomRotate

Rotates node positions around a specific axis by a randomly sampled factor within a given interval.

RandomShear

Shears node positions by randomly sampled factors \(s\) within a given interval, e.g., resulting in the transformation matrix

NormalizeFeatures

Row-normalizes the attributes given in attrs to sum up to one.

AddSelfLoops

Adds self-loops to the given homogeneous or heterogeneous graph.

RemoveIsolatedNodes

Removes isolated nodes from the graph.

KNNGraph

Creates a k-NN graph based on node positions pos.

RadiusGraph

Creates edges based on node positions pos to all points within a given distance.

FaceToEdge

Converts mesh faces [3, num_faces] to edge indices [2, num_edges].

SamplePoints

Uniformly samples num points on the mesh faces according to their face area.

FixedPoints

Samples a fixed number of num points and features from a point cloud.

ToDense

Converts a sparse adjacency matrix to a dense adjacency matrix with shape [num_nodes, num_nodes, *].

TwoHop

Adds the two hop edges to the edge indices.

LineGraph

Converts a graph to its corresponding line-graph:

LaplacianLambdaMax

Computes the highest eigenvalue of the graph Laplacian given by torch_geometric.utils.get_laplacian().

GenerateMeshNormals

Generates normal vectors for each mesh node based on neighboring faces.

Delaunay

Computes the Delaunay triangulation of a set of points.

ToSLIC

Converts an image to a superpixel representation using the skimage.segmentation.slic() algorithm, resulting in a torch_geometric.data.Data object holding the centroids of superpixels in pos and their mean color in x.

GDC

Processes the graph via Graph Diffusion Convolution (GDC) from the “Diffusion Improves Graph Learning” paper.

SIGN

The Scalable Inception Graph Neural Network module (SIGN) from the “SIGN: Scalable Inception Graph Neural Networks” paper, which precomputes the fixed representations

GridSampling

Clusters points into voxels with size size.

GCNNorm

Applies the GCN normalization from the “Semi-supervised Classification with Graph Convolutional Networks” paper.

SVDFeatureReduction

Dimensionality reduction of node features via Singular Value Decomposition (SVD).

RemoveTrainingClasses

Removes classes from the node-level training set as given by data.train_mask, e.g., in order to get a zero-shot label scenario.

RandomNodeSplit

Performs a node-level random split by adding train_mask, val_mask and test_mask attributes to the Data or HeteroData object.

RandomLinkSplit

Performs an edge-level random split into training, validation and test sets of a Data or a HeteroData object.

AddMetaPaths

Adds additional edge types to a HeteroData object between the source node type and the destination node type of a given metapath, as described in the “Heterogeneous Graph Attention Networks” paper.

LargestConnectedComponents

Selects the subgraph that corresponds to the largest connected components in the graph.

VirtualNode

Appends a virtual node to the given homogeneous graph that is connected to all other nodes, as described in the “Neural Message Passing for Quantum Chemistry” paper.

class BaseTransform[source]

An abstract base class for writing transforms.

Transforms are a general way to modify and customize Data objects, either by implicitly passing them as an argument to a Dataset, or by applying them explicitly to individual Data objects.

import torch_geometric.transforms as T
from torch_geometric.datasets import TUDataset

transform = T.Compose([T.ToUndirected(), T.AddSelfLoops()])

dataset = TUDataset(path, name='MUTAG', transform=transform)
data = dataset[0]  # Implicitly transform data on every access.

data = TUDataset(path, name='MUTAG')[0]
data = transform(data)  # Explicitly transform data.
class Compose(transforms: List[Callable])[source]

Composes several transforms together.

Parameters

transforms (List[Callable]) – List of transforms to compose.

class ToDevice(device: Union[int, str], attrs: Optional[List[str]] = None, non_blocking: bool = False)[source]

Performs tensor device conversion, either for all attributes of the Data object or only the ones given by attrs.

Parameters
  • device (torch.device) – The destination device.

  • attrs (List[str], optional) – If given, will only perform tensor device conversion for the given attributes. (default: None)

  • non_blocking (bool, optional) – If set to True and tensor values are in pinned memory, the copy will be asynchronous with respect to the host. (default: False)
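
A minimal usage sketch (the toy Data object and the device choice below are illustrative only):

import torch
import torch_geometric.transforms as T
from torch_geometric.data import Data

data = Data(x=torch.randn(3, 16), edge_index=torch.tensor([[0, 1], [1, 2]]))
device = 'cuda' if torch.cuda.is_available() else 'cpu'
data = T.ToDevice(device)(data)  # all tensor attributes now live on `device`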

class ToSparseTensor(attr: Optional[str] = 'edge_weight', remove_edge_index: bool = True, fill_cache: bool = True)[source]

Converts the edge_index attributes of a homogeneous or heterogeneous data object into a (transposed) torch_sparse.SparseTensor type with key adj_t.

Note

In case of composing multiple transforms, it is best to convert the data object to a SparseTensor as late as possible, since there exist some transforms that are only able to operate on data.edge_index for now.

Parameters
  • attr (str, optional) – The name of the attribute to add as a value to the SparseTensor object (if present). (default: "edge_weight")

  • remove_edge_index (bool, optional) – If set to False, the edge_index tensor will not be removed. (default: True)

  • fill_cache (bool, optional) – If set to False, will not fill the underlying SparseTensor cache. (default: True)
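
A short sketch, assuming the optional torch_sparse package is installed (the toy graph is illustrative only):

import torch
import torch_geometric.transforms as T
from torch_geometric.data import Data

edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])
data = Data(edge_index=edge_index, num_nodes=3)
data = T.ToSparseTensor()(data)
# data.edge_index has been removed; data.adj_t now holds the transposed adjacency.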

class ToUndirected(reduce: str = 'add', merge: bool = True)[source]

Converts a homogeneous or heterogeneous graph to an undirected graph such that \((j,i) \in \mathcal{E}\) for every edge \((i,j) \in \mathcal{E}\). In heterogeneous graphs, will add “reverse” connections for all existing edge types.

Parameters
  • reduce (string, optional) – The reduce operation to use for merging edge features ("add", "mean", "min", "max", "mul"). (default: "add")

  • merge (bool, optional) – If set to False, will create reverse edge types for connections pointing to the same source and target node type. If set to True, reverse edges will be merged into the original relation. This option only takes effect on HeteroData objects. (default: True)
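
A minimal homogeneous sketch (toy graph, illustrative only), in which the single directed edge gains its reverse:

import torch
import torch_geometric.transforms as T
from torch_geometric.data import Data

data = Data(edge_index=torch.tensor([[0], [1]]), num_nodes=2)
data = T.ToUndirected()(data)
# data.edge_index now contains both directions: [[0, 1], [1, 0]]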

class Constant(value: float = 1.0, cat: bool = True)[source]

Adds a constant value to each node feature x.

Parameters
  • value (float, optional) – The value to add. (default: 1.0)

  • cat (bool, optional) – If set to False, all existing node features will be replaced. (default: True)

class Distance(norm=True, max_value=None, cat=True)[source]

Saves the Euclidean distance of linked nodes in its edge attributes.

Parameters
  • norm (bool, optional) – If set to False, the output will not be normalized to the interval \([0, 1]\). (default: True)

  • max_value (float, optional) – If set and norm=True, normalization will be performed based on this value instead of the maximum value found in the data. (default: None)

  • cat (bool, optional) – If set to False, all existing edge attributes will be replaced. (default: True)
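
A brief sketch with a hypothetical two-node graph; with norm=False the raw Euclidean distance is stored:

import torch
import torch_geometric.transforms as T
from torch_geometric.data import Data

pos = torch.tensor([[0.0, 0.0], [3.0, 4.0]])
data = Data(edge_index=torch.tensor([[0], [1]]), pos=pos)
data = T.Distance(norm=False)(data)
# data.edge_attr is [[5.0]], the Euclidean distance of the single edge.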

class Cartesian(norm=True, max_value=None, cat=True)[source]

Saves the relative Cartesian coordinates of linked nodes in its edge attributes.

Parameters
  • norm (bool, optional) – If set to False, the output will not be normalized to the interval \({[0, 1]}^D\). (default: True)

  • max_value (float, optional) – If set and norm=True, normalization will be performed based on this value instead of the maximum value found in the data. (default: None)

  • cat (bool, optional) – If set to False, all existing edge attributes will be replaced. (default: True)

class LocalCartesian(norm=True, cat=True)[source]

Saves the relative Cartesian coordinates of linked nodes in its edge attributes. Each coordinate gets neighborhood-normalized to the interval \({[0, 1]}^D\).

Parameters
  • norm (bool, optional) – If set to False, the output will not be normalized to the interval \({[0, 1]}^D\). (default: True)

  • cat (bool, optional) – If set to False, all existing edge attributes will be replaced. (default: True)

class Polar(norm=True, max_value=None, cat=True)[source]

Saves the polar coordinates of linked nodes in its edge attributes.

Parameters
  • norm (bool, optional) – If set to False, the output will not be normalized to the interval \({[0, 1]}^2\). (default: True)

  • max_value (float, optional) – If set and norm=True, normalization will be performed based on this value instead of the maximum value found in the data. (default: None)

  • cat (bool, optional) – If set to False, all existing edge attributes will be replaced. (default: True)

class Spherical(norm=True, max_value=None, cat=True)[source]

Saves the spherical coordinates of linked nodes in its edge attributes.

Parameters
  • norm (bool, optional) – If set to False, the output will not be normalized to the interval \({[0, 1]}^3\). (default: True)

  • max_value (float, optional) – If set and norm=True, normalization will be performed based on this value instead of the maximum value found in the data. (default: None)

  • cat (bool, optional) – If set to False, all existing edge attributes will be replaced. (default: True)

class PointPairFeatures(cat=True)[source]

Computes the rotation-invariant Point Pair Features

\[\left( \| \mathbf{d_{j,i}} \|, \angle(\mathbf{n}_i, \mathbf{d_{j,i}}), \angle(\mathbf{n}_j, \mathbf{d_{j,i}}), \angle(\mathbf{n}_i, \mathbf{n}_j) \right)\]

of linked nodes in its edge attributes, where \(\mathbf{d}_{j,i}\) denotes the difference vector between, and \(\mathbf{n}_i\) and \(\mathbf{n}_j\) denote the surface normals of node \(i\) and \(j\) respectively.

Parameters

cat (bool, optional) – If set to False, all existing edge attributes will be replaced. (default: True)

class OneHotDegree(max_degree, in_degree=False, cat=True)[source]

Adds the node degree as one hot encodings to the node features.

Parameters
  • max_degree (int) – Maximum degree.

  • in_degree (bool, optional) – If set to True, will compute the in-degree of nodes instead of the out-degree. (default: False)

  • cat (bool, optional) – Concat node degrees to node features instead of replacing them. (default: True)
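
A minimal sketch with a hypothetical three-node graph; since data.x is absent, the one-hot degree encoding becomes the node feature matrix:

import torch
import torch_geometric.transforms as T
from torch_geometric.data import Data

edge_index = torch.tensor([[0, 0, 1], [1, 2, 2]])  # out-degrees: 2, 1, 0
data = Data(edge_index=edge_index, num_nodes=3)
data = T.OneHotDegree(max_degree=2)(data)
# data.x has shape [3, 3], a one-hot encoding of each node's out-degree.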

class TargetIndegree(norm=True, max_value=None, cat=True)[source]

Saves the globally normalized degree of target nodes

\[\mathbf{u}(i,j) = \frac{\deg(j)}{\max_{v \in \mathcal{V}} \deg(v)}\]

in its edge attributes.

Parameters

cat (bool, optional) – Concat pseudo-coordinates to edge attributes instead of replacing them. (default: True)

class LocalDegreeProfile[source]

Appends the Local Degree Profile (LDP) from the “A Simple yet Effective Baseline for Non-attribute Graph Classification” paper

\[\mathbf{x}_i = \mathbf{x}_i \, \Vert \, (\deg(i), \min(DN(i)), \max(DN(i)), \textrm{mean}(DN(i)), \textrm{std}(DN(i)))\]

to the node features, where \(DN(i) = \{ \deg(j) \mid j \in \mathcal{N}(i) \}\).
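
A short sketch on a hypothetical path graph; since no node features exist yet, the five LDP statistics become data.x:

import torch
import torch_geometric.transforms as T
from torch_geometric.data import Data

edge_index = torch.tensor([[0, 1, 1, 2], [1, 0, 2, 1]])  # undirected path 0-1-2
data = Data(edge_index=edge_index, num_nodes=3)
data = T.LocalDegreeProfile()(data)
# data.x has shape [3, 5]: degree, min/max/mean/std of neighbor degrees per node.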

class Center[source]

Centers node positions pos around the origin.

class NormalizeRotation(max_points: int = -1, sort: bool = False)[source]

Rotates all points according to the eigenvectors of the point cloud. If the data additionally holds normals saved in data.normal, these will be rotated accordingly.

Parameters
  • max_points (int, optional) – If set to a value greater than 0, only a random subset of at most max_points points is sampled and used to compute the eigenvectors. (default: -1)

  • sort (bool, optional) – If set to True, will sort eigenvectors according to their eigenvalues. (default: False)

class NormalizeScale[source]

Centers and normalizes node positions to the interval \((-1, 1)\).

class RandomTranslate(translate)[source]

Translates node positions by randomly sampled translation values within a given interval. In contrast to other random transformations, translation is applied separately at each position.

Parameters

translate (sequence or float or int) – Maximum translation in each dimension, defining the range \((-\mathrm{translate}, +\mathrm{translate})\) to sample from. If translate is a number instead of a sequence, the same range is used for each dimension.

class RandomFlip(axis, p=0.5)[source]

Flips node positions along a given axis randomly with a given probability.

Parameters
  • axis (int) – The axis along which node positions are flipped.

  • p (float, optional) – Probability that node positions will be flipped. (default: 0.5)

class LinearTransformation(matrix: torch.Tensor)[source]

Transforms node positions pos with a square transformation matrix computed offline.

Parameters

matrix (Tensor) – Tensor with shape [D, D] where D corresponds to the dimensionality of node positions.

class RandomScale(scales)[source]

Scales node positions by a randomly sampled factor \(s\) within a given interval, e.g., resulting in the transformation matrix

\[\begin{split}\begin{bmatrix} s & 0 & 0 \\ 0 & s & 0 \\ 0 & 0 & s \\ \end{bmatrix}\end{split}\]

for three-dimensional positions.

Parameters

scales (tuple) – Scaling factor interval, e.g., (a, b), from which the scaling factor is randomly sampled such that \(a \leq \mathrm{scale} \leq b\).

class RandomRotate(degrees, axis=0)[source]

Rotates node positions around a specific axis by a randomly sampled factor within a given interval.

Parameters
  • degrees (tuple or float) – Rotation interval from which the rotation angle is sampled. If degrees is a number instead of a tuple, the interval is given by \([-\mathrm{degrees}, \mathrm{degrees}]\).

  • axis (int, optional) – The rotation axis. (default: 0)
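
For 3D point clouds, rotations around the individual axes are commonly composed; a sketch (the 15 degree interval is an arbitrary choice):

import torch_geometric.transforms as T

transform = T.Compose([
    T.RandomRotate(degrees=15, axis=0),  # sample an angle in [-15, 15] around x
    T.RandomRotate(degrees=15, axis=1),  # ... around y
    T.RandomRotate(degrees=15, axis=2),  # ... around z
])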

class RandomShear(shear)[source]

Shears node positions by randomly sampled factors \(s\) within a given interval, e.g., resulting in the transformation matrix

\[\begin{split}\begin{bmatrix} 1 & s_{xy} & s_{xz} \\ s_{yx} & 1 & s_{yz} \\ s_{zx} & s_{zy} & 1 \\ \end{bmatrix}\end{split}\]

for three-dimensional positions.

Parameters

shear (float or int) – Maximum shearing factor defining the range \((-\mathrm{shear}, +\mathrm{shear})\) to sample from.

class NormalizeFeatures(attrs: List[str] = ['x'])[source]

Row-normalizes the attributes given in attrs to sum up to one.

Parameters

attrs (List[str]) – The names of attributes to normalize. (default: ["x"])
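
A minimal sketch with non-negative toy features:

import torch
import torch_geometric.transforms as T
from torch_geometric.data import Data

data = Data(x=torch.tensor([[0.0, 1.0, 3.0], [2.0, 2.0, 0.0]]))
data = T.NormalizeFeatures()(data)
# each row of data.x now sums to one, e.g. the first row becomes [0.0, 0.25, 0.75]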

class AddSelfLoops(attr: Optional[str] = 'edge_weight', fill_value: Optional[Union[float, torch.Tensor, str]] = None)[source]

Adds self-loops to the given homogeneous or heterogeneous graph.

Parameters
  • attr (str, optional) – The name of the attribute of edge weights or multi-dimensional edge features to pass to torch_geometric.utils.add_self_loops(). (default: "edge_weight")

  • fill_value (float or Tensor or str, optional) – The way to generate edge features of self-loops (in case attr != None). If given as float or torch.Tensor, edge features of self-loops will be directly given by fill_value. If given as str, edge features of self-loops are computed by aggregating all features of edges that point to the specific node, according to a reduce operation. ("add", "mean", "min", "max", "mul"). (default: 1.)

class RemoveIsolatedNodes[source]

Removes isolated nodes from the graph.

class KNNGraph(k=6, loop=False, force_undirected=False, flow='source_to_target')[source]

Creates a k-NN graph based on node positions pos.

Parameters
  • k (int, optional) – The number of neighbors. (default: 6)

  • loop (bool, optional) – If True, the graph will contain self-loops. (default: False)

  • force_undirected (bool, optional) – If set to True, new edges will be undirected. (default: False)

  • flow (string, optional) – The flow direction when used in combination with message passing ("source_to_target" or "target_to_source"). If set to "source_to_target", every target node will have exactly \(k\) source nodes pointing to it. (default: "source_to_target")
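
A short sketch, assuming the optional torch_cluster package is installed (the random point cloud is illustrative only):

import torch
import torch_geometric.transforms as T
from torch_geometric.data import Data

data = Data(pos=torch.rand(100, 3))
data = T.KNNGraph(k=6)(data)
# data.edge_index now connects every node to its 6 nearest neighbors in pos.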

class RadiusGraph(r: float, loop: bool = False, max_num_neighbors: int = 32, flow: str = 'source_to_target')[source]

Creates edges based on node positions pos to all points within a given distance.

Parameters
  • r (float) – The distance.

  • loop (bool, optional) – If True, the graph will contain self-loops. (default: False)

  • max_num_neighbors (int, optional) – The maximum number of neighbors to return for each node. This flag is only needed for CUDA tensors. (default: 32)

  • flow (string, optional) – The flow direction when using in combination with message passing ("source_to_target" or "target_to_source"). (default: "source_to_target")

class FaceToEdge(remove_faces=True)[source]

Converts mesh faces [3, num_faces] to edge indices [2, num_edges].

Parameters

remove_faces (bool, optional) – If set to False, the face tensor will not be removed. (default: True)

class SamplePoints(num, remove_faces=True, include_normals=False)[source]

Uniformly samples num points on the mesh faces according to their face area.

Parameters
  • num (int) – The number of points to sample.

  • remove_faces (bool, optional) – If set to False, the face tensor will not be removed. (default: True)

  • include_normals (bool, optional) – If set to True, then compute normals for each sampled point. (default: False)
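
A minimal sketch on a hypothetical single-triangle mesh:

import torch
import torch_geometric.transforms as T
from torch_geometric.data import Data

pos = torch.tensor([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
face = torch.tensor([[0], [1], [2]])  # one triangular face, shape [3, 1]
data = T.SamplePoints(num=1024)(Data(pos=pos, face=face))
# data.pos now holds 1024 points sampled on the triangle; data.face has been removed.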

class FixedPoints(num, replace=True, allow_duplicates=False)[source]

Samples a fixed number of num points and features from a point cloud.

Parameters
  • num (int) – The number of points to sample.

  • replace (bool, optional) – If set to False, samples points without replacement. (default: True)

  • allow_duplicates (bool, optional) – In case replace is False and num is greater than the number of points, this option determines whether to add duplicated nodes to the output points or not. In case allow_duplicates is False, the number of output points might be smaller than num. In case allow_duplicates is True, the number of duplicated points is kept to a minimum. (default: False)

class ToDense(num_nodes=None)[source]

Converts a sparse adjacency matrix to a dense adjacency matrix with shape [num_nodes, num_nodes, *].

Parameters

num_nodes (int) – The number of nodes. If set to None, the number of nodes will get automatically inferred. (default: None)

class TwoHop[source]

Adds the two hop edges to the edge indices.

class LineGraph(force_directed: bool = False)[source]

Converts a graph to its corresponding line-graph:

\[ \begin{align}\begin{aligned}L(\mathcal{G}) &= (\mathcal{V}^{\prime}, \mathcal{E}^{\prime})\\\mathcal{V}^{\prime} &= \mathcal{E}\\\mathcal{E}^{\prime} &= \{ (e_1, e_2) : e_1 \cap e_2 \neq \emptyset \}\end{aligned}\end{align} \]

Line-graph node indices are equal to indices in the original graph’s coalesced edge_index. For undirected graphs, the maximum line-graph node index is (data.edge_index.size(1) // 2) - 1.

New node features are given by old edge attributes. For undirected graphs, edge attributes for reciprocal edges (row, col) and (col, row) get summed together.

Parameters

force_directed (bool, optional) – If set to True, the graph will be always treated as a directed graph. (default: False)

class LaplacianLambdaMax(normalization=None, is_undirected=False)[source]

Computes the highest eigenvalue of the graph Laplacian given by torch_geometric.utils.get_laplacian().

Parameters
  • normalization (str, optional) –

    The normalization scheme for the graph Laplacian (default: None):

    1. None: No normalization \(\mathbf{L} = \mathbf{D} - \mathbf{A}\)

    2. "sym": Symmetric normalization \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}\)

    3. "rw": Random-walk normalization \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1} \mathbf{A}\)

  • is_undirected (bool, optional) – If set to True, this transform expects undirected graphs as input, and can hence speed up the computation of the largest eigenvalue. (default: False)

class GenerateMeshNormals[source]

Generates normal vectors for each mesh node based on neighboring faces.

class Delaunay[source]

Computes the Delaunay triangulation of a set of points.

class ToSLIC(add_seg=False, add_img=False, **kwargs)[source]

Converts an image to a superpixel representation using the skimage.segmentation.slic() algorithm, resulting in a torch_geometric.data.Data object holding the centroids of superpixels in pos and their mean color in x.

This transform can be used with any torchvision dataset.

Example:

from torchvision.datasets import MNIST
import torchvision.transforms as T
from torch_geometric.transforms import ToSLIC

transform = T.Compose([T.ToTensor(), ToSLIC(n_segments=75)])
dataset = MNIST('/tmp/MNIST', download=True, transform=transform)
Parameters
  • add_seg (bool, optional) – If set to True, will add the segmentation result to the data object. (default: False)

  • add_img (bool, optional) – If set to True, will add the input image to the data object. (default: False)

  • **kwargs (optional) – Arguments to adjust the output of the SLIC algorithm. See the SLIC documentation for an overview.

class GDC(self_loop_weight=1, normalization_in='sym', normalization_out='col', diffusion_kwargs={'alpha': 0.15, 'method': 'ppr'}, sparsification_kwargs={'avg_degree': 64, 'method': 'threshold'}, exact=True)[source]

Processes the graph via Graph Diffusion Convolution (GDC) from the “Diffusion Improves Graph Learning” paper.

Note

The paper offers additional advice on how to choose the hyperparameters. For an example of using GCN with GDC, see examples/gcn.py.

Parameters
  • self_loop_weight (float, optional) – Weight of the added self-loop. Set to None to add no self-loops. (default: 1)

  • normalization_in (str, optional) – Normalization of the transition matrix on the original (input) graph. Possible values: "sym", "col", and "row". See GDC.transition_matrix() for details. (default: "sym")

  • normalization_out (str, optional) – Normalization of the transition matrix on the transformed GDC (output) graph. Possible values: "sym", "col", "row", and None. See GDC.transition_matrix() for details. (default: "col")

  • diffusion_kwargs (dict, optional) – Dictionary containing the parameters for diffusion. method specifies the diffusion method ("ppr", "heat" or "coeff"). Each diffusion method requires different additional parameters. See GDC.diffusion_matrix_exact() or GDC.diffusion_matrix_approx() for details. (default: dict(method='ppr', alpha=0.15))

  • sparsification_kwargs (dict, optional) – Dictionary containing the parameters for sparsification. method specifies the sparsification method ("threshold" or "topk"). Each sparsification method requires different additional parameters. See GDC.sparsify_dense() for details. (default: dict(method='threshold', avg_degree=64))

  • exact (bool, optional) – Whether to exactly calculate the diffusion matrix. Note that the exact variants are not scalable. They densify the adjacency matrix and calculate either its inverse or its matrix exponential. However, the approximate variants do not support edge weights and currently only personalized PageRank and sparsification by threshold are implemented as fast, approximate versions. (default: True)

Return type

torch_geometric.data.Data
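
A usage sketch mirroring the defaults above (the hyperparameter values shown are the documented defaults, not tuned recommendations):

import torch_geometric.transforms as T

transform = T.GDC(
    self_loop_weight=1,
    normalization_in='sym',
    normalization_out='col',
    diffusion_kwargs=dict(method='ppr', alpha=0.15),
    sparsification_kwargs=dict(method='threshold', avg_degree=64),
    exact=True,
)
# data = transform(data)  # diffuses and re-sparsifies the graph of a Data object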

transition_matrix(edge_index, edge_weight, num_nodes, normalization)[source]

Calculate the transition matrix of the given graph according to the specified normalization scheme.

Parameters
  • edge_index (LongTensor) – The edge indices.

  • edge_weight (Tensor) – One-dimensional edge weights.

  • num_nodes (int) – Number of nodes.

  • normalization (str) –

    Normalization scheme:

    1. "sym": Symmetric normalization \(\mathbf{T} = \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}\).

    2. "col": Column-wise normalization \(\mathbf{T} = \mathbf{A} \mathbf{D}^{-1}\).

    3. "row": Row-wise normalization \(\mathbf{T} = \mathbf{D}^{-1} \mathbf{A}\).

    4. None: No normalization.

Return type

(LongTensor, Tensor)

diffusion_matrix_exact(edge_index, edge_weight, num_nodes, method, **kwargs)[source]

Calculate the (dense) diffusion on a given sparse graph. Note that these exact variants are not scalable. They densify the adjacency matrix and calculate either its inverse or its matrix exponential.

Parameters
  • edge_index (LongTensor) – The edge indices.

  • edge_weight (Tensor) – One-dimensional edge weights.

  • num_nodes (int) – Number of nodes.

  • method (str) –

    Diffusion method:

    1. "ppr": Use personalized PageRank as diffusion. Additionally expects the parameter:

      • alpha (float) - Return probability in PPR. Commonly lies in [0.05, 0.2].

    2. "heat": Use heat kernel diffusion. Additionally expects the parameter:

      • t (float) - Time of diffusion. Commonly lies in [2, 10].

    3. "coeff": Freely choose diffusion coefficients. Additionally expects the parameter:

      • coeffs (List[float]) - List of coefficients \(\theta_k\) for each power of the transition matrix (starting at 0).

Return type

(Tensor)

diffusion_matrix_approx(edge_index, edge_weight, num_nodes, normalization, method, **kwargs)[source]

Calculate the approximate, sparse diffusion on a given sparse graph.

Parameters
  • edge_index (LongTensor) – The edge indices.

  • edge_weight (Tensor) – One-dimensional edge weights.

  • num_nodes (int) – Number of nodes.

  • normalization (str) – Transition matrix normalization scheme ("sym", "row", or "col"). See GDC.transition_matrix() for details.

  • method (str) –

    Diffusion method:

    1. "ppr": Use personalized PageRank as diffusion. Additionally expects the parameters:

      • alpha (float) - Return probability in PPR. Commonly lies in [0.05, 0.2].

      • eps (float) - Threshold for PPR calculation stopping criterion (edge_weight >= eps * out_degree). Recommended default: 1e-4.

Return type

(LongTensor, Tensor)

sparsify_dense(matrix, method, **kwargs)[source]

Sparsifies the given dense matrix.

Parameters
  • matrix (Tensor) – Matrix to sparsify.

  • method (str) –

    Method of sparsification. Options:

    1. "threshold": Remove all edges with weights smaller than eps. Additionally expects one of these parameters:

      • eps (float) - Threshold to bound edges at.

      • avg_degree (int) - If eps is not given, it is instead computed as the value required to achieve the given avg_degree.

    2. "topk": Keep edges with top k edge weights per node (column). Additionally expects the following parameters:

      • k (int) - Specifies the number of edges to keep.

      • dim (int) - The axis along which to take the top k.

Return type

(LongTensor, Tensor)

sparsify_sparse(edge_index, edge_weight, num_nodes, method, **kwargs)[source]

Sparsifies a given sparse graph further.

Parameters
  • edge_index (LongTensor) – The edge indices.

  • edge_weight (Tensor) – One-dimensional edge weights.

  • num_nodes (int) – Number of nodes.

  • method (str) –

    Method of sparsification:

    1. "threshold": Remove all edges with weights smaller than eps. Additionally expects one of these parameters:

      • eps (float) - Threshold to bound edges at.

      • avg_degree (int) - If eps is not given, it is instead computed as the value required to achieve the given avg_degree.

Return type

(LongTensor, Tensor)

class SIGN(K)[source]

The Scalable Inception Graph Neural Network module (SIGN) from the “SIGN: Scalable Inception Graph Neural Networks” paper, which precomputes the fixed representations

\[\mathbf{X}^{(i)} = {\left( \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2} \right)}^i \mathbf{X}\]

for \(i \in \{ 1, \ldots, K \}\) and saves them in data.x1, data.x2, …

Note

Since intermediate node representations are pre-computed, this operator is able to scale well to large graphs via classic mini-batching. For an example of using SIGN, see examples/sign.py.

Parameters

K (int) – The number of hops/layers.
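
A brief sketch, assuming a Data object named data with x and edge_index attributes and the optional torch_sparse package installed:

import torch_geometric.transforms as T

transform = T.SIGN(K=3)
# data = transform(data)  # adds data.x1, data.x2 and data.x3 alongside data.x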

class GridSampling(size: Union[float, List[float], torch.Tensor], start: Optional[Union[float, List[float], torch.Tensor]] = None, end: Optional[Union[float, List[float], torch.Tensor]] = None)[source]

Clusters points into voxels with size size. Each cluster returned is a new point based on the mean of all points inside the given cluster.

Parameters
  • size (float or [float] or Tensor) – Size of a voxel (in each dimension).

  • start (float or [float] or Tensor, optional) – Start coordinates of the grid (in each dimension). If set to None, will be set to the minimum coordinates found in data.pos. (default: None)

  • end (float or [float] or Tensor, optional) – End coordinates of the grid (in each dimension). If set to None, will be set to the maximum coordinates found in data.pos. (default: None)

class GCNNorm(add_self_loops: bool = True)[source]

Applies the GCN normalization from the “Semi-supervised Classification with Graph Convolutional Networks” paper.

\[\mathbf{\hat{A}} = \mathbf{\hat{D}}^{-1/2} (\mathbf{A} + \mathbf{I}) \mathbf{\hat{D}}^{-1/2}\]

where \(\hat{D}_{ii} = 1 + \sum_{j} \mathbf{A}_{ij}\) denotes the degree matrix of the graph with inserted self-loops.

class SVDFeatureReduction(out_channels)[source]

Dimensionality reduction of node features via Singular Value Decomposition (SVD).

Parameters

out_channels (int) – The dimensionality of node features after reduction.

class RemoveTrainingClasses(classes: List[int])[source]

Removes classes from the node-level training set as given by data.train_mask, e.g., in order to get a zero-shot label scenario.

Parameters

classes (List[int]) – The classes to remove from the training set.

class RandomNodeSplit(split: str = 'train_rest', num_splits: int = 1, num_train_per_class: int = 20, num_val: Union[int, float] = 500, num_test: Union[int, float] = 1000, key: Optional[str] = 'y')[source]

Performs a node-level random split by adding train_mask, val_mask and test_mask attributes to the Data or HeteroData object.

Parameters
  • split (string) –

    The type of dataset split ("train_rest", "test_rest", "random"). If set to "train_rest", all nodes except those in the validation and test sets will be used for training (as in the “FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling” paper). If set to "test_rest", all nodes except those in the training and validation sets will be used for test (as in the “Pitfalls of Graph Neural Network Evaluation” paper). If set to "random", train, validation, and test sets will be randomly generated, according to num_train_per_class, num_val and num_test (as in the “Semi-supervised Classification with Graph Convolutional Networks” paper). (default: "train_rest")

  • num_splits (int, optional) – The number of splits to add. If bigger than 1, the shape of masks will be [num_nodes, num_splits], and [num_nodes] otherwise. (default: 1)

  • num_train_per_class (int, optional) – The number of training samples per class in case of "test_rest" and "random" split. (default: 20)

  • num_val (int or float, optional) – The number of validation samples. If float, it represents the ratio of samples to include in the validation set. (default: 500)

  • num_test (int or float, optional) – The number of test samples in case of "train_rest" and "random" split. If float, it represents the ratio of samples to include in the test set. (default: 1000)

  • key (str, optional) – The name of the attribute holding ground-truth labels. By default, will only add node-level splits for node-level storages in which key is present. (default: "y").
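
A usage sketch with the default "train_rest" strategy (the split sizes are illustrative; data is assumed to carry node-level labels under y):

import torch_geometric.transforms as T

transform = T.RandomNodeSplit(split='train_rest', num_val=500, num_test=1000)
# data = transform(data)  # adds data.train_mask, data.val_mask and data.test_mask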

class RandomLinkSplit(num_val: Union[int, float] = 0.1, num_test: Union[int, float] = 0.2, is_undirected: bool = False, key: str = 'edge_label', split_labels: bool = False, add_negative_train_samples: bool = True, neg_sampling_ratio: float = 1.0, disjoint_train_ratio: Union[int, float] = 0.0, edge_types: Optional[Union[Tuple[str, str, str], List[Tuple[str, str, str]]]] = None, rev_edge_types: Optional[Union[Tuple[str, str, str], List[Tuple[str, str, str]]]] = None)[source]

Performs an edge-level random split into training, validation and test sets of a Data or a HeteroData object. The split is performed such that the training split does not include edges in validation and test splits; and the validation split does not include edges in the test split.

from torch_geometric.transforms import RandomLinkSplit

transform = RandomLinkSplit(is_undirected=True)
train_data, val_data, test_data = transform(data)
Parameters
  • num_val (int or float, optional) – The number of validation edges. If set to a floating-point value in \([0, 1]\), it represents the ratio of edges to include in the validation set. (default: 0.1)

  • num_test (int or float, optional) – The number of test edges. If set to a floating-point value in \([0, 1]\), it represents the ratio of edges to include in the test set. (default: 0.2)

  • is_undirected (bool) – If set to True, the graph is assumed to be undirected, and positive and negative samples will not leak (reverse) edge connectivity across different splits. (default: False)

  • key (str, optional) – The name of the attribute holding ground-truth labels. If data[key] does not exist, it will be automatically created and represents a binary classification task (1 = edge, 0 = no edge). If data[key] exists, it has to be a categorical label from 0 to num_classes - 1. After negative sampling, label 0 represents negative edges, and labels 1 to num_classes represent the labels of positive edges. (default: "edge_label")

  • split_labels (bool, optional) – If set to True, will split positive and negative labels and save them in distinct attributes "pos_edge_label" and "neg_edge_label", respectively. (default: False)

  • add_negative_train_samples (bool, optional) – Whether to add negative training samples for link prediction. If the model already performs negative sampling, then the option should be set to False. Otherwise, the added negative samples will be the same across training iterations unless negative sampling is performed again. (default: True)

  • neg_sampling_ratio (float, optional) – The ratio of sampled negative edges to the number of positive edges. (default: 1.0)

  • disjoint_train_ratio (int or float, optional) – If set to a value greater than 0.0, training edges will not be shared for message passing and supervision. Instead, disjoint_train_ratio edges are used as ground-truth labels for supervision during training. (default: 0.0)

  • edge_types (Tuple[EdgeType] or List[EdgeType], optional) – The edge types used for performing edge-level splitting in case of operating on HeteroData objects. (default: None)

  • rev_edge_types (Tuple[EdgeType] or List[Tuple[EdgeType]], optional) – The reverse edge types of edge_types in case of operating on HeteroData objects. This will ensure that edges of the reverse direction will be split accordingly to prevent any data leakage. Can be None in case no reverse connection exists. (default: None)

class AddMetaPaths(metapaths: List[List[Tuple[str, str, str]]], drop_orig_edges: bool = False, keep_same_node_type: bool = False, drop_unconnected_nodes: bool = False)[source]

Adds additional edge types to a HeteroData object between the source node type and the destination node type of a given metapath, as described in the “Heterogeneous Graph Attention Networks” paper. Meta-path based neighbors can exploit different aspects of structure information in heterogeneous graphs. Formally, a metapath is a path of the form

\[\mathcal{V}_1 \xrightarrow{R_1} \mathcal{V}_2 \xrightarrow{R_2} \ldots \xrightarrow{R_{\ell-1}} \mathcal{V}_{\ell}\]

in which \(\mathcal{V}_i\) represents node types, and \(R_j\) represents the edge type connecting two node types. The added edge type is given by the sequential multiplication of adjacency matrices along the metapath, and is added to the HeteroData object as edge type (src_node_type, "metapath_*", dst_node_type), where src_node_type and dst_node_type denote \(\mathcal{V}_1\) and \(\mathcal{V}_{\ell}\), respectively.

In addition, a metapath_dict object is added to the HeteroData object which maps the metapath-based edge type to its original metapath.

from torch_geometric.datasets import DBLP
from torch_geometric.data import HeteroData
from torch_geometric.transforms import AddMetaPaths

data = DBLP(root)[0]
# 4 node types: "paper", "author", "conference", and "term"
# 6 edge types: ("paper","author"), ("author", "paper"),
#               ("paper, "term"), ("paper", "conference"),
#               ("term, "paper"), ("conference", "paper")

# Add two metapaths:
# 1. From "paper" to "paper" through "conference"
# 2. From "author" to "conference" through "paper"
metapaths = [[("paper", "conference"), ("conference", "paper")],
             [("author", "paper"), ("paper", "conference")]]
data = AddMetaPaths(metapaths)(data)

print(data.edge_types)
>>> [("author", "to", "paper"), ("paper", "to", "author"),
     ("paper", "to", "term"), ("paper", "to", "conference"),
     ("term", "to", "paper"), ("conference", "to", "paper"),
     ("paper", "metapath_0", "paper"),
     ("author", "metapath_1", "conference")]

print(data.metapath_dict)
>>> {("paper", "metapath_0", "paper"): [("paper", "conference"),
                                        ("conference", "paper")],
     ("author", "metapath_1", "conference"): [("author", "paper"),
                                              ("paper", "conference")]}
Parameters
  • metapaths (List[List[Tuple[str, str, str]]]) – The metapaths described by a list of lists of (src_node_type, rel_type, dst_node_type) tuples.

  • drop_orig_edges (bool, optional) – If set to True, existing edge types will be dropped. (default: False)

  • keep_same_node_type (bool, optional) – If set to True, existing edge types between the same node type are not dropped even in case drop_orig_edges is set to True. (default: False)

  • drop_unconnected_nodes (bool, optional) – If set to True, will drop node types not connected by any edge type. (default: False)

class LargestConnectedComponents(num_components: int = 1)[source]

Selects the subgraph that corresponds to the largest connected components in the graph.

Parameters

num_components (int, optional) – Number of largest components to keep. (default: 1)

class VirtualNode[source]

Appends a virtual node to the given homogeneous graph that is connected to all other nodes, as described in the “Neural Message Passing for Quantum Chemistry” paper. The virtual node serves as a global scratch space that each node both reads from and writes to in every step of message passing. This allows information to travel long distances during the propagation phase.

Node and edge features of the virtual node are added as zero-filled input features. Furthermore, special edge types will be added both for in-coming and out-going information to and from the virtual node.
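
A minimal usage sketch, assuming a homogeneous Data object named data:

import torch_geometric.transforms as T

transform = T.VirtualNode()
# data = transform(data)  # adds one extra node connected to every original node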