torch_geometric.contrib
is a staging area for early-stage experimental code.
Modules might be moved to the main library in the future.
Warning
This module contains experimental code, which is not guaranteed to be stable.
Convolutional Layers
Models
The Projected Randomized Block Coordinate Descent (PRBCD) adversarial attack from the Robustness of Graph Neural Networks at Scale paper. 

The Greedy Randomized Block Coordinate Descent (GRBCD) adversarial attack from the Robustness of Graph Neural Networks at Scale paper. 
 class PRBCDAttack(model: Module, block_size: int, epochs: int = 125, epochs_resampling: int = 100, loss: Optional[Union[str, Callable[[Tensor, Tensor, Optional[Tensor]], Tensor]]] = 'prob_margin', metric: Optional[Union[str, Callable[[Tensor, Tensor, Optional[Tensor]], Tensor]]] = None, lr: float = 1000, is_undirected: bool = True, log: bool = True, **kwargs)[source]
The Projected Randomized Block Coordinate Descent (PRBCD) adversarial attack from the Robustness of Graph Neural Networks at Scale paper.
This attack uses an efficient, gradient-based approach that (during the attack) relaxes the discrete entries in the adjacency matrix from \(\{0, 1\}\) to \([0, 1]\) and solely perturbs the adjacency matrix (no feature perturbations). Thus, this attack supports all models that can handle weighted graphs and that are differentiable w.r.t. these edge weights, e.g., GCNConv or GraphConv. For non-differentiable models you might need modifications, e.g., see the example for GATConv.
The memory overhead is driven by the additional edges (at most block_size). For scalability reasons, the block is drawn with replacement and then the index is made unique. Thus, the actual block size is typically slightly smaller than specified.
This attack can be used for both global and local attacks as well as test-time attacks (evasion) and training-time attacks (poisoning). Please see the provided examples.
This attack is designed with a focus on node- or graph-classification; to adapt it to other tasks, you most likely only need to provide an appropriate loss and model. However, batching is currently not supported out of the box (the sampling needs to be adapted).
Note
For examples of using the PRBCD Attack, see examples/contrib/rbcd_attack.py for a test-time attack (evasion) or examples/contrib/rbcd_attack_poisoning.py for a training-time (poisoning) attack.
 Parameters
model (torch.nn.Module) – The GNN module to assess.
block_size (int) – Number of randomly selected elements in the adjacency matrix to consider.
epochs (int, optional) – Number of epochs (aborts early if mode='greedy' and the budget is satisfied). (default: 125)
epochs_resampling (int, optional) – Number of epochs to resample the random block. (default: 100)
loss (str or callable, optional) – A loss to quantify the “strength” of an attack. Note that this function must match the output format of model. By default, it is assumed that the task is classification and that the model returns raw predictions (i.e., no output activation) or uses log-softmax. Moreover, the number of predictions should match the number of labels passed to attack. Either pass a callable or one of: 'masked', 'margin', 'prob_margin', 'tanh_margin'. (default: 'prob_margin')
metric (callable, optional) – Second (potentially non-differentiable) loss for monitoring or early stopping (if mode='greedy'). (default: same as loss)
lr (float, optional) – Learning rate for updating edge weights. Additionally, it is heuristically corrected for block_size, budget (see attack), and graph size. (default: 1_000)
is_undirected (bool, optional) – If True, the graph is assumed to be undirected. (default: True)
log (bool, optional) – If set to False, will not log any learning progress. (default: True)
 coeffs = {'eps': 1e-07, 'max_final_samples': 20, 'max_trials_sampling': 20, 'with_early_stopping': True}
 attack(x: Tensor, edge_index: Tensor, labels: Tensor, budget: int, idx_attack: Optional[Tensor] = None, **kwargs) Tuple[Tensor, Tensor] [source]
Attack the predictions for the provided model and graph.
A subset of predictions may be specified with idx_attack. The attack is allowed to flip (i.e., add or delete) budget edges and will return the strongest perturbation it can find. It returns both the resulting perturbed edge_index as well as the perturbations.
 Parameters
x (torch.Tensor) – The node feature matrix.
edge_index (torch.Tensor) – The edge indices.
labels (torch.Tensor) – The labels.
budget (int) – The number of allowed perturbations (i.e., the number of edges that are flipped at most).
idx_attack (torch.Tensor, optional) – Filter for predictions/labels. Shape and type must match such that it can index labels and the model’s predictions.
**kwargs (optional) – Additional arguments passed to the GNN module.
 Return type
Tuple[Tensor, Tensor]
 class GRBCDAttack(model: Module, block_size: int, epochs: int = 125, loss: Optional[Union[str, Callable[[Tensor, Tensor, Optional[Tensor]], Tensor]]] = 'masked', is_undirected: bool = True, log: bool = True, **kwargs)[source]
The Greedy Randomized Block Coordinate Descent (GRBCD) adversarial attack from the Robustness of Graph Neural Networks at Scale paper.
GRBCD shares most of the properties and requirements with PRBCDAttack. It also uses an efficient, gradient-based approach. However, it greedily flips edges based on the gradient towards the adjacency matrix.
Note
For examples of using the GRBCD Attack, see examples/contrib/rbcd_attack.py for a test-time attack (evasion).
 Parameters
model (torch.nn.Module) – The GNN module to assess.
block_size (int) – Number of randomly selected elements in the adjacency matrix to consider.
epochs (int, optional) – Number of epochs (aborts early if mode='greedy' and the budget is satisfied). (default: 125)
loss (str or callable, optional) – A loss to quantify the “strength” of an attack. Note that this function must match the output format of model. By default, it is assumed that the task is classification and that the model returns raw predictions (i.e., no output activation) or uses log-softmax. Moreover, the number of predictions should match the number of labels passed to attack. Either pass a callable or one of: 'masked', 'margin', 'prob_margin', 'tanh_margin'. (default: 'masked')
is_undirected (bool, optional) – If True, the graph is assumed to be undirected. (default: True)
log (bool, optional) – If set to False, will not log any learning progress. (default: True)
 coeffs = {'eps': 1e-07, 'max_trials_sampling': 20}
Datasets
Transforms
Explainer
The GraphMaskExplainer model from the "Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking" paper for identifying layer-wise compact subgraph structures and node features that play a crucial role in the predictions made by a GNN. 

The PGMExplainer model from the "PGMExplainer: Probabilistic Graphical Model Explanations for Graph Neural Networks" paper. 
 class GraphMaskExplainer(num_layers: int, epochs: int = 100, lr: float = 0.01, penalty_scaling: int = 5, lambda_optimizer_lr: float = 0.01, init_lambda: float = 0.55, allowance: float = 0.03, allow_multiple_explanations: bool = False, log: bool = True, **kwargs)[source]
The GraphMaskExplainer model from the “Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking” paper for identifying layer-wise compact subgraph structures and node features that play a crucial role in the predictions made by a GNN.
Note
For an example of using GraphMaskExplainer, see examples/contrib/graphmask_explainer.py.
 Parameters
num_layers (int) – The number of layers to use.
epochs (int, optional) – The number of epochs to train. (default: 100)
lr (float, optional) – The learning rate to apply. (default: 0.01)
penalty_scaling (int, optional) – Scaling value of the penalty term. Value must lie between 0 and 10. (default: 5)
lambda_optimizer_lr (float, optional) – The learning rate to optimize the Lagrange multiplier. (default: 1e-2)
init_lambda (float, optional) – The Lagrange multiplier. Value must lie between 0 and 1. (default: 0.55)
allowance (float, optional) – The tolerance level, a float value between 0 and 1. (default: 0.03)
log (bool, optional) – If set to False, will not log any learning progress. (default: True)
**kwargs (optional) – Additional hyper-parameters to override default settings in coeffs.
 forward(model: Module, x: Tensor, edge_index: Tensor, *, target: Tensor, index: Optional[Union[int, Tensor]] = None, **kwargs) Explanation [source]
Computes the explanation.
 Parameters
model (torch.nn.Module) – The model to explain.
x (Union[torch.Tensor, Dict[NodeType, torch.Tensor]]) – The input node features of a homogeneous or heterogeneous graph.
edge_index (Union[torch.Tensor, Dict[NodeType, torch.Tensor]]) – The input edge indices of a homogeneous or heterogeneous graph.
target (torch.Tensor) – The target of the model.
index (Union[int, Tensor], optional) – The index of the model output to explain. Can be a single index or a tensor of indices. (default: None)
**kwargs (optional) – Additional keyword arguments passed to model.
 class PGMExplainer(feature_index: Optional[List] = None, perturbation_mode: str = 'randint', perturbations_is_positive_only: bool = False, is_perturbation_scaled: bool = False, num_samples: int = 100, max_subgraph_size: Optional[int] = None, significance_threshold: float = 0.05, pred_threshold: float = 0.1)[source]
The PGMExplainer model from the “PGMExplainer: Probabilistic Graphical Model Explanations for Graph Neural Networks” paper.
The generated Explanation provides a node_mask and a pgm_stats tensor, which stores the \(p\)-values of each node as calculated by the Chi-squared test.
 Parameters
feature_index (List) – The indices of the perturbed features. If set to None, all features are perturbed. (default: None)
perturbation_mode (str, optional) – The method to generate the variations in features. One of "randint", "mean", "zero", "max" or "uniform". (default: "randint")
perturbations_is_positive_only (bool, optional) – If set to True, restrict perturbed values to be positive. (default: False)
is_perturbation_scaled (bool, optional) – If set to True, will normalize the range of the perturbed features. (default: False)
num_samples (int, optional) – The number of samples of perturbations used to test the significance of nodes to the prediction. (default: 100)
max_subgraph_size (int, optional) – The maximum number of neighbors to consider for the explanation. (default: None)
significance_threshold (float, optional) – The statistical threshold (\(p\)-value) for which a node is considered to have an effect on the prediction. (default: 0.05)
pred_threshold (float, optional) – The buffer value (in range [0, 1]) for the output of perturbed data to be considered different from the original. (default: 0.1)
 forward(model: Module, x: Tensor, edge_index: Tensor, *, target: Tensor, index: Optional[Union[int, Tensor]] = None, **kwargs) Explanation [source]
Computes the explanation.
 Parameters
model (torch.nn.Module) – The model to explain.
x (Union[torch.Tensor, Dict[NodeType, torch.Tensor]]) – The input node features of a homogeneous or heterogeneous graph.
edge_index (Union[torch.Tensor, Dict[NodeType, torch.Tensor]]) – The input edge indices of a homogeneous or heterogeneous graph.
target (torch.Tensor) – The target of the model.
index (Union[int, Tensor], optional) – The index of the model output to explain. Can be a single index or a tensor of indices. (default: None)
**kwargs (optional) – Additional keyword arguments passed to model.