torch_geometric.nn.conv.FAConv
- class FAConv(channels: int, eps: float = 0.1, dropout: float = 0.0, cached: bool = False, add_self_loops: bool = True, normalize: bool = True, **kwargs)[source]
Bases: MessagePassing
The Frequency Adaptive Graph Convolution operator from the “Beyond Low-Frequency Information in Graph Convolutional Networks” paper.
\[\mathbf{x}^{\prime}_i = \epsilon \cdot \mathbf{x}^{(0)}_i + \sum_{j \in \mathcal{N}(i)} \frac{\alpha_{i,j}}{\sqrt{d_i d_j}} \mathbf{x}_{j}\]
where \(\mathbf{x}^{(0)}_i\) and \(d_i\) denote the initial feature representation and node degree of node \(i\), respectively. The attention coefficients \(\alpha_{i,j}\) are computed as
\[\alpha_{i,j} = \tanh(\mathbf{a}^{\top}[\mathbf{x}_i, \mathbf{x}_j])\]
based on the trainable parameter vector \(\mathbf{a}\).
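To make the two formulas concrete, here is a minimal from-scratch sketch of the update in plain PyTorch. It is not the library implementation: it skips self-loops, dropout, and caching, and the vector `att` is a stand-in for the trainable parameter \(\mathbf{a}\).

```python
import torch

def fa_conv_update(x, x_0, edge_index, att, eps=0.1):
    """Sketch of x'_i = eps * x0_i + sum_j alpha_ij / sqrt(d_i d_j) * x_j."""
    num_nodes = x.size(0)
    src, dst = edge_index  # edge (j -> i): src = j, dst = i

    # Node degrees, counted from incoming edges.
    deg = torch.zeros(num_nodes).scatter_add_(0, dst, torch.ones(dst.numel()))

    # alpha_{i,j} = tanh(a^T [x_i, x_j]) for every edge.
    alpha = torch.tanh(torch.cat([x[dst], x[src]], dim=-1) @ att)

    # Normalized coefficient alpha_{i,j} / sqrt(d_i * d_j);
    # clamp guards against isolated nodes.
    coeff = alpha / (deg[dst] * deg[src]).clamp(min=1).sqrt()

    # Aggregate coeff_{i,j} * x_j into node i and add the scaled
    # initial features eps * x0_i.
    return eps * x_0 + torch.zeros_like(x).index_add_(
        0, dst, coeff.view(-1, 1) * x[src])

x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
att = torch.randn(2 * 16)  # stand-in for the learned vector a
out = fa_conv_update(x, x, edge_index, att)
```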
- Parameters:
channels (int) – Size of each input sample, or `-1` to derive the size from the first input(s) to the forward method.
eps (float, optional) – \(\epsilon\)-value. (default: `0.1`)
dropout (float, optional) – Dropout probability of the normalized coefficients, which exposes each node to a stochastically sampled neighborhood during training. (default: `0`)
cached (bool, optional) – If set to `True`, the layer will cache the computation of \(\sqrt{d_i d_j}\) on first execution and use the cached version for further executions. This parameter should only be set to `True` in transductive learning scenarios. (default: `False`)
add_self_loops (bool, optional) – If set to `False`, will not add self-loops to the input graph. (default: `True`)
normalize (bool, optional) – Whether to add self-loops (if `add_self_loops` is `True`) and compute symmetric normalization coefficients on the fly. If set to `False`, `edge_weight` needs to be provided in the layer's `forward()` method; see the sketch after this list. (default: `True`)
**kwargs (optional) – Additional arguments of `torch_geometric.nn.conv.MessagePassing`.
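Because `normalize=False` shifts responsibility for the coefficients to the caller, the following sketch rebuilds the symmetric normalization the layer would otherwise compute on the fly (the toy graph and feature sizes are assumptions of this example):

```python
import torch
from torch_geometric.nn import FAConv
from torch_geometric.utils import add_self_loops, degree

# Toy graph: 4 nodes, each undirected edge stored in both directions.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
x = torch.randn(4, 16)

conv = FAConv(channels=16, eps=0.1, normalize=False)

# With normalize=False the layer uses edge_weight as-is, so we mimic the
# default behavior: add self-loops, then compute 1 / sqrt(d_i * d_j).
edge_index, _ = add_self_loops(edge_index, num_nodes=x.size(0))
row, col = edge_index
deg = degree(col, x.size(0), dtype=x.dtype)
edge_weight = deg[row].pow(-0.5) * deg[col].pow(-0.5)

out = conv(x, x_0=x, edge_index=edge_index, edge_weight=edge_weight)
```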
- Shapes:
input: node features \((|\mathcal{V}|, F)\), initial node features \((|\mathcal{V}|, F)\), edge indices \((2, |\mathcal{E}|)\), edge weights \((|\mathcal{E}|)\) (optional)
output: node features \((|\mathcal{V}|, F)\), or \(((|\mathcal{V}|, F), ((2, |\mathcal{E}|), (|\mathcal{E}|)))\) if `return_attention_weights=True`
- forward(x: Tensor, x_0: Tensor, edge_index: Union[Tensor, SparseTensor], edge_weight: Optional[Tensor] = None, return_attention_weights: Optional[bool] = None) → Tensor [source]
- forward(x: Tensor, x_0: Tensor, edge_index: Tensor, edge_weight: Optional[Tensor] = None, return_attention_weights: Optional[bool] = None) → Tuple[Tensor, Tuple[Tensor, Tensor]]
- forward(x: Tensor, x_0: Tensor, edge_index: SparseTensor, edge_weight: Optional[Tensor] = None, return_attention_weights: Optional[bool] = None) → Tuple[Tensor, SparseTensor]
Runs the forward pass of the module.
- Parameters:
x (torch.Tensor) – The node features.
x_0 (torch.Tensor) – The initial input node features.
edge_index (torch.Tensor or SparseTensor) – The edge indices.
edge_weight (torch.Tensor, optional) – The edge weights. (default: `None`)
return_attention_weights (bool, optional) – If set to `True`, will additionally return the tuple `(edge_index, attention_weights)`, holding the computed attention weights for each edge. (default: `None`)
- Return type:
`Union[Tensor, Tuple[Tensor, Tuple[Tensor, Tensor]], Tuple[Tensor, SparseTensor]]`
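For completeness, a small usage sketch of `forward()` (the graph and feature sizes are made up): a plain call returns the updated node features, while `return_attention_weights=True` additionally yields one coefficient per edge of the (possibly self-loop-augmented) graph.

```python
import torch
from torch_geometric.nn import FAConv

edge_index = torch.tensor([[0, 1, 1, 2],
                           [1, 0, 2, 1]])
x = torch.randn(3, 8)
conv = FAConv(channels=8, eps=0.1)

# Returns node features of shape [num_nodes, channels] = [3, 8].
out = conv(x, x_0=x, edge_index=edge_index)

# Also returns the edge indices and one attention weight per edge.
out, (ei, alpha) = conv(x, x_0=x, edge_index=edge_index,
                        return_attention_weights=True)
print(out.shape, ei.shape, alpha.shape)
```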