torch_geometric.nn.conv.ARMAConv
- class ARMAConv(in_channels: int, out_channels: int, num_stacks: int = 1, num_layers: int = 1, shared_weights: bool = False, act: Optional[Callable] = ReLU(), dropout: float = 0.0, bias: bool = True, **kwargs)[source]
Bases:
MessagePassing
The ARMA graph convolutional operator from the “Graph Neural Networks with Convolutional ARMA Filters” paper.
\[\mathbf{X}^{\prime} = \frac{1}{K} \sum_{k=1}^K \mathbf{X}_k^{(T)},\]
with \(\mathbf{X}_k^{(T)}\) being recursively defined by
\[\mathbf{X}_k^{(t+1)} = \sigma \left( \mathbf{\hat{L}} \mathbf{X}_k^{(t)} \mathbf{W} + \mathbf{X}^{(0)} \mathbf{V} \right),\]
where \(\mathbf{\hat{L}} = \mathbf{I} - \mathbf{L} = \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}\) denotes the modified Laplacian, with \(\mathbf{L} = \mathbf{I} - \mathbf{D}^{-1/2} \mathbf{A} \mathbf{D}^{-1/2}\) the symmetrically normalized graph Laplacian.
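To make the recursion concrete, the following is a minimal dense sketch of the ARMA filter, not the library implementation: it assumes equal input and output channel counts, weights shared across layers, and no dropout or bias. The helper name `arma_filter` and the arguments `lap_hat`, `weights`, and `skip_weights` are hypothetical.

```python
import torch

def arma_filter(x, lap_hat, weights, skip_weights, num_layers, act=torch.relu):
    # Hypothetical dense sketch of the ARMA operator (assumes
    # in_channels == out_channels and layer-shared weights).
    # `lap_hat` is the modified Laplacian D^{-1/2} A D^{-1/2} as an (N, N) tensor,
    # `weights[k]` and `skip_weights[k]` are the W and V matrices of stack k.
    stacks = []
    for w, v in zip(weights, skip_weights):
        out = x
        for _ in range(num_layers):
            # X_k^{(t+1)} = sigma(L_hat X_k^{(t)} W + X^{(0)} V)
            out = act(lap_hat @ out @ w + x @ v)
        stacks.append(out)
    # X' = (1/K) * sum_k X_k^{(T)}
    return torch.stack(stacks, dim=0).mean(dim=0)
```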
- Parameters:
  - in_channels (int) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method.
  - out_channels (int) – Size of each output sample \(\mathbf{x}^{(t+1)}\).
  - num_stacks (int, optional) – Number of parallel stacks \(K\). (default: 1)
  - num_layers (int, optional) – Number of layers \(T\). (default: 1)
  - act (callable, optional) – Activation function \(\sigma\). (default: torch.nn.ReLU())
  - shared_weights (bool, optional) – If set to True, the layers in each stack will share the same parameters. (default: False)
  - dropout (float, optional) – Dropout probability of the skip connection. (default: 0.0)
  - bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
  - **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
- Shapes:
input: node features \((|\mathcal{V}|, F_{in})\), edge indices \((2, |\mathcal{E}|)\), edge weights \((|\mathcal{E}|)\) (optional)
output: node features \((|\mathcal{V}|, F_{out})\)
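A minimal usage sketch matching the shapes above; the toy graph (4 nodes, 16 input features, 5 edges) and all tensor values are illustrative assumptions:

```python
import torch
from torch_geometric.nn import ARMAConv

# Hypothetical toy graph: 4 nodes with 16 features each and 5 directed edges,
# stored as a (2, |E|) edge index; edge weights are optional.
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 3],
                           [1, 0, 2, 3, 2]])
edge_weight = torch.rand(edge_index.size(1))

conv = ARMAConv(in_channels=16, out_channels=32,
                num_stacks=3, num_layers=2, dropout=0.25)
out = conv(x, edge_index, edge_weight)  # node features of shape [4, 32]
```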