torch_geometric.nn.conv.MixHopConv
- class MixHopConv(in_channels: int, out_channels: int, powers: Optional[List[int]] = None, add_self_loops: bool = True, bias: bool = True, **kwargs)
Bases: MessagePassing
The Mix-Hop graph convolutional operator from the “MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing” paper.
\[\mathbf{X}^{\prime} = {\Bigg\Vert}_{p \in P} \left( \mathbf{\hat{D}}^{-1/2} \mathbf{\hat{A}} \mathbf{\hat{D}}^{-1/2} \right)^p \mathbf{X} \mathbf{\Theta},\]
where \(\mathbf{\hat{A}} = \mathbf{A} + \mathbf{I}\) denotes the adjacency matrix with inserted self-loops and \(\hat{D}_{ii} = \sum_{j=0} \hat{A}_{ij}\) its diagonal degree matrix.
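For intuition, here is a small dense sketch of the propagation rule above (assuming, as in the MixHop paper, a separate weight matrix \(\mathbf{\Theta}_p\) per power; this is only an illustration, not the library's sparse message-passing implementation):

```python
import torch

def mixhop_dense(x, adj, weights, powers=(0, 1, 2)):
    """Dense illustration of MixHop: concatenate powers of the normalized
    adjacency applied to the linearly transformed node features."""
    adj_hat = adj + torch.eye(adj.size(0))                    # A_hat = A + I
    deg_inv_sqrt = adj_hat.sum(dim=1).pow(-0.5)               # D_hat^{-1/2}
    norm_adj = deg_inv_sqrt.view(-1, 1) * adj_hat * deg_inv_sqrt.view(1, -1)
    outs = []
    for p, theta in zip(powers, weights):                     # one Theta_p per power
        out = x @ theta                                       # X @ Theta_p
        for _ in range(p):                                    # apply normalized adjacency p times
            out = norm_adj @ out
        outs.append(out)
    return torch.cat(outs, dim=-1)                            # concatenation over p in P
```

The concatenation over \(p \in P\) is why the output has \(|P| \cdot F_{out}\) channels, as listed under Shapes below.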
- Parameters:
  - in_channels (int) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method.
  - out_channels (int) – Size of each output sample.
  - powers (List[int], optional) – The powers of the adjacency matrix to use. (default: [0, 1, 2])
  - add_self_loops (bool, optional) – If set to False, will not add self-loops to the input graph. (default: True)
  - bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: True)
  - **kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
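A few illustrative constructions (channel sizes are arbitrary; the keyword names follow the parameter list above):

```python
from torch_geometric.nn import MixHopConv

conv = MixHopConv(16, 32)                          # powers defaults to [0, 1, 2]
conv = MixHopConv(16, 32, powers=[0, 2, 4])        # mix only even-hop neighborhoods
conv = MixHopConv(16, 32, add_self_loops=False,    # keep the graph as given (no inserted self-loops)
                  bias=False)                      # and learn no additive bias
```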
- Shapes:
  - input: node features \((|\mathcal{V}|, F_{in})\), edge indices \((2, |\mathcal{E}|)\), edge weights \((|\mathcal{E}|)\) (optional)
  - output: node features \((|\mathcal{V}|, |P| \cdot F_{out})\)
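A minimal end-to-end sketch on a toy graph (the graph, feature sizes, and powers are illustrative), showing how the output width equals \(|P| \cdot F_{out}\):

```python
import torch
from torch_geometric.nn import MixHopConv

# Toy undirected graph with 4 nodes; each edge is listed in both directions.
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
x = torch.randn(4, 16)                             # (|V|, F_in)

conv = MixHopConv(in_channels=16, out_channels=32, powers=[0, 1, 2])
out = conv(x, edge_index)
print(out.shape)                                   # torch.Size([4, 96]) == (|V|, |P| * F_out)
```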