torch_geometric.nn.conv.GENConv
- class GENConv(in_channels: Union[int, Tuple[int, int]], out_channels: int, aggr: Optional[Union[str, List[str], Aggregation]] = 'softmax', t: float = 1.0, learn_t: bool = False, p: float = 1.0, learn_p: bool = False, msg_norm: bool = False, learn_msg_scale: bool = False, norm: str = 'batch', num_layers: int = 2, expansion: int = 2, eps: float = 1e-07, bias: bool = False, edge_dim: Optional[int] = None, **kwargs)[source]
Bases:
MessagePassing
The GENeralized Graph Convolution (GENConv) from the “DeeperGCN: All You Need to Train Deeper GCNs” paper.
GENConv supports both \(\textrm{softmax}\) (see SoftmaxAggregation) and \(\textrm{powermean}\) (see PowerMeanAggregation) aggregation. Its message construction is given by:
\[\mathbf{x}_i^{\prime} = \mathrm{MLP} \left( \mathbf{x}_i + \mathrm{AGG} \left( \left\{ \mathrm{ReLU} \left( \mathbf{x}_j + \mathbf{e_{ji}} \right) + \epsilon : j \in \mathcal{N}(i) \right\} \right) \right)\]
Note
For an example of using GENConv, see examples/ogbn_proteins_deepgcn.py.
- Parameters:
in_channels (int or tuple) – Size of each input sample, or -1 to derive the size from the first input(s) to the forward method. A tuple corresponds to the sizes of source and target dimensionalities.
out_channels (int) – Size of each output sample.
aggr (str or Aggregation, optional) – The aggregation scheme to use. Any aggregation of torch_geometric.nn.aggr can be used ("softmax", "powermean", "add", "mean", "max"). (default: "softmax")
t (float, optional) – Initial inverse temperature for softmax aggregation. (default: 1.0)
learn_t (bool, optional) – If set to True, will learn the value t for softmax aggregation dynamically. (default: False)
p (float, optional) – Initial power for power mean aggregation. (default: 1.0)
learn_p (bool, optional) – If set to True, will learn the value p for power mean aggregation dynamically. (default: False)
msg_norm (bool, optional) – If set to True, will use message normalization. (default: False)
learn_msg_scale (bool, optional) – If set to True, will learn the scaling factor of message normalization. (default: False)
norm (str, optional) – Norm layer of the MLP layers ("batch", "layer", "instance"). (default: "batch")
num_layers (int, optional) – The number of MLP layers. (default: 2)
expansion (int, optional) – The expansion factor of hidden channels in MLP layers. (default: 2)
eps (float, optional) – The epsilon value of the message construction function. (default: 1e-7)
bias (bool, optional) – If set to False, the layer will not learn an additive bias. (default: False)
edge_dim (int, optional) – Edge feature dimensionality. If set to None, edge feature dimensionality is expected to match out_channels; otherwise, edge features are linearly transformed to match out_channels. (default: None)
**kwargs (optional) – Additional arguments of torch_geometric.nn.conv.MessagePassing.
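The message construction above can be sketched in plain PyTorch. This is an illustrative re-implementation under stated assumptions, not the library's internal code: the `mlp` callable, the node features `x`, the `(2, E)` edge index, and the edge features `edge_attr` are hypothetical stand-ins, and the softmax aggregation is written with an explicit per-node loop for clarity rather than speed.

```python
import torch


def gen_message_pass(x, edge_index, edge_attr, mlp, t=1.0, eps=1e-7):
    """Sketch of the GENConv update: MLP(x_i + AGG({ReLU(x_j + e_ji) + eps}))."""
    src, dst = edge_index  # each edge goes j (src) -> i (dst)

    # Message per edge: ReLU(x_j + e_ji) + eps.
    msg = torch.relu(x[src] + edge_attr) + eps

    # Softmax aggregation: entrywise softmax over t * msg across each
    # target node's incoming messages, then a weighted sum.
    alpha = torch.zeros_like(msg)
    for i in range(x.size(0)):
        mask = dst == i
        if mask.any():
            alpha[mask] = torch.softmax(t * msg[mask], dim=0)
    agg = torch.zeros_like(x).index_add_(0, dst, alpha * msg)

    # Final update: MLP applied to the residual sum.
    return mlp(x + agg)
```

A node with no incoming edges receives a zero aggregate, so with an identity MLP its features pass through unchanged.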
- Shapes:
input: node features \((|\mathcal{V}|, F_{in})\) or \(((|\mathcal{V_s}|, F_{s}), (|\mathcal{V_t}|, F_{t}))\) if bipartite, edge indices \((2, |\mathcal{E}|)\), edge attributes \((|\mathcal{E}|, D)\) (optional)
output: node features \((|\mathcal{V}|, F_{out})\) or \((|\mathcal{V}_t|, F_{out})\) if bipartite