torch_geometric.nn.norm.LayerNorm
- class LayerNorm(in_channels: int, eps: float = 1e-05, affine: bool = True, mode: str = 'graph')[source]
Bases:
Module
Applies layer normalization over each individual example in a batch of features as described in the “Layer Normalization” paper.
\[\mathbf{x}^{\prime}_i = \frac{\mathbf{x}_i - \textrm{E}[\mathbf{x}_i]}{\sqrt{\textrm{Var}[\mathbf{x}_i] + \epsilon}} \odot \gamma + \beta\]
The mean and standard deviation are calculated across all nodes and all node channels separately for each object in a mini-batch.
- Parameters:
in_channels (int) – Size of each input sample.
eps (float, optional) – A value added to the denominator for numerical stability. (default: 1e-5)
affine (bool, optional) – If set to True, this module has learnable affine parameters \(\gamma\) and \(\beta\). (default: True)
mode (str, optional) – The normalization mode to use for layer normalization ("graph" or "node"). If "graph" is used, each graph will be considered as an element to be normalized. If "node" is used, each node will be considered as an element to be normalized. (default: "graph")
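To make the difference between the two modes concrete, here is a minimal sketch of the normalization math in plain PyTorch (not the library's implementation; the helper names are illustrative only). In "graph" mode, one mean and variance is computed over all nodes and channels of each graph; in "node" mode, each node is normalized over its own channels:

```python
import torch

def layer_norm_graph(x, batch, eps=1e-5):
    # "graph" mode: statistics are shared by every node and channel
    # belonging to the same graph in the mini-batch.
    out = torch.empty_like(x)
    for b in batch.unique():
        mask = batch == b
        v = x[mask]
        mean = v.mean()                 # scalar over all nodes and channels
        var = v.var(unbiased=False)
        out[mask] = (v - mean) / torch.sqrt(var + eps)
    return out

def layer_norm_node(x, eps=1e-5):
    # "node" mode: each node is normalized over its own feature channels,
    # analogous to torch.nn.LayerNorm on the last dimension.
    mean = x.mean(dim=-1, keepdim=True)
    var = x.var(dim=-1, unbiased=False, keepdim=True)
    return (x - mean) / torch.sqrt(var + eps)
```

(The affine parameters \(\gamma\) and \(\beta\) are omitted here for brevity; with `affine=True` the normalized output is additionally scaled and shifted.)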
- forward(x: Tensor, batch: Optional[Tensor] = None, batch_size: Optional[int] = None) Tensor [source]
Forward pass.
- Parameters:
x (torch.Tensor) – The source tensor.
batch (torch.Tensor, optional) – The batch vector \(\mathbf{b} \in {\{ 0, \ldots, B-1\}}^N\), which assigns each element to a specific example. (default: None)
batch_size (int, optional) – The number of examples \(B\). Automatically calculated if not given. (default: None)
- Return type:
Tensor