# torch_geometric.nn.dense.dense_mincut_pool

dense_mincut_pool(x: Tensor, adj: Tensor, s: Tensor, mask: Optional[Tensor] = None, temp: float = 1.0) [source]

The MinCut pooling operator from the “Spectral Clustering with Graph Neural Networks for Graph Pooling” paper.

$\mathbf{X}^{\prime} = {\mathrm{softmax}(\mathbf{S})}^{\top} \cdot \mathbf{X}$

$\mathbf{A}^{\prime} = {\mathrm{softmax}(\mathbf{S})}^{\top} \cdot \mathbf{A} \cdot \mathrm{softmax}(\mathbf{S})$

based on dense learned assignments $$\mathbf{S} \in \mathbb{R}^{B \times N \times C}$$. Returns the pooled node feature matrix, the coarsened and symmetrically normalized adjacency matrix, and two auxiliary objectives: (1) the MinCut loss

$\mathcal{L}_c = - \frac{\mathrm{Tr}(\mathbf{S}^{\top} \mathbf{A} \mathbf{S})} {\mathrm{Tr}(\mathbf{S}^{\top} \mathbf{D} \mathbf{S})}$

where $$\mathbf{D}$$ is the degree matrix, and (2) the orthogonality loss

$\mathcal{L}_o = {\left\| \frac{\mathbf{S}^{\top} \mathbf{S}} {{\|\mathbf{S}^{\top} \mathbf{S}\|}_F} -\frac{\mathbf{I}_C}{\sqrt{C}} \right\|}_F.$
Parameters:
• x (torch.Tensor) – Node feature tensor $$\mathbf{X} \in \mathbb{R}^{B \times N \times F}$$, with batch-size $$B$$, (maximum) number of nodes $$N$$ for each graph, and feature dimension $$F$$.

• adj (torch.Tensor) – Adjacency tensor $$\mathbf{A} \in \mathbb{R}^{B \times N \times N}$$.

• s (torch.Tensor) – Assignment tensor $$\mathbf{S} \in \mathbb{R}^{B \times N \times C}$$ with number of clusters $$C$$. The softmax does not have to be applied beforehand, since it is executed within this method.

• mask (torch.Tensor, optional) – Mask matrix $$\mathbf{M} \in {\{ 0, 1 \}}^{B \times N}$$ indicating the valid nodes for each graph. (default: None)

• temp (float, optional) – Temperature parameter for softmax function. (default: 1.0)

Return type:
(Tensor, Tensor, Tensor, Tensor)
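
As a rough illustration of the formulas above, the following is a single-graph NumPy sketch of the pooling step and both auxiliary losses. It is not the torch_geometric implementation: it assumes no batch dimension and no mask, and it omits the diagonal removal and symmetric normalization that the library applies to the coarsened adjacency.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mincut_pool_np(x, adj, s, temp=1.0):
    """Illustrative single-graph sketch of the MinCut pooling equations
    (hypothetical helper, not the torch_geometric implementation)."""
    s = softmax(s / temp, axis=-1)       # (N, C) soft cluster assignments
    x_pool = s.T @ x                     # X' = S^T X   -> (C, F)
    adj_pool = s.T @ adj @ s             # A' = S^T A S -> (C, C)

    # MinCut loss: -Tr(S^T A S) / Tr(S^T D S), with degree matrix D.
    d = np.diag(adj.sum(axis=-1))
    mincut_loss = -np.trace(s.T @ adj @ s) / np.trace(s.T @ d @ s)

    # Orthogonality loss: || S^T S / ||S^T S||_F - I_C / sqrt(C) ||_F
    ss = s.T @ s
    c = s.shape[-1]
    ortho_loss = np.linalg.norm(ss / np.linalg.norm(ss) - np.eye(c) / np.sqrt(c))
    return x_pool, adj_pool, mincut_loss, ortho_loss
```

For a symmetric, non-negative adjacency the MinCut term lies in $[-1, 0]$ (it is the negated normalized-cut ratio), which is why minimizing it encourages strongly connected clusters.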