mindspore_gl.nn.DOTGATConv
- class mindspore_gl.nn.DOTGATConv(in_feat_size: int, out_feat_size: int, num_heads: int, bias=False)
Applies a dot-product version of self-attention in GAT, from the paper Graph Attention Network.
\[h_i^{(l+1)} = \sum_{j\in \mathcal{N}(i)} \alpha_{i, j} h_j^{(l)}\]
where \(\alpha_{i, j}\) represents the attention score between node \(i\) and node \(j\):
\[\begin{split}\alpha_{i, j} = \mathrm{softmax_i}(e_{ij}^{l}) \\ e_{ij}^{l} = ({W_i^{(l)} h_i^{(l)}})^T \cdot {W_j^{(l)} h_j^{(l)}}\end{split}\]
- Parameters
in_feat_size (int): Input node feature size.
out_feat_size (int): Output node feature size.
num_heads (int): Number of attention heads.
bias (bool): Whether the layer uses a bias vector. Default: False.
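Below is a minimal NumPy sketch of the attention math above for a single head, as an illustration only, not the layer's implementation. The projection matrix w, the edge index arrays src/dst, and the choice to aggregate the projected features (which matches the layer's output size) are assumptions of this sketch; multi-head attention repeats the same computation with one projection per head.

import numpy as np

def dot_gat_attention(h, w, src, dst, n_nodes):
    # Project node features: z_j = W h_j, shape (n_nodes, out_size).
    z = h @ w
    # Unnormalized score per edge: e_ij = (W h_i)^T (W h_j),
    # where each edge carries a message from source j to destination i.
    e = np.einsum('ed,ed->e', z[dst], z[src])
    # alpha_ij = softmax_i(e_ij): normalize over the incoming edges of node i.
    alpha = np.empty_like(e)
    for i in range(n_nodes):
        m = dst == i
        if not m.any():
            continue
        ex = np.exp(e[m] - e[m].max())
        alpha[m] = ex / ex.sum()
    # Aggregate: h_i' = sum_{j in N(i)} alpha_ij * z_j (scatter-add by dst).
    out = np.zeros_like(z)
    np.add.at(out, dst, alpha[:, None] * z[src])
    return out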
- Inputs:
x (Tensor): The input node features. The shape is \((N,*)\) where \(N\) is the number of nodes, and \(*\) could be of any shape.
g (Graph): The input graph.
- Outputs:
Tensor, output node features. The shape is \((N, num\_heads, out\_feat\_size)\).
- Raises
TypeError: If in_feat_size, out_feat_size or num_heads is not an int.
TypeError: If bias is not a bool.
- Supported Platforms:
Ascend
GPU
Examples
>>> import mindspore as ms
>>> from mindspore_gl.nn import DOTGATConv
>>> from mindspore_gl import GraphField
>>> n_nodes = 4
>>> n_edges = 8
>>> feat_size = 16
>>> src_idx = ms.Tensor([0, 0, 0, 1, 1, 1, 2, 3], ms.int32)
>>> dst_idx = ms.Tensor([0, 1, 3, 1, 2, 3, 3, 2], ms.int32)
>>> ones = ms.ops.Ones()
>>> nodes_feat = ones((n_nodes, feat_size), ms.float32)
>>> graph_field = GraphField(src_idx, dst_idx, n_nodes, n_edges)
>>> out_size = 4
>>> conv = DOTGATConv(feat_size, out_size, num_heads=2, bias=True)
>>> ret = conv(nodes_feat, *graph_field.get_graph())
>>> print(ret.shape)
(4, 2, 4)
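As a follow-up to the example (not part of the reference example above), the per-head outputs can be merged downstream, for instance by averaging over the head axis; this sketch assumes the standard Tensor.mean method:

>>> merged = ret.mean(axis=1)
>>> print(merged.shape)
(4, 4)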