mindspore_gl.nn.GatedGraphConv
- class mindspore_gl.nn.GatedGraphConv(in_feat_size: int, out_feat_size: int, n_steps: int, n_etype: int, bias=True)[source]
Gated Graph Convolution Layer. From the paper Gated Graph Sequence Neural Networks.
\[h_{i}^{0} = [ x_i \| \mathbf{0} ]\]
\[a_{i}^{t} = \sum_{j\in\mathcal{N}(i)} W_{e_{ij}} h_{j}^{t}\]
\[h_{i}^{t+1} = \mathrm{GRU}(a_{i}^{t}, h_{i}^{t})\]
- Parameters
in_feat_size (int): Size of the input node features. Must not exceed out_feat_size, since the input is zero-padded to out_feat_size (see \(h_{i}^{0}\) above).
out_feat_size (int): Size of the output node features.
n_steps (int): Number of propagation (GRU update) steps.
n_etype (int): Number of edge types.
bias (bool): Whether the layer uses a learnable bias. Default: True.
- Inputs:
x (Tensor): The input node features. The shape is \((N,*)\) where \(N\) is the number of nodes, and \(*\) means any number of additional dimensions.
src_idx (List): The source index for each edge type.
dst_idx (List): The destination index for each edge type.
n_nodes (int): The number of nodes of the whole graph.
n_edges (List): The number of edges for each edge type.
- Outputs:
Tensor, output node features. The shape is \((N, out\_feat\_size)\).
- Supported Platforms:
Ascend
GPU
Examples
>>> import mindspore as ms
>>> from mindspore_gl.nn import GatedGraphConv
>>> from mindspore_gl import GraphField
>>> feat_size = 16
>>> n_nodes = 4
>>> h = ms.ops.Ones()((n_nodes, feat_size), ms.float32)
>>> src_idx = [ms.Tensor([0, 1, 2, 3], ms.int32), ms.Tensor([0, 0, 1, 1], ms.int32),
...            ms.Tensor([0, 0, 1, 2, 3], ms.int32), ms.Tensor([2, 3, 3, 0, 1], ms.int32),
...            ms.Tensor([0, 1, 2, 3], ms.int32), ms.Tensor([2, 0, 2, 1], ms.int32)]
>>> dst_idx = [ms.Tensor([0, 0, 1, 1], ms.int32), ms.Tensor([0, 1, 2, 3], ms.int32),
...            ms.Tensor([2, 3, 3, 0, 1], ms.int32), ms.Tensor([0, 0, 1, 2, 3], ms.int32),
...            ms.Tensor([2, 0, 2, 1], ms.int32), ms.Tensor([0, 1, 2, 3], ms.int32)]
>>> n_edges = [4, 4, 5, 5, 4, 4]
>>> conv = GatedGraphConv(feat_size, 16, 2, 6, True)
>>> ret = conv(h, src_idx, dst_idx, n_nodes, n_edges)
>>> print(ret.shape)
(4, 16)
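To make the three propagation equations concrete, the update can be sketched in plain NumPy: pad the input to the output size, accumulate per-edge-type messages, and run a standard GRU cell update. This is an illustrative sketch only, not the MindSpore GL implementation; all helper names (gated_graph_conv, the gru parameter dictionary, etc.) are invented for this example.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_graph_conv(x, edges_by_type, weights, gru, out_size, n_steps):
    """NumPy sketch of gated graph propagation (illustrative, not the real layer).

    x: (N, in_size) node features, in_size <= out_size.
    edges_by_type: list of (src, dst) index arrays, one pair per edge type.
    weights: list of (out_size, out_size) matrices W_e, one per edge type.
    gru: dict with keys Wz, Uz, Wr, Ur, Wh, Uh, each (out_size, out_size).
    """
    n, in_size = x.shape
    # h^0 = [x || 0]: zero-pad the input features up to out_size
    h = np.zeros((n, out_size))
    h[:, :in_size] = x
    for _ in range(n_steps):
        # a_i^t = sum over incoming edges, per edge type e: W_e h_j^t
        a = np.zeros_like(h)
        for (src, dst), W in zip(edges_by_type, weights):
            np.add.at(a, dst, h[src] @ W.T)  # scatter-add messages to dst nodes
        # h^{t+1} = GRU(a^t, h^t): standard GRU cell update
        z = sigmoid(a @ gru["Wz"].T + h @ gru["Uz"].T)        # update gate
        r = sigmoid(a @ gru["Wr"].T + h @ gru["Ur"].T)        # reset gate
        h_tilde = np.tanh(a @ gru["Wh"].T + (r * h) @ gru["Uh"].T)
        h = (1 - z) * h + z * h_tilde
    return h

rng = np.random.default_rng(0)
n, in_size, out_size = 4, 3, 8
x = rng.normal(size=(n, in_size))
# one edge type forming a directed ring 0 -> 1 -> 2 -> 3 -> 0
edges = [(np.array([0, 1, 2, 3]), np.array([1, 2, 3, 0]))]
weights = [0.1 * rng.normal(size=(out_size, out_size))]
gru = {k: 0.1 * rng.normal(size=(out_size, out_size))
       for k in ("Wz", "Uz", "Wr", "Ur", "Wh", "Uh")}
out = gated_graph_conv(x, edges, weights, gru, out_size, n_steps=2)
print(out.shape)  # (4, 8): one out_size-dimensional vector per node
```

Note the constraint visible in the first equation: because \(h_{i}^{0}\) pads the input with zeros, the sketch (like the layer) assumes in_feat_size does not exceed out_feat_size.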