mindspore_gl.nn.SAGEConv
- class mindspore_gl.nn.SAGEConv(in_feat_size: int, out_feat_size: int, aggregator_type: str = 'pool', bias=True, norm=None, activation=None)[source]
GraphSAGE Layer. From the paper Inductive Representation Learning on Large Graphs.

$$h_{\mathcal{N}(i)}^{(l+1)} = \mathrm{aggregate}\left(\{h_j^{(l)}, \forall j \in \mathcal{N}(i)\}\right)$$

$$h_i^{(l+1)} = \sigma\left(W \cdot \mathrm{concat}(h_i^{(l)}, h_{\mathcal{N}(i)}^{(l+1)})\right)$$

$$h_i^{(l+1)} = \mathrm{norm}(h_i^{(l+1)})$$

where $\mathcal{N}(i)$ is the neighborhood of node $i$ and $\sigma$ is the activation function. If weights are provided on each edge, the weighted graph convolution is defined as:

$$h_{\mathcal{N}(i)}^{(l+1)} = \mathrm{aggregate}\left(\{e_{ji} h_j^{(l)}, \forall j \in \mathcal{N}(i)\}\right)$$

where $e_{ji}$ is the scalar weight on the edge from node $j$ to node $i$.
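As a reference point, here is a minimal NumPy sketch of the update rule above with the 'mean' aggregator; the names sage_mean_update, W, and sigma are illustrative only and are not part of the layer's API:

    import numpy as np

    def sage_mean_update(h, src, dst, e, W, sigma):
        """h: (N, D_in) node features; src/dst: (E,) edge endpoints;
        e: (E,) edge weights; W: (2*D_in, D_out); sigma: activation."""
        n, d = h.shape
        agg = np.zeros((n, d))
        deg = np.zeros((n, 1))
        for j, i, w in zip(src, dst, e):
            agg[i] += w * h[j]       # accumulate weighted neighbor features
            deg[i] += 1.0
        agg /= np.maximum(deg, 1.0)  # mean over the neighborhood N(i)
        # concatenate self and aggregated features, project, then activate
        return sigma(np.concatenate([h, agg], axis=1) @ W)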
- Parameters
in_feat_size (int) – Input node feature size.
out_feat_size (int) – Output node feature size.
aggregator_type (str, optional) – Type of aggregator; must be one of 'pool', 'lstm' and 'mean'. Default: 'pool'.
bias (bool, optional) – Whether to use a bias. Default: True.
norm (Cell, optional) – Normalization function Cell. Default: None.
activation (Cell, optional) – Activation function Cell. Default: None. A construction sketch follows this list.
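A hedged construction sketch showing the optional arguments; the choice of nn.LayerNorm as the norm Cell is an assumption for illustration (any Cell mapping the output features back to the same shape should fit):

    import mindspore.nn as nn
    from mindspore_gl.nn import SAGEConv

    conv = SAGEConv(in_feat_size=16, out_feat_size=8,
                    aggregator_type='mean',   # or 'pool' (default), 'lstm'
                    bias=True,
                    norm=nn.LayerNorm((8,)),  # assumed: applied to the output features
                    activation=nn.ReLU())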
- Inputs:
x (Tensor) - The input node features. The shape is $(N, D_{in})$ where $N$ is the number of nodes and $D_{in}$ could be of any shape.
edge_weight (Tensor) - Edge weights. The shape is $(N_e,)$ where $N_e$ is the number of edges.
g (Graph) - The input graph.
- Outputs:
Tensor, the output feature of shape $(N, D_{out})$, where $N$ is the number of nodes and $D_{out}$ could be of any shape.
- Raises
KeyError – If aggregator_type is not 'pool', 'lstm' or 'mean'.
- Supported Platforms:
Ascend
GPU
Examples
>>> import mindspore as ms
>>> from mindspore import nn
>>> from mindspore_gl.nn import SAGEConv
>>> from mindspore_gl import GraphField
>>> n_nodes = 4
>>> n_edges = 7
>>> feat_size = 4
>>> src_idx = ms.Tensor([0, 1, 1, 2, 2, 3, 3], ms.int32)
>>> dst_idx = ms.Tensor([0, 0, 2, 1, 3, 0, 1], ms.int32)
>>> ones = ms.ops.Ones()
>>> feat = ones((n_nodes, feat_size), ms.float32)
>>> graph_field = GraphField(src_idx, dst_idx, n_nodes, n_edges)
>>> sageconv = SAGEConv(in_feat_size=4, out_feat_size=2, activation=nn.ReLU())
>>> edge_weight = ones((n_edges, 1), ms.float32)
>>> res = sageconv(feat, edge_weight, *graph_field.get_graph())
>>> print(res.shape)
(4, 2)
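For context, a minimal sketch of stacking two SAGEConv layers inside a model Cell, assuming the same (feat, edge_weight, *graph) calling convention as the example above; SageNet is a hypothetical name:

    import mindspore.nn as nn
    from mindspore_gl.nn import SAGEConv

    class SageNet(nn.Cell):
        def __init__(self, in_size, hidden_size, out_size):
            super().__init__()
            self.conv1 = SAGEConv(in_size, hidden_size, activation=nn.ReLU())
            self.conv2 = SAGEConv(hidden_size, out_size)

        def construct(self, x, edge_weight, *graph):
            # both layers consume the same graph structure and edge weights
            h = self.conv1(x, edge_weight, *graph)
            return self.conv2(h, edge_weight, *graph)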