
mindspore_gl.nn

APIs for graph convolutions.

class mindspore_gl.nn.APPNPConv(k: int, alpha: float, edge_drop=1.0)[source]

Approximate Personalization Propagation in Neural Prediction Layers. From the paper Predict then Propagate: Graph Neural Networks meet Personalized PageRank.

H^{0} = X

H^{l+1} = (1-\alpha)\left(\tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}H^{l}\right) + \alpha H^{0}

where \tilde{A} = A + I and \tilde{D} is the diagonal degree matrix of \tilde{A}.
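The propagation rule above can be sketched in plain NumPy. This is a hypothetical helper for illustration only, not the library implementation; it omits edge_drop and assumes a dense adjacency matrix:

```python
import numpy as np

def appnp_propagate(x, adj, k, alpha):
    """Iterate H^{l+1} = (1 - alpha) * A_hat @ H^l + alpha * H^0,
    where A_hat = D~^{-1/2} (A + I) D~^{-1/2}."""
    a_tilde = adj + np.eye(adj.shape[0])       # A~ = A + I (add self-loops)
    deg = a_tilde.sum(axis=1)                  # degrees of A~
    d_inv_sqrt = np.diag(deg ** -0.5)          # D~^{-1/2}
    a_hat = d_inv_sqrt @ a_tilde @ d_inv_sqrt  # symmetrically normalized adjacency
    h = x.copy()                               # H^0 = X
    for _ in range(k):
        h = (1 - alpha) * (a_hat @ h) + alpha * x
    return h
```

With alpha=1.0 the teleport term dominates every update, so the input features are returned unchanged; smaller alpha mixes in more of the smoothed neighbourhood signal.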

Parameters
  • k (int) – Number of iterations.

  • alpha (float) – Transmission probability.

  • edge_drop (float) – The drop rate on the edge of messages received by each node. Default: 1.0.

Inputs:
  • x (Tensor): The input node features. The shape is (N, *) where N is the number of nodes and * could be of any shape.

  • in_deg (Tensor): In degree for nodes. The shape is (N,) where N is the number of nodes.

  • out_deg (Tensor): Out degree for nodes. The shape is (N,) where N is the number of nodes.

  • g (Graph): The input graph.

Outputs:

Tensor, the output feature of shape (N, *), where * should be the same as the input shape.

Raises
  • TypeError – If k is not an int.

  • TypeError – If alpha or edge_drop is not a float.

  • ValueError – If alpha is not in range [0.0, 1.0].

  • ValueError – If edge_drop is not in range (0.0, 1.0].

Examples

>>> import mindspore as ms
>>> from mindspore_gl.nn.conv import APPNPConv
>>> from mindspore_gl import GraphField
>>> n_nodes = 4
>>> n_edges = 7
>>> feat_size = 4
>>> src_idx = ms.Tensor([0, 1, 1, 2, 2, 3, 3], ms.int32)
>>> dst_idx = ms.Tensor([0, 0, 2, 1, 3, 0, 1], ms.int32)
>>> ones = ms.ops.Ones()
>>> feat = ones((n_nodes, feat_size), ms.float32)
>>> graph_field = GraphField(src_idx, dst_idx, n_nodes, n_edges)
>>> in_degree = ms.Tensor([3, 2, 1, 1], ms.int32)
>>> out_degree = ms.Tensor([1, 2, 1, 2], ms.int32)
>>> appnpconv = APPNPConv(k=3, alpha=0.5, edge_drop=1.0)
>>> res = appnpconv(feat, in_degree, out_degree, *graph_field.get_graph())
>>> print(res.shape)
(4, 4)
class mindspore_gl.nn.GATConv(in_feat_size: int, out_size: int, num_attn_head: int, input_drop_out_rate: float = 1.0, attn_drop_out_rate: float = 1.0, leaky_relu_slope: float = 0.2, activation=None, add_norm=False)[source]

Graph Attention Network, from the paper Graph Attention Networks.

h_i^{(l+1)} = \sum_{j\in\mathcal{N}(i)} \alpha_{i,j} W^{(l)} h_j^{(l)}

\alpha_{i,j} represents the attention score between node i and node j:

\alpha_{ij}^{l} = \mathrm{softmax}_i(e_{ij}^{l}), \quad e_{ij}^{l} = \mathrm{LeakyReLU}\left(a^{T}\left[W h_i \,\Vert\, W h_j\right]\right)
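The attention equations can be sketched for a single head in plain NumPy. gat_head is a hypothetical illustration, not the library API; it assumes a (src, dst) edge list and omits dropout and multi-head concatenation:

```python
import numpy as np

def gat_head(x, edges, w, a, slope=0.2):
    """Single attention head: e_ij = LeakyReLU(a^T [W h_i || W h_j]),
    alpha_ij = softmax over i's neighbours, h_i' = sum_j alpha_ij W h_j."""
    z = x @ w                                       # W h_j for every node
    out = np.zeros_like(z)
    for i in range(z.shape[0]):
        nbrs = [s for (s, d) in edges if d == i]    # incoming neighbours of i
        if not nbrs:
            continue
        scores = np.array([np.concatenate([z[i], z[j]]) @ a for j in nbrs])
        scores = np.where(scores > 0, scores, slope * scores)  # LeakyReLU
        alpha = np.exp(scores - scores.max())
        alpha /= alpha.sum()                        # softmax over neighbours
        out[i] = alpha @ z[nbrs]                    # weighted sum of messages
    return out
```

The softmax normalizes attention only over each node's incoming neighbours, so nodes with different degrees still receive convex combinations of their neighbours' transformed features.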
Parameters
  • in_feat_size (int) – Input node feature size.

  • out_size (int) – Output node feature size.

  • num_attn_head (int) – Number of attention heads used in GAT.

  • input_drop_out_rate (float) – Input drop out rate. Default: 1.0.

  • attn_drop_out_rate (float) – Attention drop out rate. Default: 1.0.

  • leaky_relu_slope (float) – Slope for leaky relu. Default: 0.2.

  • activation (Cell) – Activation function. Default: None.

  • add_norm (bool) – Whether the edge information needs normalization or not. Default: False.

Inputs:
  • x (Tensor) - The input node features. The shape is (N, D_in) where N is the number of nodes and D_in should be equal to in_feat_size in Args.

  • g (Graph) - The input graph.

Outputs:

Tensor, the output feature of shape (N, D_out), where D_out should be equal to out_size * num_attn_head.

Raises
  • TypeError – If in_feat_size, out_size, or num_attn_head is not an int.

  • TypeError – If input_drop_out_rate, attn_drop_out_rate, or leaky_relu_slope is not a float.

  • TypeError – If activation is not a Cell.

  • ValueError – If input_drop_out_rate or attn_drop_out_rate is not in range (0.0, 1.0].

Examples

>>> import mindspore as ms
>>> from mindspore_gl.nn.conv import GATConv
>>> from mindspore_gl import GraphField
>>> n_nodes = 4
>>> n_edges = 7
>>> feat_size = 4
>>> src_idx = ms.Tensor([0, 1, 1, 2, 2, 3, 3], ms.int32)
>>> dst_idx = ms.Tensor([0, 0, 2, 1, 3, 0, 1], ms.int32)
>>> ones = ms.ops.Ones()
>>> feat = ones((n_nodes, feat_size), ms.float32)
>>> graph_field = GraphField(src_idx, dst_idx, n_nodes, n_edges)
>>> gatconv = GATConv(in_feat_size=4, out_size=2, num_attn_head=3)
>>> res = gatconv(feat, *graph_field.get_graph())
>>> print(res.shape)
(4, 6)
class mindspore_gl.nn.GCNConv(in_feat_size: int, out_size: int, activation=None, dropout=0.5)[source]

Graph Convolution Network Layer, from the paper Semi-Supervised Classification with Graph Convolutional Networks.

h_i^{(l+1)} = \sigma\left(b^{(l)} + \sum_{j\in\mathcal{N}(i)} \frac{1}{c_{ji}} h_j^{(l)} W^{(l)}\right)

\mathcal{N}(i) represents the neighbour nodes of i, and c_{ji} = \sqrt{|\mathcal{N}(j)|}\,\sqrt{|\mathcal{N}(i)|}. If edge weights e_{ji} are given, the update becomes:

h_i^{(l+1)} = \sigma\left(b^{(l)} + \sum_{j\in\mathcal{N}(i)} \frac{e_{ji}}{c_{ji}} h_j^{(l)} W^{(l)}\right)
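The update can be sketched in plain NumPy. gcn_layer is a hypothetical illustration, not the library implementation; activation, dropout, and edge weights are omitted, and the degree tensors stand in for |N(j)| and |N(i)|:

```python
import numpy as np

def gcn_layer(x, edges, in_deg, out_deg, w, b):
    """h_i' = b + sum_{j in N(i)} (1 / c_ji) * h_j W,
    with c_ji = sqrt(|N(j)|) * sqrt(|N(i)|)."""
    h = x @ w                                   # h_j W for every node
    out = np.tile(b, (h.shape[0], 1)).astype(float)
    for src, dst in edges:                      # one message per edge src -> dst
        c = np.sqrt(out_deg[src]) * np.sqrt(in_deg[dst])
        out[dst] += h[src] / c                  # degree-normalized aggregation
    return out
```

The symmetric normalization by c_ji keeps high-degree nodes from dominating the aggregated features, which is the key difference from a plain adjacency-matrix multiply.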
Parameters
  • in_feat_size (int) – Input node feature size.

  • out_size (int) – Output node feature size.

  • activation (Cell) – Activation function. Default: None.

  • dropout (float) – The keep rate, greater than 0 and less than or equal to 1. E.g. dropout=0.9 keeps 90% of input units and drops 10%. Default: 0.5.

Inputs:
  • x (Tensor) - The input node features. The shape is (N,Din) where N is the number of nodes, and Din should be equal to in_feat_size in Args.

  • in_deg (Tensor) - In degree for nodes. The shape is (N,) where N is the number of nodes.

  • out_deg (Tensor) - Out degree for nodes. The shape is (N,) where N is the number of nodes.

  • g (Graph) - The input graph.

Outputs:

Tensor, output node features with shape of (N, D_out), where D_out should be the same as out_size in Args.

Raises
  • TypeError – If in_feat_size or out_size is not an int.

  • TypeError – If dropout is not a float.

  • TypeError – If activation is not a Cell.

  • ValueError – If dropout is not in range (0.0, 1.0].

Supported Platforms:

GPU

Examples

>>> import mindspore as ms
>>> from mindspore_gl.nn.conv import GCNConv
>>> from mindspore_gl import GraphField
>>> n_nodes = 4
>>> n_edges = 7
>>> feat_size = 4
>>> src_idx = ms.Tensor([0, 1, 1, 2, 2, 3, 3], ms.int32)
>>> dst_idx = ms.Tensor([0, 0, 2, 1, 3, 0, 1], ms.int32)
>>> ones = ms.ops.Ones()
>>> feat = ones((n_nodes, feat_size), ms.float32)
>>> graph_field = GraphField(src_idx, dst_idx, n_nodes, n_edges)
>>> in_degree = ms.Tensor([3, 2, 1, 1], ms.int32)
>>> out_degree = ms.Tensor([1, 2, 1, 2], ms.int32)
>>> gcnconv = GCNConv(in_feat_size=4, out_size=2, activation=None, dropout=1.0)
>>> res = gcnconv(feat, in_degree, out_degree, *graph_field.get_graph())
>>> print(res.shape)
(4, 2)
class mindspore_gl.nn.GNNCell[source]

GNN Cell class.

The construct function will be translated by default.

static disable_display()[source]

Disable display code comparison.

static enable_display(screen_width=200)[source]

Enable display code comparison.

Parameters

screen_width (int) – Determines the screen width on which the code is displayed.