
mindspore.nn

Neural network cells.

Pre-defined building blocks or computing units to construct neural networks.

Cell

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.Cell | Base class for all neural networks. | Ascend GPU CPU |
| mindspore.nn.GraphKernel | Base class for GraphKernel. | To Be Developed |
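As a hedged sketch of how the base class is used (SimpleNet and its layer sizes are illustrative, not part of the table above), a network subclasses nn.Cell, registers layers in __init__, and implements the forward pass in construct:

import mindspore.nn as nn

class SimpleNet(nn.Cell):
    """Hypothetical example network; any layer configuration works."""
    def __init__(self):
        super(SimpleNet, self).__init__()
        self.conv = nn.Conv2d(1, 6, 5)       # 1 input channel, 6 output channels, 5x5 kernel
        self.relu = nn.ReLU()
        self.flatten = nn.Flatten()
        self.fc = nn.Dense(6 * 32 * 32, 10)  # assumes 32x32 inputs with default 'same' padding

    def construct(self, x):
        # construct defines the forward computation
        x = self.relu(self.conv(x))
        x = self.flatten(x)
        return self.fc(x)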

Containers

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.CellList | Holds Cells in a list. | Ascend GPU |
| mindspore.nn.SequentialCell | Sequential cell container. | Ascend GPU |
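A brief usage sketch (layer sizes illustrative): SequentialCell invokes its cells in order, while CellList only stores cells, leaving the call order to construct:

import mindspore.nn as nn

block = nn.SequentialCell([nn.Conv2d(3, 16, 3), nn.ReLU()])  # called as block(x)
layers = nn.CellList([nn.Dense(16, 16) for _ in range(3)])   # iterated manually in construct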

Convolution Layers

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.Conv1d | 1D convolution layer. | Ascend GPU |
| mindspore.nn.Conv1dTranspose | 1D transposed convolution layer. | Ascend GPU |
| mindspore.nn.Conv2d | 2D convolution layer. | Ascend GPU CPU |
| mindspore.nn.Conv2dTranspose | 2D transposed convolution layer. | Ascend GPU |
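A minimal sketch for the 2D case, assuming NCHW inputs (the shapes are illustrative):

import numpy as np
import mindspore
import mindspore.nn as nn
from mindspore import Tensor

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
x = Tensor(np.ones((1, 3, 32, 32)), mindspore.float32)  # (N, C, H, W)
y = conv(x)  # default pad_mode='same' preserves the 32x32 spatial size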

Recurrent Layers

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.LSTMCell | LSTM (Long Short-Term Memory) layer. | GPU CPU |
| mindspore.nn.LSTM | Stacked LSTM (Long Short-Term Memory) layers. | Ascend GPU |
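A minimal sketch for the stacked LSTM (shapes illustrative): construct takes the input sequence and an (h0, c0) state tuple, and returns the output sequence with the final states:

import numpy as np
import mindspore
import mindspore.nn as nn
from mindspore import Tensor

lstm = nn.LSTM(input_size=10, hidden_size=16, num_layers=1, batch_first=True)
x = Tensor(np.ones((3, 5, 10)), mindspore.float32)    # (batch, seq_len, input_size)
h0 = Tensor(np.zeros((1, 3, 16)), mindspore.float32)  # (num_layers * num_directions, batch, hidden_size)
c0 = Tensor(np.zeros((1, 3, 16)), mindspore.float32)
output, (hn, cn) = lstm(x, (h0, c0))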

Sparse Layers

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.Embedding | A simple lookup table that stores embeddings of a fixed dictionary and size. | Ascend GPU |
| mindspore.nn.EmbeddingLookup | Returns a slice of the input tensor based on the specified indices. | Ascend CPU |
| mindspore.nn.MultiFieldEmbeddingLookup | Returns a slice of the input tensor based on the specified indices and field IDs. | Ascend GPU |
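A hedged sketch of the basic lookup table (vocabulary size and dimensions are illustrative):

import numpy as np
import mindspore
import mindspore.nn as nn
from mindspore import Tensor

embedding = nn.Embedding(vocab_size=2000, embedding_size=64)
ids = Tensor(np.array([[1, 5, 9]]), mindspore.int32)
vectors = embedding(ids)  # (1, 3, 64): one 64-dimensional vector per index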

Non-linear Activations

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.ELU | Exponential Linear Unit activation function. | Ascend GPU |
| mindspore.nn.FastGelu | Fast Gaussian error linear unit activation function. | Ascend |
| mindspore.nn.GELU | Gaussian error linear unit activation function. | Ascend GPU |
| mindspore.nn.get_activation | Gets the activation function. | To Be Developed |
| mindspore.nn.HSigmoid | Hard sigmoid activation function. | GPU |
| mindspore.nn.HSwish | Hard swish activation function. | GPU |
| mindspore.nn.LeakyReLU | Leaky ReLU activation function. | Ascend GPU |
| mindspore.nn.LogSigmoid | Log-sigmoid activation function. | Ascend GPU |
| mindspore.nn.LogSoftmax | LogSoftmax activation function. | Ascend GPU |
| mindspore.nn.PReLU | PReLU activation function. | Ascend |
| mindspore.nn.ReLU | Rectified Linear Unit activation function. | Ascend GPU CPU |
| mindspore.nn.ReLU6 | Computes the ReLU6 activation function. | Ascend GPU CPU |
| mindspore.nn.Sigmoid | Sigmoid activation function. | Ascend GPU CPU |
| mindspore.nn.Softmax | Softmax activation function. | Ascend GPU CPU |
| mindspore.nn.Tanh | Tanh activation function. | Ascend GPU CPU |
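These cells share the same calling pattern; a brief sketch (the input values are illustrative):

import numpy as np
import mindspore
import mindspore.nn as nn
from mindspore import Tensor

relu = nn.ReLU()
x = Tensor(np.array([-1.0, 0.0, 2.0]), mindspore.float32)
y = relu(x)  # [0.0, 0.0, 2.0]
softmax = nn.get_activation('softmax')  # look an activation cell up by name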

Utilities

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.ClipByNorm | Clips tensor values to a maximum L2-norm. | Ascend GPU |
| mindspore.nn.Dense | The fully connected (dense) layer. | Ascend GPU CPU |
| mindspore.nn.Dropout | Dropout layer for the input. | Ascend GPU CPU |
| mindspore.nn.Flatten | Flatten layer for the input. | Ascend GPU CPU |
| mindspore.nn.L1Regularizer | Applies L1 regularization to weights. | Ascend GPU CPU |
| mindspore.nn.Norm | Computes the norm of vectors, currently including the Euclidean norm (L2-norm). | Ascend GPU |
| mindspore.nn.OneHot | Returns a one-hot tensor. | Ascend GPU CPU |
| mindspore.nn.Pad | Pads the input tensor according to the paddings and mode. | Ascend GPU |
| mindspore.nn.Range | Creates a sequence of numbers in the range [start, limit) with step size delta. | Ascend |
| mindspore.nn.ResizeBilinear | Resamples the input tensor to the given size or scale_factor using bilinear interpolation. | Ascend |
| mindspore.nn.Tril | Returns a tensor with elements above the kth diagonal zeroed. | Ascend GPU CPU |
| mindspore.nn.Triu | Returns a tensor with elements below the kth diagonal zeroed. | Ascend GPU CPU |
| mindspore.nn.Unfold | Extracts patches from images. | Ascend |
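A short sketch combining a few of these utilities (shapes and keep_prob are illustrative):

import numpy as np
import mindspore
import mindspore.nn as nn
from mindspore import Tensor

x = Tensor(np.ones((2, 3, 4)), mindspore.float32)
flatten = nn.Flatten()
dense = nn.Dense(12, 5)
dropout = nn.Dropout(keep_prob=0.9)  # keep_prob is the probability of keeping each element
y = dense(flatten(x))  # (2, 3, 4) -> (2, 12) -> (2, 5)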

Images Functions

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.CentralCrop | Crops the central region of the images with the central_fraction. | Ascend GPU CPU |
| mindspore.nn.ImageGradients | Returns two tensors, the first along the height dimension and the second along the width dimension. | Ascend GPU |
| mindspore.nn.MSSSIM | Returns the MS-SSIM index between two images. | Ascend |
| mindspore.nn.PSNR | Returns the Peak Signal-to-Noise Ratio of two image batches. | Ascend GPU |
| mindspore.nn.SSIM | Returns the SSIM index between two images. | Ascend GPU |
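A hedged sketch for PSNR (the random images are only placeholders; max_val=1.0 assumes inputs scaled to [0, 1]):

import numpy as np
import mindspore.nn as nn
from mindspore import Tensor

psnr = nn.PSNR(max_val=1.0)
img1 = Tensor(np.random.rand(1, 3, 16, 16).astype(np.float32))
img2 = Tensor(np.random.rand(1, 3, 16, 16).astype(np.float32))
score = psnr(img1, img2)  # one PSNR value per image in the batch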

Normalization Layers

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.BatchNorm1d | Batch normalization layer over a 2D input. | Ascend GPU |
| mindspore.nn.BatchNorm2d | Batch normalization layer over a 4D input. | Ascend GPU CPU |
| mindspore.nn.GlobalBatchNorm | Global normalization layer over an N-dimensional input. | Ascend |
| mindspore.nn.GroupNorm | Group normalization over a mini-batch of inputs. | Ascend GPU |
| mindspore.nn.LayerNorm | Applies layer normalization over a mini-batch of inputs. | Ascend GPU |
| mindspore.nn.MatrixDiag | Returns a batched diagonal tensor with given batched diagonal values. | Ascend |
| mindspore.nn.MatrixDiagPart | Returns the batched diagonal part of a batched tensor. | Ascend |
| mindspore.nn.MatrixSetDiag | Modifies the batched diagonal part of a batched tensor. | Ascend |
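A minimal BatchNorm2d sketch (shapes illustrative); num_features must match the channel dimension of the NCHW input:

import numpy as np
import mindspore
import mindspore.nn as nn
from mindspore import Tensor

bn = nn.BatchNorm2d(num_features=16)
x = Tensor(np.ones((1, 16, 8, 8)), mindspore.float32)
y = bn(x)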

Pooling Layers

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.AvgPool1d | 1D average pooling for temporal data. | Ascend |
| mindspore.nn.AvgPool2d | 2D average pooling for spatial data. | Ascend GPU |
| mindspore.nn.MaxPool1d | 1D max pooling operation for temporal data. | Ascend |
| mindspore.nn.MaxPool2d | 2D max pooling operation for spatial data. | Ascend GPU CPU |
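A minimal sketch (shapes illustrative); with kernel_size=2 and stride=2, the spatial size halves:

import numpy as np
import mindspore
import mindspore.nn as nn
from mindspore import Tensor

pool = nn.MaxPool2d(kernel_size=2, stride=2)
x = Tensor(np.ones((1, 16, 8, 8)), mindspore.float32)
y = pool(x)  # (1, 16, 4, 4)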

Quantized Functions

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.ActQuant | Quantization-aware training activation function. | Ascend GPU |
| mindspore.nn.Conv2dBnAct | A combination of convolution, batch normalization, and activation layers. | Ascend GPU |
| mindspore.nn.Conv2dBnFoldQuant | 2D convolution with a folded BatchNorm operation. | Ascend GPU |
| mindspore.nn.Conv2dBnFoldQuantOneConv | 2D convolution that uses the convolution layer statistics once to calculate the folded BatchNorm operation. | To Be Developed |
| mindspore.nn.Conv2dBnWithoutFoldQuant | 2D convolution and unfolded BatchNorm with fake quantization. | Ascend GPU |
| mindspore.nn.Conv2dQuant | 2D convolution with a fake quantized operation layer. | Ascend GPU |
| mindspore.nn.DenseBnAct | A combination of Dense, batch normalization, and activation layers. | Ascend |
| mindspore.nn.DenseQuant | The fully connected layer with fake quantized operation. | Ascend GPU |
| mindspore.nn.FakeQuantWithMinMaxObserver | Quantization-aware operation that provides the fake quantization observer function on data with min and max. | To Be Developed |
| mindspore.nn.MulQuant | Adds a fake quantized operation after the Mul operation. | Ascend GPU |
| mindspore.nn.TensorAddQuant | Adds a fake quantized operation after the TensorAdd operation. | Ascend GPU |

Loss Functions

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.BCELoss | Creates a criterion to measure the binary cross entropy between the true labels and predicted labels. | Ascend GPU |
| mindspore.nn.CosineEmbeddingLoss | Computes the similarity between two tensors using cosine distance. | Ascend GPU |
| mindspore.nn.L1Loss | Creates a criterion to measure the mean absolute error (MAE) between x and y element-wise, where x is the input tensor and y is the target tensor. | Ascend GPU |
| mindspore.nn.MSELoss | Creates a criterion to measure the mean squared error (squared L2-norm) between x and y element-wise, where x is the input and y is the target. | Ascend GPU |
| mindspore.nn.SampledSoftmaxLoss | Computes the sampled softmax training loss. | GPU |
| mindspore.nn.SmoothL1Loss | A loss class for learning region proposals. | Ascend GPU CPU |
| mindspore.nn.SoftmaxCrossEntropyWithLogits | Computes softmax cross entropy between logits and labels. | Ascend GPU CPU |
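A hedged sketch for the softmax cross-entropy loss (the logits and labels are placeholders); sparse=True lets labels be class indices rather than one-hot vectors:

import numpy as np
import mindspore
import mindspore.nn as nn
from mindspore import Tensor

loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
logits = Tensor(np.random.randn(3, 10).astype(np.float32))
labels = Tensor(np.array([1, 0, 4]), mindspore.int32)  # class indices
loss = loss_fn(logits, labels)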

Optimizer Functions

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.Adagrad | Implements the Adagrad algorithm with the ApplyAdagrad operator. | Ascend CPU GPU |
| mindspore.nn.Adam | Updates gradients by the Adaptive Moment Estimation (Adam) algorithm. | Ascend GPU |
| mindspore.nn.AdamOffload | Offloads the Adam optimizer to the host CPU while keeping the parameters updated on the device, to minimize memory cost. | Ascend GPU CPU |
| mindspore.nn.AdamWeightDecay | Implements the Adam algorithm with weight decay. | Ascend GPU |
| mindspore.nn.FTRL | Implements the FTRL algorithm with the ApplyFtrl operator. | Ascend GPU |
| mindspore.nn.Lamb | Implements the Lamb algorithm with dynamic learning rate. | Ascend GPU |
| mindspore.nn.LARS | Implements the LARS algorithm with the LARSUpdate operator. | Ascend |
| mindspore.nn.LazyAdam | Applies a lazy Adam algorithm when the gradients are sparse. | Ascend |
| mindspore.nn.Momentum | Implements the Momentum algorithm. | Ascend GPU CPU |
| mindspore.nn.Optimizer | Base class for all optimizers. | Ascend GPU |
| mindspore.nn.ProximalAdagrad | Implements the ProximalAdagrad algorithm with the ApplyProximalAdagrad operator. | Ascend |
| mindspore.nn.RMSProp | Implements the Root Mean Squared Propagation (RMSProp) algorithm. | Ascend GPU |
| mindspore.nn.SGD | Implements stochastic gradient descent. | Ascend GPU |
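A minimal setup sketch (nn.Dense stands in for any user network so the example stays self-contained; the hyperparameters are illustrative):

import mindspore.nn as nn

net = nn.Dense(10, 2)  # any nn.Cell exposing trainable_params() works here
optim = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)
# or, with an adaptive method:
optim = nn.Adam(net.trainable_params(), learning_rate=1e-3)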

Wrapper Functions

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.DistributedGradReducer | Reduces gradients across devices during distributed training. | Ascend GPU |
| mindspore.nn.DynamicLossScaleUpdateCell | Dynamic loss scale update cell. | Ascend GPU |
| mindspore.nn.FixedLossScaleUpdateCell | Static loss scale update cell; the loss scaling value is not updated. | Ascend GPU |
| mindspore.nn.GetNextSingleOp | Cell to run for getting the next operation. | Ascend GPU |
| mindspore.nn.ParameterUpdate | Cell that updates parameters. | Ascend |
| mindspore.nn.TrainOneStepCell | Network training package class. | Ascend GPU |
| mindspore.nn.TrainOneStepWithLossScaleCell | Network training with loss scaling. | Ascend GPU |
| mindspore.nn.WithEvalCell | Cell that returns loss, output and label for evaluation. | Ascend GPU |
| mindspore.nn.WithGradCell | Cell that returns the gradients. | Ascend GPU |
| mindspore.nn.WithLossCell | Cell with a loss function. | Ascend GPU |
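A hedged sketch of how the training wrappers compose (the single Dense layer and the random batch are placeholders for a real network and dataset):

import numpy as np
import mindspore
import mindspore.nn as nn
from mindspore import Tensor

net = nn.Dense(10, 3)
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
optim = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

net_with_loss = nn.WithLossCell(net, loss_fn)           # computes loss from (data, label)
train_step = nn.TrainOneStepCell(net_with_loss, optim)  # one forward-backward-update per call
train_step.set_train()

data = Tensor(np.random.randn(4, 10).astype(np.float32))
label = Tensor(np.array([0, 1, 2, 1]), mindspore.int32)
loss = train_step(data, label)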

Math Functions

| API Name | Description | Supported Platforms |
| --- | --- | --- |
| mindspore.nn.DiGamma | Calculates the digamma function using Lanczos' approximation, following "A Precision Approximation of the Gamma Function". | Ascend GPU |
| mindspore.nn.IGamma | Calculates the lower regularized incomplete gamma function. | Ascend GPU |
| mindspore.nn.LBeta | Semantically equal to lgamma(x) + lgamma(y) - lgamma(x + y). | Ascend GPU |
| mindspore.nn.LGamma | Calculates LGamma using Lanczos' approximation, following "A Precision Approximation of the Gamma Function". | Ascend GPU |
| mindspore.nn.MatDet | Calculates the determinant of a positive-definite Hermitian matrix using Cholesky decomposition. | GPU |
| mindspore.nn.MatInverse | Calculates the inverse of a positive-definite Hermitian matrix using Cholesky decomposition. | GPU |
| mindspore.nn.MatMul | Multiplies matrix x1 by matrix x2. | Ascend GPU CPU |
| mindspore.nn.Moments | Calculates the mean and variance of x. | Ascend |
| mindspore.nn.ReduceLogSumExp | Reduces a dimension of a tensor by taking the exponential of all elements in the dimension, then the logarithm of the sum. | Ascend GPU |
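A minimal MatMul sketch (shapes illustrative):

import numpy as np
import mindspore
import mindspore.nn as nn
from mindspore import Tensor

matmul = nn.MatMul()
x1 = Tensor(np.ones((2, 3)), mindspore.float32)
x2 = Tensor(np.ones((3, 4)), mindspore.float32)
y = matmul(x1, x2)  # (2, 4)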

Metrics

| API Name | Description |
| --- | --- |
| mindspore.nn.Accuracy | Calculates the accuracy for classification and multilabel data. |
| mindspore.nn.F1 | Calculates the F1 score. |
| mindspore.nn.Fbeta | Calculates the fbeta score. |
| mindspore.nn.get_metric_fn | Gets the metric method based on the input name. |
| mindspore.nn.Loss | Calculates the average of the loss. |
| mindspore.nn.MAE | Calculates the mean absolute error. |
| mindspore.nn.Metric | Base class of metrics. |
| mindspore.nn.MSE | Measures the mean squared error. |
| mindspore.nn.names | Gets the names of the metric methods. |
| mindspore.nn.Precision | Calculates precision for classification and multilabel data. |
| mindspore.nn.Recall | Calculates recall for classification and multilabel data. |
| mindspore.nn.Top1CategoricalAccuracy | Calculates the top-1 categorical accuracy. |
| mindspore.nn.Top5CategoricalAccuracy | Calculates the top-5 categorical accuracy. |
| mindspore.nn.TopKCategoricalAccuracy | Calculates the top-k categorical accuracy. |
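A hedged sketch of the common metric workflow, clear, update, eval (the predictions here are placeholders):

import numpy as np
import mindspore
import mindspore.nn as nn
from mindspore import Tensor

metric = nn.Accuracy('classification')
metric.clear()
preds = Tensor(np.array([[0.2, 0.5, 0.3], [0.9, 0.1, 0.0]]), mindspore.float32)
labels = Tensor(np.array([1, 0]), mindspore.int32)
metric.update(preds, labels)
acc = metric.eval()  # 1.0, since both argmax predictions match the labels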

Dynamic Learning Rate

LearningRateSchedule

The dynamic learning rates in this module are all subclasses of LearningRateSchedule. Pass an instance of LearningRateSchedule to an optimizer; during training, the optimizer calls the instance with the current step as input to get the current learning rate.

import mindspore.nn as nn

min_lr = 0.01
max_lr = 0.1
decay_steps = 4
cosine_decay_lr = nn.CosineDecayLR(min_lr, max_lr, decay_steps)

net = Net()  # Net is a user-defined nn.Cell
optim = nn.Momentum(net.trainable_params(), learning_rate=cosine_decay_lr, momentum=0.9)

| API Name | Description |
| --- | --- |
| mindspore.nn.CosineDecayLR | Calculates the learning rate based on the cosine decay function. |
| mindspore.nn.ExponentialDecayLR | Calculates the learning rate based on the exponential decay function. |
| mindspore.nn.InverseDecayLR | Calculates the learning rate based on the inverse-time decay function. |
| mindspore.nn.NaturalExpDecayLR | Calculates the learning rate based on the natural exponential decay function. |
| mindspore.nn.PolynomialDecayLR | Calculates the learning rate based on the polynomial decay function. |
| mindspore.nn.WarmUpLR | Gets the warm-up learning rate. |

Dynamic LR

The dynamic learning rates in this module are all functions. Call the function and pass the resulting list to an optimizer; during training, the optimizer uses result[current_step] as the current learning rate.

import mindspore.nn as nn

min_lr = 0.01
max_lr = 0.1
total_step = 6
step_per_epoch = 1
decay_epoch = 4

lr = nn.cosine_decay_lr(min_lr, max_lr, total_step, step_per_epoch, decay_epoch)

net = Net()  # Net is a user-defined nn.Cell
optim = nn.Momentum(net.trainable_params(), learning_rate=lr, momentum=0.9)

| API Name | Description |
| --- | --- |
| mindspore.nn.cosine_decay_lr | Calculates the learning rate based on the cosine decay function. |
| mindspore.nn.exponential_decay_lr | Calculates the learning rate based on the exponential decay function. |
| mindspore.nn.inverse_decay_lr | Calculates the learning rate based on the inverse-time decay function. |
| mindspore.nn.natural_exp_decay_lr | Calculates the learning rate based on the natural exponential decay function. |
| mindspore.nn.piecewise_constant_lr | Gets the piecewise constant learning rate. |
| mindspore.nn.polynomial_decay_lr | Calculates the learning rate based on the polynomial decay function. |
| mindspore.nn.warmup_lr | Gets the warm-up learning rate. |