mindspore.nn

Neural Network Cell

Predefined building blocks and computational units for constructing neural networks.

For more information about dynamic shape support status, please refer to Dynamic Shape Support Status of nn Interface.

For the mindspore.nn operators added to or removed from MindSpore since the previous version, and for changes to their supported platforms, please refer to mindspore.nn API Interface Change.

Basic Block

API Name

Description

Supported Platforms

mindspore.nn.Cell

The basic building block of neural networks in MindSpore.

Ascend GPU CPU

mindspore.nn.GraphCell

Base class for running the graph loaded from a MindIR.

Ascend GPU CPU

mindspore.nn.LossBase

Base class for other losses.

Ascend GPU CPU

mindspore.nn.Optimizer

Base class for updating parameters.

Ascend GPU CPU
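
As a minimal sketch of how the basic block is used, a network is defined by subclassing mindspore.nn.Cell and implementing construct (the class name MyNet and the layer sizes below are illustrative, not part of the API):

import mindspore.nn as nn

class MyNet(nn.Cell):
    def __init__(self):
        super().__init__()
        self.dense = nn.Dense(4, 2)   # a small fully connected layer
        self.relu = nn.ReLU()

    def construct(self, x):
        # construct() defines the forward computation of the Cell.
        return self.relu(self.dense(x))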

Container

API Name

Description

Supported Platforms

mindspore.nn.CellDict

Holds Cells in a dictionary.

Ascend GPU CPU

mindspore.nn.CellList

Holds Cells in a list.

Ascend GPU CPU

mindspore.nn.SequentialCell

Sequential Cell container.

Ascend GPU CPU
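
A short illustrative sketch (layer sizes are arbitrary): SequentialCell chains the contained Cells in order, so no construct method needs to be written:

import mindspore.nn as nn

# The Cells run in the order given; the output of one feeds the next.
block = nn.SequentialCell([nn.Dense(4, 8), nn.ReLU(), nn.Dense(8, 2)])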

Wrapper Layer

API Name

Description

Supported Platforms

mindspore.nn.DistributedGradReducer

A gradient reducer that aggregates gradients across devices in data-parallel training.

Ascend GPU

mindspore.nn.DynamicLossScaleUpdateCell

Dynamic loss scale update cell.

Ascend GPU

mindspore.nn.FixedLossScaleUpdateCell

Update cell with a fixed loss scaling value.

Ascend GPU

mindspore.nn.ForwardValueAndGrad

Encapsulates the training network and computes both the forward value and the gradients.

Ascend GPU CPU

mindspore.nn.GetNextSingleOp

Cell that runs the GetNext op to fetch the next data from the dataset pipeline.

Ascend GPU

mindspore.nn.GradAccumulationCell

Wraps the network with micro-batches to enable gradient accumulation in semi_auto_parallel/auto_parallel mode.

Ascend GPU

mindspore.nn.MicroBatchInterleaved

Splits the input along the 0th dimension into interleave_num pieces and then performs the computation of the wrapped cell.

Ascend GPU

mindspore.nn.ParameterUpdate

Cell that updates a parameter with the given input value.

Ascend GPU CPU

mindspore.nn.PipelineCell

Slices a mini-batch into finer-grained micro-batches for use in pipeline-parallel training.

Ascend GPU

mindspore.nn.PipelineGradReducer

PipelineGradReducer is a gradient reducer for pipeline parallelism.

Ascend GPU

mindspore.nn.TimeDistributed

The time distributed layer, a wrapper that applies a layer to every temporal slice of the input.

Ascend GPU CPU

mindspore.nn.TrainOneStepCell

Wraps the network with an optimizer to perform a single training step.

Ascend GPU CPU

mindspore.nn.TrainOneStepWithLossScaleCell

Network training with loss scaling.

Ascend GPU

mindspore.nn.WithEvalCell

Wraps the forward network with the loss function.

Ascend GPU CPU

mindspore.nn.WithLossCell

Cell that wraps the network with a loss function.

Ascend GPU CPU
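
The wrappers above compose as follows; this is a minimal sketch in which nn.Dense stands in for any user-defined network and the data is random:

import numpy as np
import mindspore as ms
import mindspore.nn as nn

net = nn.Dense(4, 2)                                    # stands in for any user-defined Cell
loss_fn = nn.MSELoss()
optim = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

net_with_loss = nn.WithLossCell(net, loss_fn)           # forward pass plus loss
train_step = nn.TrainOneStepCell(net_with_loss, optim)  # adds gradients and the update

data = ms.Tensor(np.random.rand(8, 4), ms.float32)      # dummy batch
label = ms.Tensor(np.random.rand(8, 2), ms.float32)
loss = train_step(data, label)                          # one training step; returns the loss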

Convolutional Layer

API Name

Description

Supported Platforms

mindspore.nn.Conv1d

1D convolution layer.

Ascend GPU CPU

mindspore.nn.Conv1dTranspose

Calculates a 1D transposed convolution, which can be regarded as Conv1d for the gradient of the input, also called deconvolution (although it is not an actual deconvolution).

Ascend GPU CPU

mindspore.nn.Conv2d

2D convolution layer.

Ascend GPU CPU

mindspore.nn.Conv2dTranspose

Calculates a 2D transposed convolution, which can be regarded as Conv2d for the gradient of the input, also called deconvolution (although it is not an actual deconvolution).

Ascend GPU CPU

mindspore.nn.Conv3d

3D convolution layer.

Ascend GPU CPU

mindspore.nn.Conv3dTranspose

Calculates a 3D transposed convolution, which can be regarded as Conv3d for the gradient of the input.

Ascend GPU CPU

mindspore.nn.Unfold

Extracts patches from images.

Ascend GPU
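
A minimal convolution sketch (sizes are illustrative; the output shape assumes the default 'same' pad_mode):

import numpy as np
import mindspore as ms
import mindspore.nn as nn

conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3)
x = ms.Tensor(np.ones([1, 3, 32, 32]), ms.float32)  # NCHW input
y = conv(x)                                         # (1, 16, 32, 32) with 'same' padding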

Recurrent Layer

API Name

Description

Supported Platforms

mindspore.nn.RNN

Stacked Elman RNN layers, applying an RNN with \(\tanh\) or \(\text{ReLU}\) non-linearity to the input.

Ascend GPU CPU

mindspore.nn.RNNCell

An Elman RNN cell with tanh or ReLU non-linearity.

Ascend GPU CPU

mindspore.nn.GRU

Stacked GRU (Gated Recurrent Unit) layers.

Ascend GPU CPU

mindspore.nn.GRUCell

A GRU (Gated Recurrent Unit) cell.

Ascend GPU CPU

mindspore.nn.LSTM

Stacked LSTM (Long Short-Term Memory) layers.

Ascend GPU CPU

mindspore.nn.LSTMCell

An LSTM (Long Short-Term Memory) cell.

Ascend GPU CPU
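
A minimal LSTM sketch (sizes are illustrative): the layer consumes a batch of sequences together with initial hidden and cell states:

import numpy as np
import mindspore as ms
import mindspore.nn as nn

lstm = nn.LSTM(input_size=10, hidden_size=16, num_layers=1, batch_first=True)
x = ms.Tensor(np.ones([3, 5, 10]), ms.float32)      # (batch, seq_len, input_size)
h0 = ms.Tensor(np.zeros([1, 3, 16]), ms.float32)    # (num_layers, batch, hidden_size)
c0 = ms.Tensor(np.zeros([1, 3, 16]), ms.float32)
output, (hn, cn) = lstm(x, (h0, c0))                # output: (3, 5, 16)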

Transformer Layer

API Name

Description

Supported Platforms

mindspore.nn.MultiheadAttention

An implementation of the multihead attention described in the paper Attention Is All You Need.

Ascend GPU CPU

mindspore.nn.TransformerEncoderLayer

Transformer Encoder Layer.

Ascend GPU CPU

mindspore.nn.TransformerDecoderLayer

Transformer Decoder Layer.

Ascend GPU CPU

mindspore.nn.TransformerEncoder

Transformer Encoder module consisting of multiple stacked mindspore.nn.TransformerEncoderLayer layers, each including multihead attention and a feedforward layer.

Ascend GPU CPU

mindspore.nn.TransformerDecoder

Transformer Decoder module consisting of multiple stacked mindspore.nn.TransformerDecoderLayer layers, each including multihead self-attention, cross-attention, and a feedforward layer.

Ascend GPU CPU

mindspore.nn.Transformer

Transformer module including encoder and decoder.

Ascend GPU CPU
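
A sketch of stacking encoder layers, assuming the PyTorch-style keyword arguments (d_model, nhead, num_layers) of these classes:

import numpy as np
import mindspore as ms
import mindspore.nn as nn

encoder_layer = nn.TransformerEncoderLayer(d_model=64, nhead=4)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
src = ms.Tensor(np.ones([10, 2, 64]), ms.float32)   # (seq_len, batch, d_model)
out = encoder(src)                                  # same shape as src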

Embedding Layer

API Name

Description

Supported Platforms

mindspore.nn.Embedding

A simple lookup table that stores embeddings of a fixed dictionary and size.

Ascend GPU CPU

mindspore.nn.EmbeddingLookup

EmbeddingLookup layer.

Ascend GPU CPU

mindspore.nn.MultiFieldEmbeddingLookup

Returns a slice of input tensor based on the specified indices and the field ids.

Ascend GPU
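
A minimal embedding sketch (vocabulary and embedding sizes are illustrative):

import numpy as np
import mindspore as ms
import mindspore.nn as nn

embed = nn.Embedding(vocab_size=1000, embedding_size=32)
ids = ms.Tensor(np.array([[1, 2, 3]]), ms.int32)    # token indices
vectors = embed(ids)                                # (1, 3, 32)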

Nonlinear Activation Layer

API Name

Description

Supported Platforms

mindspore.nn.CELU

CELU Activation Operator.

Ascend GPU CPU

mindspore.nn.ELU

Applies the exponential linear unit function element-wise.

Ascend GPU CPU

mindspore.nn.FastGelu

Applies the FastGelu function to each element of the input.

Ascend GPU CPU

mindspore.nn.GELU

Applies the GELU function to each element of the input.

Ascend GPU CPU

mindspore.nn.GLU

The gated linear unit function.

Ascend GPU CPU

mindspore.nn.get_activation

Gets the activation function.

Ascend GPU CPU

mindspore.nn.Hardtanh

Applies the Hardtanh function element-wise.

Ascend GPU CPU

mindspore.nn.HShrink

Applies Hard Shrink activation function element-wise.

Ascend GPU CPU

mindspore.nn.HSigmoid

Applies Hard sigmoid activation function element-wise.

Ascend GPU CPU

mindspore.nn.HSwish

Applies the Hard Swish (hswish) activation function element-wise.

Ascend GPU CPU

mindspore.nn.LeakyReLU

Leaky ReLU activation function.

Ascend GPU CPU

mindspore.nn.LogSigmoid

Applies logsigmoid activation element-wise.

Ascend GPU CPU

mindspore.nn.LogSoftmax

Applies the LogSoftmax function to an n-dimensional input tensor element-wise.

Ascend GPU CPU

mindspore.nn.LRN

Local Response Normalization.

GPU CPU

mindspore.nn.Mish

Computes MISH (A Self Regularized Non-Monotonic Neural Activation Function) of input tensors element-wise.

Ascend GPU CPU

mindspore.nn.Softsign

Applies softsign activation function element-wise.

Ascend GPU CPU

mindspore.nn.PReLU

Applies PReLU activation function element-wise.

Ascend GPU CPU

mindspore.nn.ReLU

Applies ReLU (Rectified Linear Unit activation function) element-wise.

Ascend GPU CPU

mindspore.nn.ReLU6

Computes the ReLU6 activation function element-wise.

Ascend GPU CPU

mindspore.nn.RReLU

Applies RReLU (Randomized Leaky ReLU activation function) element-wise.

Ascend GPU CPU

mindspore.nn.SeLU

Applies activation function SeLU (Scaled exponential Linear Unit) element-wise.

Ascend GPU CPU

mindspore.nn.SiLU

Applies the SiLU (Sigmoid Linear Unit) function element-wise.

Ascend GPU CPU

mindspore.nn.Sigmoid

Applies sigmoid activation function element-wise.

Ascend GPU CPU

mindspore.nn.Softmin

Softmin activation function, a generalization of the two-class function mindspore.nn.Sigmoid to multiple classes; it expresses multi-class results in the form of probabilities and is equivalent to Softmax applied to the negated input.

Ascend GPU CPU

mindspore.nn.Softmax

Softmax activation function, a generalization of the two-class function mindspore.nn.Sigmoid to multiple classes; it expresses multi-class results in the form of probabilities.

Ascend GPU CPU

mindspore.nn.Softmax2d

Applies the Softmax function to 2D feature data.

Ascend GPU CPU

mindspore.nn.SoftShrink

Applies the SoftShrink function element-wise.

Ascend GPU CPU

mindspore.nn.Tanh

Applies the Tanh function element-wise, returning a new tensor with the hyperbolic tangent of the elements of the input, which may be a Tensor of any valid shape.

Ascend GPU CPU

mindspore.nn.Tanhshrink

Applies Tanhshrink activation function element-wise and returns a new tensor.

Ascend GPU CPU

mindspore.nn.Threshold

Thresholds each element of the input Tensor.

Ascend GPU CPU
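
Activation Cells are applied like functions; the sketch below also uses get_activation, assuming it accepts the lowercase name of the activation:

import numpy as np
import mindspore as ms
import mindspore.nn as nn

x = ms.Tensor(np.array([-1.0, 0.0, 2.0]), ms.float32)
relu = nn.ReLU()
print(relu(x))                          # [0. 0. 2.]

# Look an activation Cell up by name (assumed lowercase, e.g. 'softmax').
softmax = nn.get_activation('softmax')
print(softmax(x))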

Linear Layer

API Name

Description

Supported Platforms

mindspore.nn.Dense

The fully connected (dense) layer.

Ascend GPU CPU

mindspore.nn.BiDense

The bilinear fully connected layer.

Ascend GPU CPU

Dropout Layer

API Name

Description

Supported Platforms

mindspore.nn.Dropout

Dropout layer for the input.

Ascend GPU CPU

mindspore.nn.Dropout1d

During training, randomly zeroes entire channels of the input tensor with probability p from a Bernoulli distribution (For a 3-dimensional tensor with a shape of \((N, C, L)\), the channel feature map refers to a 1-dimensional feature map with the shape of \(L\)).

Ascend GPU CPU

mindspore.nn.Dropout2d

During training, randomly zeroes some channels of the input tensor with probability p from a Bernoulli distribution (For a 4-dimensional tensor with a shape of \(NCHW\), the channel feature map refers to a 2-dimensional feature map with the shape of \(HW\)).

Ascend GPU CPU

mindspore.nn.Dropout3d

During training, randomly zeroes some channels of the input tensor with probability p from a Bernoulli distribution (For a 5-dimensional tensor with a shape of \(NCDHW\), the channel feature map refers to a 3-dimensional feature map with a shape of \(DHW\)).

Ascend GPU CPU
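
A dropout sketch, assuming the p keyword of recent MindSpore releases (older releases used keep_prob instead); note that dropout only takes effect in training mode:

import numpy as np
import mindspore as ms
import mindspore.nn as nn

drop = nn.Dropout(p=0.5)    # assumption: `p` keyword (older releases: keep_prob)
drop.set_train()            # dropout is only active in training mode
x = ms.Tensor(np.ones([2, 4]), ms.float32)
y = drop(x)                 # surviving elements are scaled by 1/(1 - p)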

Normalization Layer

API Name

Description

Supported Platforms

mindspore.nn.BatchNorm1d

This layer applies Batch Normalization over a 2D or 3D input (a mini-batch of 1D or 2D inputs) to reduce internal covariate shift.

Ascend GPU CPU

mindspore.nn.BatchNorm2d

This layer applies Batch Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension).

Ascend GPU CPU

mindspore.nn.BatchNorm3d

This layer applies Batch Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension).

Ascend GPU CPU

mindspore.nn.GroupNorm

Group Normalization over a mini-batch of inputs.

Ascend GPU CPU

mindspore.nn.InstanceNorm1d

This layer applies Instance Normalization over a 3D input (a mini-batch of 1D inputs with additional channel dimension).

GPU

mindspore.nn.InstanceNorm2d

This layer applies Instance Normalization over a 4D input (a mini-batch of 2D inputs with additional channel dimension).

GPU

mindspore.nn.InstanceNorm3d

This layer applies Instance Normalization over a 5D input (a mini-batch of 3D inputs with additional channel dimension).

GPU

mindspore.nn.LayerNorm

Applies Layer Normalization over a mini-batch of inputs.

Ascend GPU CPU

mindspore.nn.SyncBatchNorm

Sync Batch Normalization layer over an N-dimensional input.

Ascend
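
A minimal normalization sketch (sizes are illustrative); num_features matches the channel dimension of the NCHW input:

import numpy as np
import mindspore as ms
import mindspore.nn as nn

bn = nn.BatchNorm2d(num_features=16)                # one scale/shift pair per channel
x = ms.Tensor(np.ones([4, 16, 8, 8]), ms.float32)
y = bn(x)                                           # shape unchanged: (4, 16, 8, 8)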

Pooling Layer

API Name

Description

Supported Platforms

mindspore.nn.AdaptiveAvgPool1d

Applies a 1D adaptive average pooling over an input Tensor which can be regarded as a composition of 1D input planes.

Ascend GPU CPU

mindspore.nn.AdaptiveAvgPool2d

This operator applies a 2D adaptive average pooling to an input signal composed of multiple input planes.

Ascend GPU CPU

mindspore.nn.AdaptiveAvgPool3d

This operator applies a 3D adaptive average pooling to an input signal composed of multiple input planes.

Ascend GPU CPU

mindspore.nn.AdaptiveMaxPool1d

Applies a 1D adaptive maximum pooling over an input Tensor which can be regarded as a composition of 1D input planes.

Ascend GPU CPU

mindspore.nn.AdaptiveMaxPool2d

This operator applies a 2D adaptive max pooling to an input signal composed of multiple input planes.

Ascend GPU CPU

mindspore.nn.AdaptiveMaxPool3d

Calculates the 3D adaptive max pooling for an input Tensor.

GPU CPU

mindspore.nn.AvgPool1d

Applies a 1D average pooling over an input Tensor which can be regarded as a composition of 1D input planes.

Ascend GPU CPU

mindspore.nn.AvgPool2d

Applies a 2D average pooling over an input Tensor which can be regarded as a composition of 2D input planes.

Ascend GPU CPU

mindspore.nn.AvgPool3d

Applies a 3D average pooling over an input Tensor which can be regarded as a composition of 3D input planes.

Ascend GPU CPU

mindspore.nn.FractionalMaxPool3d

Applies the 3D FractionalMaxPool operation over the input.

GPU CPU

mindspore.nn.LPPool1d

Applies a 1D LP pooling operation over an input Tensor which can be regarded as a composition of 1D input planes.

Ascend GPU CPU

mindspore.nn.LPPool2d

Applies a 2D LP pooling operation over an input Tensor which can be regarded as a composition of 2D input planes.

Ascend GPU CPU

mindspore.nn.MaxPool1d

Applies a 1D max pooling over an input Tensor which can be regarded as a composition of 1D planes.

Ascend GPU CPU

mindspore.nn.MaxPool2d

Applies a 2D max pooling over an input Tensor which can be regarded as a composition of 2D planes.

Ascend GPU CPU

mindspore.nn.MaxPool3d

3D max pooling operation.

Ascend GPU CPU

mindspore.nn.MaxUnpool1d

Computes the inverse of mindspore.nn.MaxPool1d.

GPU CPU

mindspore.nn.MaxUnpool2d

Computes the inverse of mindspore.nn.MaxPool2d.

GPU CPU

mindspore.nn.MaxUnpool3d

Computes the inverse of mindspore.nn.MaxPool3d.

GPU CPU
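
A minimal pooling sketch (sizes are illustrative): a 2x2 max pooling with stride 2 halves each spatial dimension:

import numpy as np
import mindspore as ms
import mindspore.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2)
x = ms.Tensor(np.ones([1, 3, 32, 32]), ms.float32)
y = pool(x)                                         # (1, 3, 16, 16)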

Padding Layer

API Name

Description

Supported Platforms

mindspore.nn.Pad

Pads the input tensor according to the paddings and mode.

Ascend GPU CPU

mindspore.nn.ConstantPad1d

Pads the last dimension of the input tensor with a given constant value.

Ascend GPU CPU

mindspore.nn.ConstantPad2d

Pads the last two dimensions of the input tensor with a given constant value.

Ascend GPU CPU

mindspore.nn.ConstantPad3d

Pads the last three dimensions of the input tensor with a given constant value.

Ascend GPU CPU

mindspore.nn.ReflectionPad1d

Pads the given tensor by reflection according to the given padding.

Ascend GPU CPU

mindspore.nn.ReflectionPad2d

Pads the given tensor by reflection according to the given padding.

Ascend GPU CPU

mindspore.nn.ReflectionPad3d

Pads the given tensor by reflection, using the input boundaries as the axes of symmetry.

Ascend GPU CPU

mindspore.nn.ReplicationPad1d

Pads the W dimension of the input x according to padding, replicating the border values.

GPU

mindspore.nn.ReplicationPad2d

Pads the H and W dimensions of the input x according to padding, replicating the border values.

GPU

mindspore.nn.ReplicationPad3d

Pads the D, H and W dimensions of the input x according to padding, replicating the border values.

GPU

mindspore.nn.ZeroPad2d

Pads the last two dimensions of input tensor with zero.

Ascend GPU CPU
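
A minimal padding sketch, assuming the (left, right) padding tuple and value keyword of ConstantPad1d:

import numpy as np
import mindspore as ms
import mindspore.nn as nn

pad = nn.ConstantPad1d(padding=(1, 2), value=0.5)   # 1 column left, 2 right
x = ms.Tensor(np.ones([1, 2, 4]), ms.float32)
y = pad(x)                                          # last dimension grows: (1, 2, 7)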

Loss Function

API Name

Description

Supported Platforms

mindspore.nn.BCELoss

BCELoss creates a criterion to measure the binary cross entropy between the true labels and predicted labels.

Ascend GPU CPU

mindspore.nn.BCEWithLogitsLoss

Applies a sigmoid activation to the input logits, then computes the binary cross entropy between the result and the labels.

Ascend GPU CPU

mindspore.nn.CosineEmbeddingLoss

CosineEmbeddingLoss creates a criterion to measure the similarity between two tensors using cosine distance.

Ascend GPU CPU

mindspore.nn.CrossEntropyLoss

The cross entropy loss between input and target.

Ascend GPU CPU

mindspore.nn.CTCLoss

Calculates the CTC (Connectionist Temporal Classification) loss.

Ascend GPU CPU

mindspore.nn.DiceLoss

Dice loss, based on the Dice coefficient, a set-similarity measure used to calculate the similarity between two samples.

Ascend GPU CPU

mindspore.nn.FocalLoss

A loss function that addresses category imbalance and differences in classification difficulty.

Ascend

mindspore.nn.GaussianNLLLoss

Gaussian negative log likelihood loss.

Ascend GPU CPU

mindspore.nn.HingeEmbeddingLoss

Calculates the Hinge Embedding Loss based on the input logits and labels (whose elements are 1 or -1).

Ascend GPU CPU

mindspore.nn.HuberLoss

HuberLoss calculates the error between the predicted value and the target value.

Ascend GPU CPU

mindspore.nn.KLDivLoss

Computes the Kullback-Leibler divergence between the logits and the labels.

Ascend GPU CPU

mindspore.nn.L1Loss

L1Loss is used to calculate the mean absolute error between the predicted value and the target value.

Ascend GPU CPU

mindspore.nn.MarginRankingLoss

MarginRankingLoss creates a criterion that measures the ranking loss between two inputs given a label of 1 or -1.

Ascend GPU CPU

mindspore.nn.MAELoss

MAELoss creates a criterion to measure the average absolute error between \(x\) and \(y\) element-wise, where \(x\) is the input and \(y\) is the labels.

Ascend GPU CPU

mindspore.nn.MSELoss

Calculates the mean squared error between the predicted value and the label value.

Ascend GPU CPU

mindspore.nn.MultiClassDiceLoss

For multi-class classification, the label is transformed into multiple binary classifications by one-hot encoding.

Ascend GPU CPU

mindspore.nn.MultilabelMarginLoss

Creates a loss criterion that minimizes the hinge loss for multi-class classification tasks.

Ascend GPU

mindspore.nn.MultiLabelSoftMarginLoss

Calculates the MultiLabelSoftMarginLoss.

Ascend GPU CPU

mindspore.nn.MultiMarginLoss

Creates a criterion that optimizes a multi-class classification hinge loss (margin-based loss) between input \(x\) (a 2D mini-batch Tensor) and output \(y\) (which is a 1D tensor of target class indices, \(0 \leq y \leq \text{x.size}(1)-1\)).

Ascend GPU CPU

mindspore.nn.NLLLoss

Gets the negative log likelihood loss between logits and labels.

Ascend GPU CPU

mindspore.nn.PoissonNLLLoss

Poisson negative log likelihood loss.

Ascend GPU CPU

mindspore.nn.RMSELoss

RMSELoss creates a criterion to measure the root mean square error between \(x\) and \(y\) element-wise, where \(x\) is the input and \(y\) is the labels.

Ascend GPU CPU

mindspore.nn.SampledSoftmaxLoss

Computes the sampled softmax training loss.

GPU

mindspore.nn.SmoothL1Loss

SmoothL1 loss function: if the element-wise absolute error between the predicted value and the target value is less than the threshold beta, a squared term is used; otherwise, an absolute error term is used.

Ascend GPU CPU

mindspore.nn.SoftMarginLoss

A loss class for two-class classification problems.

Ascend GPU

mindspore.nn.SoftmaxCrossEntropyWithLogits

Computes softmax cross entropy between logits and labels.

Ascend GPU CPU

mindspore.nn.TripletMarginLoss

Computes the triplet margin loss between the anchor, positive and negative samples.

GPU
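
Loss Cells follow one calling pattern: instantiate, then call with predictions and targets. A sketch with random data, assuming nn.CrossEntropyLoss accepts class-index targets:

import numpy as np
import mindspore as ms
import mindspore.nn as nn

loss_fn = nn.CrossEntropyLoss()
logits = ms.Tensor(np.random.rand(3, 5), ms.float32)    # batch of 3, 5 classes
labels = ms.Tensor(np.array([1, 0, 4]), ms.int32)       # class indices (assumed format)
loss = loss_fn(logits, labels)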

Optimizer

API Name

Description

Supported Platforms

mindspore.nn.Adadelta

Implements the Adadelta algorithm.

Ascend GPU CPU

mindspore.nn.Adagrad

Implements the Adagrad algorithm.

Ascend GPU CPU

mindspore.nn.Adam

Implements the Adaptive Moment Estimation (Adam) algorithm.

Ascend GPU CPU

mindspore.nn.AdaMax

Implements the AdaMax algorithm, a variant of Adaptive Movement Estimation (Adam) based on the infinity norm.

Ascend GPU CPU

mindspore.nn.AdamOffload

Offloads the Adam optimizer computation to the host CPU while keeping the parameters updated on the device, to minimize memory cost.

Ascend GPU CPU

mindspore.nn.AdamWeightDecay

Implements the Adam algorithm with weight decay.

Ascend GPU CPU

mindspore.nn.AdaSumByDeltaWeightWrapCell

Enables AdaSum (computed from delta weights) in auto_parallel/semi_auto_parallel mode.

Ascend GPU

mindspore.nn.AdaSumByGradWrapCell

Enables AdaSum (computed from gradients) in auto_parallel/semi_auto_parallel mode.

Ascend GPU

mindspore.nn.ASGD

Implements Average Stochastic Gradient Descent.

Ascend GPU CPU

mindspore.nn.FTRL

Implements the FTRL algorithm.

Ascend GPU

mindspore.nn.Lamb

Implements the Lamb (Layer-wise Adaptive Moments optimizer for Batch training) algorithm.

Ascend GPU

mindspore.nn.LARS

Implements the LARS algorithm.

Ascend

mindspore.nn.LazyAdam

Implements a lazy variant of the Adaptive Moment Estimation (Adam) algorithm that applies sparse updates when the gradient is sparse.

Ascend GPU CPU

mindspore.nn.Momentum

Implements the Momentum algorithm.

Ascend GPU CPU

mindspore.nn.ProximalAdagrad

Implements the ProximalAdagrad algorithm for online learning and stochastic optimization.

Ascend GPU

mindspore.nn.RMSProp

Implements Root Mean Squared Propagation (RMSProp) algorithm.

Ascend GPU CPU

mindspore.nn.Rprop

Implements the Resilient Backpropagation (Rprop) algorithm.

Ascend GPU CPU

mindspore.nn.SGD

Implements stochastic gradient descent.

Ascend GPU CPU

mindspore.nn.thor

Updates gradients by the second-order algorithm THOR.

Ascend GPU
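
All optimizers share one pattern: pass the trainable parameters plus algorithm-specific hyperparameters, then hand the optimizer to a training wrapper such as nn.TrainOneStepCell. A minimal sketch:

import mindspore.nn as nn

net = nn.Dense(4, 2)    # stands in for any user-defined Cell
optim = nn.Adam(net.trainable_params(), learning_rate=1e-3)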

Dynamic Learning Rate

LearningRateSchedule Class

The dynamic learning rates in this module are all subclasses of LearningRateSchedule. Pass an instance of LearningRateSchedule to an optimizer; during training, the optimizer calls the instance with the current step as input to obtain the current learning rate.

import mindspore.nn as nn

min_lr = 0.01
max_lr = 0.1
decay_steps = 4
cosine_decay_lr = nn.CosineDecayLR(min_lr, max_lr, decay_steps)

# Net stands for any user-defined network, i.e. a subclass of nn.Cell.
net = Net()
optim = nn.Momentum(net.trainable_params(), learning_rate=cosine_decay_lr, momentum=0.9)

API Name

Description

Supported Platforms

mindspore.nn.CosineDecayLR

Calculates learning rate based on cosine decay function.

Ascend GPU

mindspore.nn.ExponentialDecayLR

Calculates learning rate based on exponential decay function.

Ascend GPU CPU

mindspore.nn.InverseDecayLR

Calculates learning rate based on inverse-time decay function.

Ascend GPU CPU

mindspore.nn.NaturalExpDecayLR

Calculates learning rate based on natural exponential decay function.

Ascend GPU CPU

mindspore.nn.PolynomialDecayLR

Calculates learning rate based on polynomial decay function.

Ascend GPU

mindspore.nn.WarmUpLR

Gets the warm-up learning rate.

Ascend GPU CPU

Dynamic LR Function

The dynamic learning rates in this module are all functions. Call the function and pass the resulting list to an optimizer; during training, the optimizer takes result[current_step] as the current learning rate.

import mindspore.nn as nn

min_lr = 0.01
max_lr = 0.1
total_step = 6
step_per_epoch = 1
decay_epoch = 4

# cosine_decay_lr returns a list with one learning rate per step.
lr = nn.cosine_decay_lr(min_lr, max_lr, total_step, step_per_epoch, decay_epoch)

# Net stands for any user-defined network, i.e. a subclass of nn.Cell.
net = Net()
optim = nn.Momentum(net.trainable_params(), learning_rate=lr, momentum=0.9)

API Name

Description

Supported Platforms

mindspore.nn.cosine_decay_lr

Calculates learning rate based on cosine decay function.

Ascend GPU CPU

mindspore.nn.exponential_decay_lr

Calculates learning rate based on exponential decay function.

Ascend GPU CPU

mindspore.nn.inverse_decay_lr

Calculates learning rate based on inverse-time decay function.

Ascend GPU CPU

mindspore.nn.natural_exp_decay_lr

Calculates learning rate based on natural exponential decay function.

Ascend GPU CPU

mindspore.nn.piecewise_constant_lr

Gets the piecewise constant learning rate.

Ascend GPU CPU

mindspore.nn.polynomial_decay_lr

Calculates learning rate based on polynomial decay function.

Ascend GPU CPU

mindspore.nn.warmup_lr

Gets the warm-up learning rate.

Ascend GPU CPU

Random Number Generator

Generator Class

API Name

Description

Supported Platforms

mindspore.nn.Generator

A generator that manages the state of random numbers and provides seed and offset for random functions.

Ascend GPU CPU

Default Generator Function

The random state management in this module consists of functions used to manage the default generator. When the user does not specify a generator, random operators invoke the default generator to produce random numbers.

API Name

Description

Supported Platforms

mindspore.nn.default_generator

Returns the default generator object.

Ascend GPU CPU

mindspore.nn.get_rng_state

Gets the default generator state.

Ascend GPU CPU

mindspore.nn.initial_seed

Returns the initial seed of the default generator.

Ascend GPU CPU

mindspore.nn.manual_seed

Sets the default generator seed.

Ascend GPU CPU

mindspore.nn.seed

Generates random seeds that can be used as seeds for the default generator.

Ascend GPU CPU

mindspore.nn.set_rng_state

Sets the default generator state.

Ascend GPU CPU
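
A sketch of managing the default generator with the functions above, assuming they mirror the usual seed/state semantics:

import mindspore.nn as nn

nn.manual_seed(42)              # seed the default generator
state = nn.get_rng_state()      # snapshot the generator state
# ... run some random operators here ...
nn.set_rng_state(state)         # restore the snapshot to replay the same randomness
print(nn.initial_seed())        # expected to print 42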

Image Processing Layer

API Name

Description

Supported Platforms

mindspore.nn.PixelShuffle

Applies the PixelShuffle operation over the input, implementing sub-pixel convolutions with stride \(1/r\).

Ascend GPU CPU

mindspore.nn.PixelUnshuffle

Applies the PixelUnshuffle operation over the input; it is the inverse of PixelShuffle.

Ascend GPU CPU

mindspore.nn.Upsample

For details, please refer to mindspore.ops.interpolate().

Ascend GPU CPU
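
A PixelShuffle sketch (sizes are illustrative): with upscale factor r, the channel count shrinks by r*r while each spatial dimension grows by r:

import numpy as np
import mindspore as ms
import mindspore.nn as nn

ps = nn.PixelShuffle(2)                             # upscale factor r = 2
x = ms.Tensor(np.ones([1, 4, 8, 8]), ms.float32)    # channels divisible by r*r
y = ps(x)                                           # (1, 1, 16, 16)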

Tools

API Name

Description

Supported Platforms

mindspore.nn.ChannelShuffle

Divides the channels of a Tensor of shape \((*, C, H, W)\) into \(g\) groups, obtaining a Tensor of shape \((*, \frac{C}{g}, g, H, W)\), transposes along the \(\frac{C}{g}\) and \(g\) axes, and restores the Tensor to its original shape.

Ascend GPU CPU

mindspore.nn.Flatten

Flattens the input Tensor along dimensions from start_dim to end_dim.

Ascend GPU CPU

mindspore.nn.Identity

A placeholder identity operator that returns its input unchanged.

Ascend GPU CPU

mindspore.nn.Unflatten

Unflattens a Tensor dimension according to axis and unflattened_size.

Ascend GPU CPU
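
A Flatten sketch, assuming the default start_dim=1 and end_dim=-1:

import numpy as np
import mindspore as ms
import mindspore.nn as nn

flat = nn.Flatten()                     # flattens dims 1..-1 by default
x = ms.Tensor(np.ones([2, 3, 4]), ms.float32)
print(flat(x).shape)                    # (2, 12)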