mindspore.mint
mindspore.mint provides a large number of functional, nn, and optimizer interfaces. Their usage and behavior are consistent with mainstream industry practice, for easy reference. The mint interfaces are currently experimental and perform better than ops in O0 graph mode and in PyNative mode. Graph sinking mode and the CPU/GPU backends are not yet supported; support will be added gradually.
The module import method is as follows:

```python
from mindspore import mint
```
For the additions, deletions, and supported-platform changes of mindspore.mint operators compared with the previous version, please refer to the link mindspore.mint API Interface Change.
Tensor
Creation Operations
| Description | Warning |
| --- | --- |
| Creates a sequence of numbers that begins at start and extends by increments of step up to but not including end. | None |
| Creates a tensor with ones on the diagonal and zeros in the rest. | None |
| Creates a Tensor of the specified shape and fills it with the specified value. | None |
| Returns a Tensor of steps values evenly spaced over the interval from start to end (both included), so the length of the output Tensor is steps. | Atlas training series does not support int16 dtype currently. |
| Creates a tensor filled with the value 1. | None |
| Creates a tensor filled with 1, with the same shape as input and a data type determined by the given dtype. | None |
| Creates a tensor of the shape described by size, filled with 0, in the type of dtype. | None |
| Creates a tensor filled with 0, with the same size as input. | None |
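A point worth remembering about the creation interfaces above: arange excludes its end point, while linspace includes both endpoints. Since mint currently runs only on the Ascend backend, the sketch below illustrates the same semantics in NumPy (assuming mint follows the mainstream, PyTorch-style behavior described above):

```python
import numpy as np

# arange: start inclusive, end exclusive, fixed step
a = np.arange(0, 10, 3)          # 10 itself is excluded

# linspace: both endpoints included, fixed element count
b = np.linspace(0.0, 1.0, 5)     # [0.0, 0.25, 0.5, 0.75, 1.0]

# eye: ones on the diagonal, zeros in the rest
c = np.eye(3)

# full: a tensor of the given shape filled with one value
d = np.full((2, 2), 7)

print(a.tolist(), b.tolist(), c.trace(), d.sum())
```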
Indexing, Slicing, Joining, Mutating Operations
| Description | Warning |
| --- | --- |
| Connects input tensors along the given dimension. | None |
| Gathers data from a tensor by indices. | On Ascend, the behavior is unpredictable in the following cases: the value of index is not in the range [-input.shape[dim], input.shape[dim]) in forward; the value of index is not in the range [0, input.shape[dim]) in backward. |
| Generates a new Tensor that accesses the values of input along the specified dim dimension using the indices specified in index. | None |
| Returns a new 1-D Tensor which indexes the input tensor according to the boolean mask. | None |
| Permutes the dimensions of the input tensor according to the given dims. | None |
| Updates the values in src to input according to the specified index. | None |
| Adds all elements in src to input at the indices specified by index, along the dimension specified by dim. | None |
| Splits the Tensor into chunks along the given dim. | None |
| Obtains a tensor of a specified length at a specified start position along a specified axis. | None |
| Returns the positions of all non-zero values. | None |
| Creates a new tensor by repeating input dims times. | None |
| Returns the lower-triangular part of input (the elements on and below the diagonal), setting the other elements to zero. | None |
| Stacks a list of tensors along the specified dim. | None |
| Selects elements from input or other based on condition and returns a tensor. | None |
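The gather/masked_select/where semantics above can be illustrated without an Ascend device. Assuming mint follows the mainstream behavior this document references, the NumPy equivalents are take_along_axis, boolean indexing, and np.where:

```python
import numpy as np

x = np.array([[1, 2], [3, 4]])

# gather along dim=1: out[i][j] = x[i][index[i][j]]
idx = np.array([[1, 0], [0, 0]])
g = np.take_along_axis(x, idx, axis=1)   # [[2, 1], [3, 3]]

# masked_select: a 1-D result of the elements where the mask is True
m = x[x > 2]                             # [3, 4]

# where: choose from x or a fallback based on a condition
w = np.where(x > 2, x, 0)                # [[0, 0], [3, 4]]

print(g.tolist(), m.tolist(), w.tolist())
```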
Random Sampling
| Description | Warning |
| --- | --- |
| Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of the input tensor. | This is an experimental API that is subject to change or deletion. |
| Generates random numbers according to the standard Normal (or Gaussian) distribution. | None |
| Returns a new tensor filled with numbers from the uniform distribution over the interval \([0, 1)\), based on the given dtype and the shape of the input tensor. | None |
| Returns a new tensor filled with numbers from the uniform distribution over the interval \([0, 1)\), based on the given shape and dtype. | None |
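The two key properties of the sampling interfaces above are the output shape and the distribution's support. As a hedged NumPy illustration of the same contracts (uniform over [0, 1), standard Normal):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.random((2, 3))       # uniform over [0, 1), like rand / rand_like
n = rng.standard_normal(4)   # standard Normal draws, like normal

print(u.shape, n.shape)
```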
Math Operations
Pointwise Operations
| Description | Warning |
| --- | --- |
| Returns the absolute value of a tensor element-wise. | None |
| Adds the scaled other value to the input Tensor. | None |
| Computes the arccosine of the input tensor element-wise. | None |
| Computes the inverse hyperbolic cosine of the input element-wise. | None |
| Alias for | None |
| Alias for | None |
| Alias for | None |
| Alias for | None |
| Alias for | None |
| Alias for | None |
| Alias for | None |
| Computes the arcsine of the input tensor element-wise. | None |
| Computes the inverse hyperbolic sine of the input element-wise. | None |
| Computes the trigonometric inverse tangent of the input element-wise. | None |
| Returns the arctangent of input/other element-wise. | None |
| Computes the inverse hyperbolic tangent of the input element-wise. | None |
| Returns the bitwise AND of two tensors element-wise. | None |
| Returns the bitwise OR of two tensors element-wise. | None |
| Returns the bitwise XOR of two tensors element-wise. | None |
| Rounds a tensor up to the closest integer element-wise. | None |
| Clamps tensor values between the specified minimum value and maximum value. | None |
| Computes the cosine of input element-wise. | Using float64 may cause a loss of precision. |
| Computes the hyperbolic cosine of input element-wise. | None |
| Computes the cross product of input and other in dimension dim. | None |
| Divides the first input tensor by the second input tensor element-wise in floating-point type. | None |
| Alias for | None |
| Computes the Gauss error function of input element-wise. | None |
| Computes the complementary error function of input element-wise. | None |
| Returns the result of the inverse error function of input. | None |
| Returns the exponential of a tensor element-wise. | None |
| Calculates the base-2 exponential of the input Tensor element-wise. | None |
| Returns the exponential minus 1 of a tensor element-wise. | None |
| Alias for | None |
| Rounds a tensor down to the closest integer element-wise. | None |
| Returns the natural logarithm of a tensor element-wise. | If the input value of the operator Log is within the range (0, 0.01] or [0.95, 1.05], the output accuracy may be affected. |
| Returns the natural logarithm of one plus the input tensor element-wise. | None |
| Computes the "logical AND" of two tensors element-wise. | None |
| Computes the "logical NOT" of a tensor element-wise. | None |
| Computes the "logical OR" of two tensors element-wise. | None |
| Computes the "logical XOR" of two tensors element-wise. | None |
| Multiplies two tensors element-wise. | None |
| Multiplies matrix input by vector vec. | This is an experimental API that is subject to change or deletion. |
| Replaces the NaN, positive infinity and negative infinity values in input with the values specified in nan, posinf and neginf respectively. | For Ascend, it is only supported on Atlas A2 Training Series Products. This is an experimental API that is subject to change or deletion. |
| Returns a tensor with the negated values of the input tensor element-wise. | None |
| Alias for | None |
| Calculates the exponent power of each element in input. | This is an experimental API that is subject to change or deletion. |
| Returns the reciprocal of a tensor element-wise. | None |
| Computes the remainder of input divided by other element-wise. | None |
| Rolls the elements of a tensor along an axis. | None |
| Rounds a tensor element-wise, with ties rounded half to even. | None |
| Computes the reciprocal of the square root of the input tensor element-wise. | None |
| Computes Sigmoid of input element-wise. | None |
| Returns an element-wise indication of the sign of a number. | None |
| Computes the sine of the input element-wise. | None |
| Computes the normalized sinc of input. | None |
| Computes the hyperbolic sine of the input element-wise. | None |
| Returns the square root of a tensor element-wise. | None |
| Returns the square of a tensor element-wise. | None |
| Subtracts the scaled other value from the input Tensor. | None |
| Computes the tangent of input element-wise. | None |
| Computes the hyperbolic tangent of input element-wise. | None |
| Returns a new tensor with the truncated integer values of the elements of the input tensor. | None |
| Computes the first input multiplied by the logarithm of the second input element-wise. | None |
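Three pointwise conventions above deserve a concrete illustration: clamp limits values to an interval, remainder follows the sign of the divisor (Python-style modulo), and round breaks ties "half to even". A NumPy sketch of the same conventions (assuming mint matches the mainstream behavior, since mint itself is Ascend-only):

```python
import numpy as np

x = np.array([-2.0, 0.5, 3.0])

# clamp: limit values to the interval [min, max]
clamped = np.clip(x, 0.0, 1.0)            # [0.0, 0.5, 1.0]

# remainder: the result takes the sign of the divisor
rem = np.remainder(np.array([-3, 3]), 2)  # [1, 1]

# round: ties are rounded half to even, so 0.5 -> 0 but 1.5 -> 2
rounded = np.round([0.5, 1.5, 2.5])       # [0.0, 2.0, 2.0]
```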
Reduction Operations
| Description | Warning |
| --- | --- |
| Returns the indices of the maximum values of a tensor across a dimension. | None |
| Returns the indices of the minimum values of a tensor across a dimension. | None |
| Reduces a dimension of input by the "logical AND" of all elements in the dimension, by default. | None |
| Reduces a dimension of input by the "logical OR" of all elements in the dimension, by default. | None |
| Calculates the maximum value along the given dimension for the input tensor. | None |
| Reduces all dimensions of a tensor by averaging all elements, by default. | None |
| Outputs the median along the specified dimension. | None |
| Calculates the minimum value along the given dimension for the input tensor. | None |
| Reduces a dimension of a tensor by multiplying all elements in the dimension, by default. | None |
| Calculates the sum of Tensor elements over a given dim. | None |
| Returns the unique elements of the input tensor. | None |
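The reduction interfaces above share two conventions: reducing along a dim removes that axis unless a keepdim-style flag retains it, and argmax/argmin return indices rather than values. A NumPy illustration of the same conventions:

```python
import numpy as np

x = np.array([[1, 5, 2],
              [7, 0, 3]])

idx = np.argmax(x, axis=1)            # index of the max in each row: [1, 0]
s = np.sum(x, axis=0, keepdims=True)  # reduced axis kept, shape (1, 3)
uniq = np.unique([3, 1, 3, 2])        # sorted unique elements: [1, 2, 3]

print(idx.tolist(), s.tolist(), uniq.tolist())
```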
Comparison Operations
| Description | Warning |
| --- | --- |
| Computes the equivalence between two tensors element-wise. | None |
| Compares the input parameters \(input > other\) element-wise; the output is a bool value. | None |
| Given two Tensors, compares them element-wise to check whether each element in the first Tensor is greater than or equal to the corresponding element in the second Tensor. | None |
| Compares the input parameters \(input, other\) element-wise; the output is a bool value. | None |
| Returns a new Tensor with boolean elements representing whether each element of input is "close" to the corresponding element of other. | None |
| Determines which elements are finite for each position. | None |
| Computes the boolean value of \(input <= other\) element-wise. | None |
| Computes the boolean value of \(input < other\) element-wise. | None |
| Computes the boolean value of \(input <= other\) element-wise. | None |
| Alias for | None |
| Computes the maximum of input tensors element-wise. | If all inputs are integer scalars, the output is a Tensor of int32 in GRAPH mode and a Tensor of int64 in PYNATIVE mode. |
| Computes the minimum of input tensors element-wise. | None |
| Computes the non-equivalence of two tensors element-wise. | None |
| Alias of mint.ne(). | None |
| Finds the values and indices of the k largest or smallest entries along a given dimension. | If sorted is set to False, the order of the returned elements may differ across platforms because of differences in memory layout and traversal methods. |
| Sorts the elements of the input tensor along the given dimension in the specified order. | The data types float16, uint8, int8, int16, int32 and int64 are well supported; using float32 may cause a loss of accuracy. |
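topk returns both the k selected values and their indices, in descending order when largest entries are requested; isclose applies relative and absolute tolerances. A NumPy sketch of those two contracts (illustrative only, since mint requires an Ascend backend):

```python
import numpy as np

x = np.array([3.0, 1.0, 4.0, 1.5])

# topk(k=2, largest): the 2 largest values and their indices, descending
order = np.argsort(-x)[:2]
top_vals = x[order]
print(order.tolist(), top_vals.tolist())   # [2, 0] [4.0, 3.0]

# isclose: approximate element-wise equality controlled by rtol/atol
close = np.isclose(1.0, 1.0 + 1e-9)
```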
BLAS and LAPACK Operations
| Description | Warning |
| --- | --- |
| The result is the sum of the input and a batch matrix-matrix product of the matrices in batch1 and batch2. | None |
| Performs batch matrix-matrix multiplication of two three-dimensional tensors. | None |
| Computes the inverse of the input matrix. | None |
| Returns the matrix product of two tensors. | None |
| Returns a new tensor that is the sum of the elements on the main diagonal of the input. | None |
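The baddbmm-style contract described above is out = beta * input + alpha * (batch1 @ batch2), with the matrix product applied independently per batch element. A NumPy sketch of that formula (the variable names are illustrative, not mint's parameter names):

```python
import numpy as np

rng = np.random.default_rng(0)
inp = rng.random((2, 3, 5))   # the "input" term
b1 = rng.random((2, 3, 4))    # batch1
b2 = rng.random((2, 4, 5))    # batch2
beta, alpha = 0.5, 2.0

# batched matmul over axis 0, then the scaled sum
out = beta * inp + alpha * (b1 @ b2)

# trace: sum of the main diagonal
tr = np.trace(np.array([[1, 2], [3, 4]]))   # 1 + 4 = 5
```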
Other Operations
| Description | Warning |
| --- | --- |
| Broadcasts the input tensor to a given shape. | None |
| Returns a tuple (values, indices) where values is the cumulative maximum of the input Tensor along the dimension dim, and indices is the index location of each maximum value. | None |
| Returns a tuple (values, indices) where values is the cumulative minimum of the input Tensor along the dimension dim, and indices is the index location of each minimum value. | None |
| Computes the cumulative sum of the input Tensor along dim. | None |
| Flattens a tensor along dimensions from start_dim to end_dim. | None |
| Reverses the order of elements in a tensor along the given axis. | None |
| Repeats elements of a tensor along an axis, like numpy.repeat. | Only supported on Atlas A2 training series. |
| Returns the position indices such that, after inserting the values into the sorted_sequence, the order of the innermost dimension of the sorted_sequence remains unchanged. | None |
| Returns the lower-triangular part of input (the elements on and below the diagonal), setting the other elements to zero. | None |
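searchsorted is the least obvious of the operations above: it returns, for each value, the index at which that value could be inserted without breaking the sort order. Since the table itself notes the repeat behavior matches numpy.repeat, NumPy is a faithful way to illustrate all three:

```python
import numpy as np

seq = np.array([1, 3, 5, 7])

# searchsorted: insertion positions that keep seq sorted
pos = np.searchsorted(seq, [0, 4, 7])   # [0, 2, 3]

# cumsum: running total along a dimension
csum = np.cumsum([1, 2, 3, 4])          # [1, 3, 6, 10]

# repeat_interleave-style element repetition
rep = np.repeat([1, 2], 2)              # [1, 1, 2, 2]
```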
mindspore.mint.nn
Loss Functions
| Description | Warning |
| --- | --- |
| L1Loss is used to calculate the mean absolute error between the predicted value and the target value. | None |
Convolution Layers
| Description | Warning |
| --- | --- |
| Combines an array of sliding local blocks into a large containing tensor. | None |
| Extracts sliding local blocks from a batched input tensor. | None |
Normalization Layers
| Description | Warning |
| --- | --- |
| Group Normalization over a mini-batch of inputs. | None |
Non-linear Activations (weighted sum, nonlinearity)
| Description | Warning |
| --- | --- |
| Activation function GELU (Gaussian Error Linear Unit). | None |
| Applies the Hard Shrink activation function element-wise. | None |
| Applies the Hard Sigmoid activation function element-wise. | None |
| Applies the Hard Swish activation function element-wise. | None |
| Applies the Log Softmax function to the input tensor on the specified axis. | None |
| Computes MISH (A Self Regularized Non-Monotonic Neural Activation Function) of input tensors element-wise. | None |
| Applies the PReLU activation function element-wise. | None |
| Applies ReLU (Rectified Linear Unit activation function) element-wise. | None |
| Activation function SELU (Scaled Exponential Linear Unit). | None |
| Applies the Softmax function to an n-dimensional input Tensor. | None |
| Applies the SoftShrink function element-wise. | None |
Linear Layers
| Description | Warning |
| --- | --- |
| The linear connected layer. | In PYNATIVE mode, if bias is not 1D, the input cannot be greater than 6D. |
Dropout Layers
| Description | Warning |
| --- | --- |
| Dropout layer for the input. | None |
Pooling Layers
| Description | Warning |
| --- | --- |
| Applies a 2D average pooling over an input Tensor which can be regarded as a composition of 2D input planes. | None |
Loss Functions
| Description | Warning |
| --- | --- |
| Applies the sigmoid activation to input as logits, and uses these logits to compute the binary cross entropy between the logits and the target. | None |
| Calculates the mean squared error between the predicted value and the label value. | None |
mindspore.mint.nn.functional
Convolution functions
| Description | Warning |
| --- | --- |
| Combines an array of sliding local blocks into a large containing tensor. | Currently, only unbatched (3D) or batched (4D) image-like output tensors are supported. |
| Extracts sliding local blocks from a batched input tensor. | Currently, only batched (4D) image-like tensors are supported. For Ascend, it is only supported on platforms above Atlas A2. |
Pooling functions
| Description | Warning |
| --- | --- |
| Applies a 2D average pooling over an input Tensor which can be regarded as a composition of 2D input planes. | None |
| Performs a 2D max pooling on the input Tensor. | Only supported on Atlas A2 training series. |
Non-linear activation functions
| Description | Warning |
| --- | --- |
| Batch Normalization for input data and updated parameters. | None |
| Exponential Linear Unit activation function. | None |
| Gaussian Error Linear Units activation function. | None |
| Group Normalization over a mini-batch of inputs. | None |
| Hard Shrink activation function. | None |
| Hard Sigmoid activation function. | None |
| Hard Swish activation function. | None |
| Applies Layer Normalization on the mini-batch input. | None |
| leaky_relu activation function. | None |
| Applies the Log Softmax function to the input tensor on the specified axis. | None |
| Computes MISH (A Self Regularized Non-Monotonic Neural Activation Function) of input tensors element-wise. | None |
| Parametric Rectified Linear Unit activation function. | None |
| Computes ReLU (Rectified Linear Unit activation function) of input tensors element-wise. | None |
| Activation function SELU (Scaled Exponential Linear Unit). | None |
| Computes Sigmoid of input element-wise. | None |
| Computes Sigmoid Linear Unit of input element-wise. | None |
| Applies the Softmax operation to the input tensor on the specified axis. | None |
| Applies the softplus function to input element-wise. | None |
| Soft Shrink activation function. | None |
| Computes the hyperbolic tangent of input element-wise. | None |
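Softmax and log_softmax above are usually computed in the numerically stable form that subtracts the per-row maximum before exponentiating. A self-contained NumPy sketch of that standard formulation (an illustration of the semantics, not mint's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # subtract the per-row max first so exp() cannot overflow
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def log_softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=axis, keepdims=True))

x = np.array([[1.0, 2.0, 3.0]])
p = softmax(x)        # probabilities summing to 1 per row
lp = log_softmax(x)   # equals log(p), but computed stably
```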
Linear functions
| Description | Warning |
| --- | --- |
| Applies the dense connected operation to the input. | This is an experimental API that is subject to change or deletion. In PYNATIVE mode, if bias is not 1D, the input cannot be greater than 6D. |
Dropout functions
| Description | Warning |
| --- | --- |
| During training, randomly zeroes some elements of the input tensor with probability p, using samples from a Bernoulli distribution. | None |
Sparse functions
| Description | Warning |
| --- | --- |
| Retrieves the word embeddings in weight using the indices specified in input. | On Ascend, the behavior is unpredictable when the value of input is invalid. |
| Computes a one-hot tensor. | None |
Loss Functions
| Description | Warning |
| --- | --- |
| Computes the binary cross entropy (a measure of the difference between two probability distributions) between the predictive value input and the target value target. | The value of input must range from 0 to 1. |
| Applies the sigmoid activation to input as logits, and uses these logits to compute the binary cross entropy between the logits and the target. | None |
| Calculates the mean absolute error between the input value and the target value. | None |
| Calculates the mean squared error between the predicted value and the label value. | None |
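The difference between the two binary-cross-entropy entries above is where the sigmoid lives: binary_cross_entropy expects probabilities in [0, 1], while the with-logits variant folds the sigmoid into the loss so it can use a numerically stable formulation. A NumPy sketch of that standard stable form, max(x, 0) - x*z + log(1 + exp(-|x|)), checked against the naive sigmoid-then-BCE computation (an illustration, not mint's implementation):

```python
import numpy as np

def bce_with_logits(x, z):
    # stable form: max(x, 0) - x*z + log(1 + exp(-|x|))
    return np.maximum(x, 0) - x * z + np.log1p(np.exp(-np.abs(x)))

x = np.array([-2.0, 0.0, 3.0])   # logits
z = np.array([0.0, 1.0, 1.0])    # targets

sig = 1.0 / (1.0 + np.exp(-x))
naive = -(z * np.log(sig) + (1 - z) * np.log(1 - sig))
print(np.allclose(bce_with_logits(x, z), naive))
```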
Vision functions
| Description | Warning |
| --- | --- |
| Given an input and a flow-field grid, computes the output using the input values and pixel locations from grid. | None |
| Pads the input tensor according to the pad. | circular mode has poor performance and is not recommended. |
mindspore.mint.optim
| Description | Warning |
| --- | --- |
| Implements the Adam Weight Decay (AdamW) algorithm. | This is an experimental optimizer API that is subject to change. This module must be used together with the learning-rate scheduler module (LRScheduler class). For Ascend, it is only supported on platforms above Atlas A2. |
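AdamW's defining property is decoupled weight decay: the decay is applied directly to the weights rather than folded into the gradient. A minimal NumPy sketch of one update step under the standard AdamW formulation (the hyperparameter names here are illustrative defaults, not mint.optim.AdamW's exact signature):

```python
import numpy as np

def adamw_step(w, g, m, v, t, lr=1e-3, betas=(0.9, 0.999), eps=1e-8, wd=0.01):
    # decoupled weight decay: shrink the weights directly, not the gradient
    w = w * (1 - lr * wd)
    # standard Adam moment updates with bias correction
    m = betas[0] * m + (1 - betas[0]) * g
    v = betas[1] * v + (1 - betas[1]) * g * g
    m_hat = m / (1 - betas[0] ** t)
    v_hat = v / (1 - betas[1] ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

w, m, v = np.ones(3), np.zeros(3), np.zeros(3)
w, m, v = adamw_step(w, np.full(3, 0.5), m, v, t=1)
```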
mindspore.mint.linalg
Inverses
| Description | Warning |
| --- | --- |
| Computes the inverse of the input matrix. | None |
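The defining property of a matrix inverse is that multiplying a matrix by its inverse yields the identity (up to floating-point error). The NumPy equivalent demonstrates this contract:

```python
import numpy as np

a = np.array([[4.0, 7.0],
              [2.0, 6.0]])   # det = 10, so a is invertible
a_inv = np.linalg.inv(a)

# the defining property: A @ A^{-1} == I (up to float error)
print(np.allclose(a @ a_inv, np.eye(2)))
```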
mindspore.mint.special
Pointwise Operations
| Description | Warning |
| --- | --- |
| Computes the complementary error function of input element-wise. | None |
| Calculates the base-2 exponential of the input Tensor element-wise. | None |
| Returns the exponential minus 1 of a tensor element-wise. | None |
| Returns the natural logarithm of one plus the input tensor element-wise. | None |
| Applies the Log Softmax function to the input tensor on the specified axis. | None |
| Rounds a tensor element-wise, with ties rounded half to even. | None |
| Computes the normalized sinc of input. | None |
mindspore.mint.distributed
| Description | Warning |
| --- | --- |
| Gathers tensors from the specified communication group and returns the tensor which is all gathered. | None |
| Gathers tensors from the specified communication group and returns the tensor which is all gathered. | None |
| Aggregates Python objects in a specified communication group. | None |
| Reduces tensors across all devices so that all devices get the same final result, and returns the all-reduced tensor. | None |
| Scatters and gathers a list of tensors to/from all ranks according to the input/output tensor lists. | None |
| Scatters and gathers the input with split sizes to/from all ranks, and returns the result in a single tensor. | None |
| Synchronizes all processes in the specified group. | None |
| Sends and receives batches of tensors asynchronously. | None |
| Broadcasts the tensor to the whole group. | None |
| Broadcasts the entire group of input Python objects. | None |
| Destroys the user collective communication group. | None |
| Gathers tensors from the specified communication group. | None |
| Gathers Python objects from the whole group into a single process. | None |
| Gets the backend of communication process groups. | None |
| Returns the rank ID in the world group corresponding to the rank whose ID is group_rank in the user group. | None |
| Gets the rank ID in the specified user communication group corresponding to the rank ID in the world communication group. | None |
| Gets the ranks of the specific group and returns the process ranks in the communication group as a list. | None |
| Gets the rank ID for the current device in the specified collective communication group. | None |
| Gets the rank size of the specified collective communication group. | None |
| Initializes the collective communication library. | None |
| Receives tensors from src asynchronously. | None |
| Sends tensors to the specified dest_rank asynchronously. | None |
| Creates a new distributed group. | None |
| Object for batch_isend_irecv input, to store information of … | None |
| Receives tensors from src. | None |
| Reduces tensors across the processes in the specified communication group, sends the result to the target dst (global rank), and returns the tensor sent to the target process. | None |
| Reduces and scatters tensors from the specified communication group and returns the reduced and scattered tensor. | None |
| Reduces and scatters tensors from the specified communication group and returns the reduced and scattered tensor. | None |
| Scatters the tensor evenly across the processes in the specified communication group. | None |
| Scatters picklable objects in scatter_object_input_list to the whole group. | None |
| Sends tensors to the specified dest_rank. | None |
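As a mental model for the all_reduce contract above — every rank contributes a tensor, and every rank receives the same reduced result — here is a single-process NumPy simulation. No real communication happens here; the "ranks" are just list entries, and this only illustrates the data movement, not the distributed API:

```python
import numpy as np

# three "ranks", each contributing its own tensor
rank_tensors = [np.array([1.0, 2.0]),
                np.array([10.0, 20.0]),
                np.array([100.0, 200.0])]

# all_reduce with the SUM op: every rank ends up with the same element-wise sum
reduced = np.sum(rank_tensors, axis=0)
results = [reduced.copy() for _ in rank_tensors]   # one identical copy per rank

print([r.tolist() for r in results])
```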