mindspore.mint
mindspore.mint provides a large number of functional, nn, and optimizer interfaces. Their usage and behavior are consistent with mainstream industry conventions, making them easy to adopt. The mint interfaces are currently experimental and perform better than ops under graph mode at the O0 level and under PyNative mode. The O2 level (graph sinking mode) and the CPU/GPU backends are not yet supported; support will be added gradually.
The module import method is as follows:
from mindspore import mint
For the APIs added or removed in mindspore.mint and the changes to their supported platforms compared with the previous version, please refer to mindspore.mint API Interface Change.
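As a quick, hedged illustration (the exact operator set depends on your MindSpore version; mint.ones and mint.add with its alpha argument are assumed here, mirroring their PyTorch counterparts), a mint functional interface is called like this:
import mindspore as ms
from mindspore import mint

x = mint.ones((2, 3), dtype=ms.float32)  # 2x3 tensor of ones
y = mint.ones((2, 3), dtype=ms.float32)
z = mint.add(x, y, alpha=2)              # z = x + 2 * y, PyTorch-style alpha scaling
print(z)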
Tensor
Creation Operations
Creates a sequence of numbers that begins at start and extends by increments of step up to but not including end.
Samples from the Bernoulli distribution, randomly setting the i-th element of the output to 0 or 1 according to the i-th probability value given in the input.
Counts the occurrences of each value in the input.
Returns a copy of the input tensor.
Returns a tensor with ones on the diagonal and zeros in the rest.
Sums the products of the input tensor elements along the specified dimensions according to the Einstein summation convention (einsum).
Creates a tensor with uninitialized data, whose shape, dtype and device are described by the arguments size, dtype and device respectively.
Returns an uninitialized Tensor with the same shape as the input.
Creates a Tensor of the specified shape and fills it with the specified value.
Returns a Tensor of the same shape as input, filled with fill_value.
Generates a one-dimensional tensor with steps elements, evenly distributed in the interval [start, end].
Creates a tensor filled with value ones.
Creates a tensor filled with 1, with the same shape as input; its data type is determined by the given dtype.
Returns a new tensor filled with integer numbers from the uniform distribution over an interval
Returns a new tensor filled with integer numbers from the uniform distribution over an interval
Returns a new tensor filled with numbers from the normal distribution over an interval
Returns a new tensor filled with numbers from the normal distribution over an interval
Generates a random permutation of integers from 0 to n-1.
Creates a tensor of the shape described by size and fills it with value 0 of type dtype.
Creates a tensor filled with 0, with the same size as input.
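A hedged sketch of the creation operations above, assuming the conventional names mint.arange, mint.full, mint.zeros_like and mint.linspace (which mirror their PyTorch counterparts):
import mindspore as ms
from mindspore import mint

a = mint.arange(0, 10, 2)       # sequence [0, 2, 4, 6, 8]
b = mint.full((2, 3), 7.0)      # 2x3 tensor filled with 7.0
c = mint.zeros_like(b)          # zeros with the same shape as b
d = mint.linspace(0, 1, 5)      # 5 evenly spaced points in [0, 1]
print(a, b, c, d, sep="\n")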
Indexing, Slicing, Joining, Mutating Operations
Connects input tensors along the given dimension.
Cuts the input Tensor into chunks sub-tensors along the specified axis.
Alias for
Counts the number of non-zero elements in the Tensor input along a given dimension dim.
Gathers data from a tensor by indices.
Accumulates the elements of alpha times source into the input by adding at the indices in the order given in index.
Generates a new Tensor that accesses the values of input along the specified dim dimension using the indices specified in index.
Returns a new 1-D tensor which indexes the input tensor according to the boolean mask.
Permutes the dimensions of the input tensor according to the input dims.
Reshapes the input tensor based on the given shape.
Updates the values in input with the values in src according to the specified index.
Adds all elements in src to input at the indices specified by index along the dimension specified by dim.
Splits the Tensor into chunks along the given dim.
Obtains a tensor of a specified length at a specified start position along a specified axis.
Returns the positions of all non-zero values.
Creates a new tensor by repeating the elements of the input tensor dims times.
Zeros the elements of the input tensor above the specified diagonal.
Slices the input tensor along the selected dimension at the given index.
Returns the Tensor after deleting the dimensions of size 1 in the specified dim.
Stacks a list of tensors along the specified dim.
Alias for
Interchanges two axes of a tensor.
Zeros the elements of the input tensor below the specified diagonal.
Unbinds a tensor dimension along the specified axis.
Returns the elements that are unique in each consecutive group of equivalent elements in the input tensor.
Adds an additional dimension to the input tensor at the given dimension.
Selects elements from input or other based on condition and returns a tensor.
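A hedged sketch of the joining and permuting operations above, assuming the conventional names mint.cat, mint.stack, mint.permute and mint.split:
from mindspore import mint

x = mint.ones((2, 3))
y = mint.zeros((2, 3))

cat_xy = mint.cat([x, y], dim=0)      # concatenate along dim 0 -> shape (4, 3)
stack_xy = mint.stack([x, y], dim=0)  # stack along a new dim -> shape (2, 2, 3)
xt = mint.permute(x, (1, 0))          # reorder dimensions -> shape (3, 2)
parts = mint.split(cat_xy, 2, dim=0)  # split back into two (2, 3) chunks
print(cat_xy.shape, stack_xy.shape, xt.shape, [p.shape for p in parts])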
Random Sampling
Returns a tensor sampled from the multinomial probability distribution located in the corresponding row of the input tensor.
Generates random numbers according to the standard Normal (or Gaussian) random number distribution.
Returns a new tensor filled with numbers from the uniform distribution over an interval
Returns a new tensor filled with numbers from the uniform distribution over an interval
Math Operations
Pointwise Operations
Computes the absolute value of a tensor element-wise.
Adds the scaled other value to the input Tensor.
Performs a matrix-vector product of mat and vec, and adds the input vector input to the final result.
Computes arccosine of input tensors element-wise.
Computes inverse hyperbolic cosine of the inputs element-wise.
Alias for
Alias for
Alias for
Alias for
Alias for
Alias for
Alias for
Computes arcsine of input tensors element-wise.
Computes inverse hyperbolic sine of the input element-wise.
Computes the trigonometric inverse tangent of the input element-wise.
Returns the arctangent of input/other element-wise.
Computes inverse hyperbolic tangent of the input element-wise.
Returns bitwise and of two tensors element-wise.
Returns bitwise or of two tensors element-wise.
Returns bitwise xor of two tensors element-wise.
Rounds a tensor up to the closest integer element-wise.
Clamps tensor values between the specified minimum value and maximum value.
Computes cosine of input element-wise.
Computes hyperbolic cosine of input element-wise.
Computes the cross product of two input tensors along the specified dimension.
Computes the n-th forward difference along the given dimension.
Divides each element of the input by the corresponding element of the other.
Alias for
Computes the Gauss error function of the input tensor element-wise.
Computes the complementary error function of the input tensor element-wise.
Computes the inverse error function of the input tensor element-wise.
Computes the exponential of the input tensor element-wise.
Calculates the base-2 exponent of the Tensor input element by element.
Computes the exponential of the input tensor, then minus 1, element-wise.
Alias for
Computes input to the power of exponent element-wise in double precision, and always returns a mindspore.float64 tensor.
Rounds a tensor down to the closest integer element-wise.
Computes the floating-point remainder of the division operation input/other.
Calculates the fractional part of each element in the input.
Performs a linear interpolation of two tensors input and end based on a float or tensor weight.
Computes the natural logarithm of the input tensor element-wise.
Computes the natural logarithm of (tensor + 1) element-wise.
Returns the logarithm to the base 2 of a tensor element-wise.
Returns the logarithm to the base 10 of a tensor element-wise.
Computes the logarithm of the sum of exponentiations of the inputs.
Computes the logarithm of the sum of exponentiations of the inputs in base 2.
Computes the "logical AND" of two tensors element-wise.
Computes the "logical NOT" of the input tensor element-wise.
Computes the "logical OR" of two tensors element-wise.
Computes the "logical XOR" of two tensors element-wise.
Multiplies the other value by the input Tensor.
Multiplies matrix input by vector vec.
Computes the sum of input over a given dimension, treating NaNs as zero.
Replaces the NaN, positive infinity and negative infinity values in input with the specified values in nan, posinf and neginf respectively.
Returns a tensor with negative values of the input tensor element-wise.
Alias for
Calculates the exponent power of each element in input.
Converts polar coordinates to Cartesian coordinates.
Expands the multidimensional Tensor into 1-D along the 0 axis direction.
Returns the reciprocal of a tensor element-wise.
Computes the remainder of input divided by other element-wise.
Rolls the elements of a tensor along a dimension.
Rounds elements of input to the nearest integer.
Computes the reciprocal of the square root of the input tensor element-wise.
Computes Sigmoid of input element-wise.
Returns an element-wise indication of the sign of a number.
Computes sine of the input tensor element-wise.
Computes the normalized sinc of input.
Computes hyperbolic sine of the input element-wise.
Alias for
Returns the square root of a tensor element-wise.
Returns the square of a tensor element-wise.
Subtracts the scaled other value from the self Tensor.
Transposes the input tensor.
Computes tangent of input element-wise.
Computes hyperbolic tangent of input element-wise.
Returns a tensor with the truncated integer values of the elements of the input tensor.
Computes the first input multiplied by the logarithm of the second input element-wise.
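A hedged sketch of a few pointwise operations, assuming the conventional names mint.abs, mint.clamp, mint.exp and mint.mul:
import numpy as np
import mindspore as ms
from mindspore import mint

x = ms.Tensor(np.array([-2.0, -0.5, 1.5, 3.0]), ms.float32)

a = mint.abs(x)                        # element-wise absolute value
b = mint.clamp(x, min=-1.0, max=1.0)   # clamp values into [-1, 1]
c = mint.exp(x)                        # element-wise exponential
d = mint.mul(a, c)                     # element-wise product of two tensors
print(a, b, c, d, sep="\n")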
Reduction Operations
Computes the maximum value of all elements along the specified dim dimension of the input, and retains the dimension based on the keepdim parameter.
Computes the minimum value of all elements along the specified dim dimension of the input, and retains the dimension based on the keepdim parameter.
Returns the indices of the maximum values of a tensor.
Returns the indices of the minimum values of a tensor across a dimension.
Sorts the input tensor along the given dimension in the specified order and returns the sorted indices.
Tests if all elements in input evaluate to True.
Tests if any element in input evaluates to True along the given axes.
Returns the cumulative product along the given dimension of the tensor.
Computes the histogram of a tensor.
Computes the logarithm of the sum of exponentiations of all elements along the specified dim dimension of the input (with numerical stabilization), and retains the dimension based on the keepdim parameter.
Returns the maximum value of the input tensor.
Reduces all dimensions of a tensor by averaging all elements.
Outputs the median along the specified dimension.
Returns the minimum value of the input tensor.
Returns the matrix norm or vector norm of a given tensor.
Multiplies all elements of input.
Calculates the sum of all elements in a Tensor.
Calculates the standard deviation over the dimensions specified by dim.
By default, returns the standard deviation and mean of each dimension in the Tensor.
Returns the unique elements of the input tensor.
Calculates the variance over the dimensions specified by dim.
By default, returns the variance and mean of each dimension in the Tensor.
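A hedged sketch of the reduction operations, assuming the conventional names mint.sum, mint.mean and mint.max (the latter returning a (values, indices) pair when dim is given, as in PyTorch):
import numpy as np
import mindspore as ms
from mindspore import mint

x = ms.Tensor(np.arange(6).reshape(2, 3), ms.float32)

total = mint.sum(x)                    # sum over all elements -> 15.0
row_mean = mint.mean(x, dim=1)         # mean of each row -> [1.0, 4.0]
col_max, col_idx = mint.max(x, dim=0)  # per-column maxima and their indices
print(total, row_mean, col_max, col_idx)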
Comparison Operations
Returns a new Tensor with boolean elements representing whether each element of input is "close" to the corresponding element of other.
Sorts the input tensor along the given dimension in the specified order and returns the sorted indices.
Computes the equivalence of the two inputs element-wise.
Computes the equivalence between two tensors.
Computes the value of
Computes the boolean value of
Computes the value of
Returns a boolean tensor where two tensors are element-wise equal within a tolerance.
Determines which elements are finite for each position.
Returns a boolean tensor indicating which elements are +/- infinity.
Determines which elements are -inf for each position.
Computes the value of
Computes the value of
Computes the value of
Alias for
Computes the maximum of the two input tensors element-wise.
Computes the minimum of the two input tensors element-wise.
Computes the non-equivalence of two inputs element-wise.
Alias for
Finds values and indices of the k largest or smallest entries along a given dimension.
Sorts the elements of the input tensor along the given dimension in the specified order.
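A hedged sketch of the comparison operations, assuming the conventional names mint.eq, mint.isclose and mint.topk:
import numpy as np
import mindspore as ms
from mindspore import mint

x = ms.Tensor(np.array([1.0, 4.0, 2.0, 8.0]), ms.float32)
y = ms.Tensor(np.array([1.0, 3.0, 2.0, 9.0]), ms.float32)

eq_mask = mint.eq(x, y)               # element-wise equality -> [True, False, True, False]
close = mint.isclose(x, y, atol=1.5)  # element-wise closeness within a tolerance
values, indices = mint.topk(x, 2)     # the two largest entries and their indices
print(eq_mask, close, values, indices)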
BLAS and LAPACK Operations
Applies batch matrix multiplication to batch1 and batch2, with a reduced add step, and adds input to the result.
Performs a matrix multiplication of the 2-D matrices mat1 and mat2.
The result is the sum of the input and a batch matrix-matrix product of matrices in batch1 and batch2.
Performs batch matrix-matrix multiplication of two three-dimensional tensors.
Computes the dot product of two 1-D tensors.
Computes the inverse of the input matrix.
Returns the matrix product of two tensors.
Generates coordinate matrices from given coordinate tensors.
Returns the matrix product of two arrays.
Returns the outer product of input and vec2.
Returns a new tensor that is the sum of the elements along the main diagonal (the trace) of the input.
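A hedged sketch of matrix multiplication, assuming the conventional names mint.matmul and mint.bmm:
from mindspore import mint

a = mint.ones((2, 3))
b = mint.ones((3, 4))
c = mint.matmul(a, b)        # (2, 3) @ (3, 4) -> (2, 4)

ba = mint.ones((8, 2, 3))
bb = mint.ones((8, 3, 4))
bc = mint.bmm(ba, bb)        # batched: (8, 2, 3) @ (8, 3, 4) -> (8, 2, 4)
print(c.shape, bc.shape)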
Other Operations
Broadcasts the input tensor to a given shape.
Computes the p-norm distance between each pair of row vectors of two input Tensors.
Returns a tuple (values, indices) where values is the cumulative maximum of the input Tensor along the dimension dim, and indices is the index location of each maximum value.
Returns a tuple (values, indices) where values is the cumulative minimum of the input Tensor along the dimension dim, and indices is the index location of each minimum value.
Computes the cumulative sum of the input Tensor along dim.
If input is a vector (1-D tensor), returns a 2-D square tensor with the elements of input as the diagonal.
Flattens a tensor along dimensions from start_dim to end_dim.
Reverses elements in a tensor along the given dims.
Repeats elements of a tensor along an axis, like
Returns the position indices where elements can be inserted into the input tensor to maintain its increasing order.
Zeros the elements of the input tensor above the specified diagonal.
Solves a system of equations with a square upper or lower triangular invertible matrix A and multiple right-hand sides b.
mindspore.mint.nn
Convolution Layers
2D convolution layer.
3D convolution layer.
Applies a 2D transposed convolution operator over an input image composed of several input planes.
Combines an array of sliding local blocks into a large containing tensor.
Extracts sliding local blocks from a batched input tensor.
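A hedged sketch of building a convolution layer, assuming mint.nn.Conv2d follows the usual in_channels/out_channels/kernel_size signature:
from mindspore import mint

# 3 input channels, 16 output channels, 3x3 kernel, padding of 1 to preserve spatial size
conv = mint.nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, stride=1, padding=1)

x = mint.ones((1, 3, 32, 32))   # NCHW input
y = conv(x)                     # -> (1, 16, 32, 32)
print(y.shape)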
Normalization Layers
Applies Batch Normalization over a 2D or 3D input as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
Applies Batch Normalization over a 4D input as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
Applies Batch Normalization over a 5D input as described in the paper Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift.
Applies Group Normalization over a mini-batch of inputs.
Applies Layer Normalization over a mini-batch of inputs.
Applies Sync Batch Normalization over an N-dimensional input.
Non-linear Activations (weighted sum, nonlinearity)
Exponential Linear Unit activation function.
Activation function GELU (Gaussian Error Linear Unit).
Computes GLU (Gated Linear Unit activation function) of the input tensor.
Applies the Hard Shrink activation function element-wise.
Applies the Hard Sigmoid activation function element-wise.
Applies the Hard Swish activation function element-wise.
Applies the logsigmoid activation element-wise.
Applies the Log Softmax function to the input tensor on the specified axis.
Computes MISH (A Self Regularized Non-Monotonic Neural Activation Function) of input tensors element-wise.
Applies the PReLU activation function element-wise.
Applies ReLU (Rectified Linear Unit activation function) element-wise.
Activation function ReLU6.
Activation function SELU (Scaled Exponential Linear Unit).
Calculates the SiLU activation function element-wise.
Applies the sigmoid activation function element-wise.
Applies the Softmax function to an n-dimensional input Tensor.
Applies the Softshrink function element-wise.
Applies the Tanh function element-wise, returning a new tensor with the hyperbolic tangent of the elements of input.
Embedding Layers
The value in input is used as the index, and the corresponding embedding vector is queried from weight.
Linear Layers
The linear (fully connected) layer.
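A hedged sketch of a small network built from mint.nn layers, assuming mint.nn.Linear and mint.nn.ReLU follow their usual signatures:
from mindspore import mint, nn

class MLP(nn.Cell):
    """A small multilayer perceptron built from mint.nn layers."""
    def __init__(self):
        super().__init__()
        self.fc1 = mint.nn.Linear(4, 8)
        self.act = mint.nn.ReLU()
        self.fc2 = mint.nn.Linear(8, 2)

    def construct(self, x):
        return self.fc2(self.act(self.fc1(x)))

net = MLP()
out = net(mint.ones((5, 4)))   # a batch of 5 samples with 4 features each
print(out.shape)               # (5, 2)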
Dropout Layers
Dropout layer for the input.
During training, randomly zeroes some channels of the input tensor with probability p from a Bernoulli distribution (for a 4-dimensional tensor with a shape of
Pooling Layers
Applies a 1D adaptive average pooling over an input signal composed of several input planes.
Applies a 2D adaptive average pooling over an input signal composed of several input planes.
This operator applies a 3D adaptive average pooling to an input signal composed of multiple input planes.
Applies a 1D adaptive max pooling over an input signal composed of several input planes.
Applies a 2D average pooling over an input Tensor which can be regarded as a composition of 2D input planes.
Applies a 3D average pooling over an input Tensor which can be regarded as a composition of 3D input planes.
Computes the inverse of MaxPool2d.
Padding Layers
Pads the last dimension of the input tensor using padding and value.
Pads the last 2 dimensions of the input tensor using padding and value.
Pads the last 3 dimensions of the input tensor using padding and value.
Pads the last dimension of the input tensor using the reflection of the input boundary.
Pads the last 2 dimensions of the input tensor using the reflection of the input boundary.
Pads the last 3 dimensions of the input tensor using the reflection of the input boundary.
Pads the last dimension of the input tensor using the replication of the input boundary.
Pads the last 2 dimensions of the input tensor using the replication of the input boundary.
Pads the last 3 dimensions of the input tensor using the replication of the input boundary.
Pads the last dimension of the input tensor with 0 using padding.
Pads the last 2 dimensions of the input tensor with 0 using padding.
Pads the last 3 dimensions of the input tensor with 0 using padding.
Loss Functions
Computes the binary cross entropy between the true labels and predicted labels.
Adds the sigmoid activation function to the input as logits, and uses these logits to compute the binary cross entropy between the logits and the target.
The cross entropy loss between input and target.
Computes the Kullback-Leibler divergence between the input and the target.
L1Loss is used to calculate the mean absolute error between the predicted value and the target value.
Calculates the mean squared error between the predicted value and the label value.
Gets the negative log likelihood loss between inputs and target.
Computes smooth L1 loss, a robust L1 loss.
Vision Layer
Rearranges elements in a tensor according to an upscaling factor.
For details, please refer to
Tools
A placeholder identity operator that returns the same as the input.
mindspore.mint.nn.functional
Convolution functions
Applies a 2D convolution over an input tensor.
Applies a 3D convolution over an input tensor.
Applies a 2D transposed convolution operator over an input image composed of several input planes, sometimes also called deconvolution (although it is not an actual deconvolution).
Combines an array of sliding local blocks into a large containing tensor.
Extracts sliding local blocks from a batched input tensor.
Pooling functions
Performs 1D adaptive average pooling on a multi-plane input signal.
Performs 2D adaptive average pooling on a multi-plane input signal.
Performs 3D adaptive average pooling on a multi-plane input signal.
Performs 1D adaptive max pooling on a multi-plane input signal.
Applies a 1D average pooling over an input Tensor which can be regarded as a composition of 1D input planes.
Applies a 2D average pooling over an input Tensor which can be regarded as a composition of 2D input planes.
Applies a 3D average pooling over an input Tensor which can be regarded as a composition of 3D input planes.
Performs a 2D max pooling on the input Tensor.
Computes the inverse of max_pool2d.
Non-linear activation functions
Batch Normalization for input data and updated parameters.
Exponential Linear Unit activation function.
Exponential Linear Unit activation function.
Gaussian Error Linear Units activation function.
Computes GLU (Gated Linear Unit activation function) of the input tensor.
Applies Group Normalization over a mini-batch of inputs.
Hard Shrink activation function.
Hard Sigmoid activation function.
Hard Swish activation function.
Applies Layer Normalization on the mini-batch input.
leaky_relu activation function.
Applies the Log Softmax function to the input tensor on the specified axis.
Applies the logsigmoid activation element-wise.
Computes MISH (A Self Regularized Non-Monotonic Neural Activation Function) of input tensors element-wise.
Parametric Rectified Linear Unit activation function.
Computes ReLU (Rectified Linear Unit activation function) of input tensors element-wise.
Computes ReLU (Rectified Linear Unit) upper bounded by 6 of input tensors element-wise.
Computes ReLU (Rectified Linear Unit activation function) of input tensors element-wise, in place.
Activation function SELU (Scaled Exponential Linear Unit).
Computes Sigmoid of input element-wise.
Computes Sigmoid Linear Unit of input element-wise, also known as the Swish function.
Applies the Softmax operation to the input tensor on the specified axis.
Applies the softplus function to input element-wise.
Soft Shrink activation function.
Computes hyperbolic tangent of input element-wise.
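A hedged sketch of the functional activations, assuming the conventional names mint.nn.functional.relu, gelu and softmax:
import numpy as np
import mindspore as ms
from mindspore import mint

x = ms.Tensor(np.array([[-1.0, 0.0, 2.0]]), ms.float32)

r = mint.nn.functional.relu(x)             # [[0., 0., 2.]]
g = mint.nn.functional.gelu(x)             # smooth GELU activation
p = mint.nn.functional.softmax(x, dim=-1)  # probabilities along the last axis
print(r, g, p, sep="\n")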
Normalization functions
Performs normalization of inputs over the specified dimension.
Linear functions
Applies the dense connected operation to the input.
Dropout functions
During training, randomly zeroes some of the elements of the input tensor with probability p from a Bernoulli distribution.
During training, randomly zeroes some channels of the input tensor with probability p from a Bernoulli distribution (for a 4-dimensional tensor with a shape of
Sparse functions
Retrieves the word embeddings in weight using the indices specified in input.
Computes a one-hot tensor.
Loss Functions
The cross entropy loss between input and target.
Computes the binary cross entropy (a measure of the difference between two probability distributions) between the predictive value input and the target value target.
Adds the sigmoid activation function to the input as logits, and uses these logits to compute the binary cross entropy between the logits and the target.
Computes the Kullback-Leibler divergence between the input and the target.
Calculates the mean absolute error between the input value and the target value.
Calculates the mean squared error between the predicted value and the label value.
Gets the negative log likelihood loss between input and target.
Computes smooth L1 loss, a robust L1 loss.
Vision functions
Samples the input Tensor to the given size or scale_factor using one of the interpolation algorithms.
Given an input and a flow-field grid, computes the output using input values and pixel locations from grid.
Pads the input tensor according to the pad.
Rearranges elements in a tensor according to an upscaling factor.
mindspore.mint.optim
Implements the Adaptive Moment Estimation (Adam) algorithm.
Implements the Adam Weight Decay algorithm.
Stochastic Gradient Descent optimizer.
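A hedged training-step sketch: the layer and loss names (mint.nn.Linear, mint.nn.ReLU, mint.nn.MSELoss), the mindspore.value_and_grad API, and the convention of calling the optimizer with the gradient tuple are assumptions based on current MindSpore usage, not taken from this page:
import mindspore as ms
from mindspore import mint, nn

net = nn.SequentialCell(mint.nn.Linear(4, 8), mint.nn.ReLU(), mint.nn.Linear(8, 1))
loss_fn = mint.nn.MSELoss()
optimizer = mint.optim.AdamW(net.trainable_params(), lr=1e-3, weight_decay=1e-2)

def forward_fn(x, y):
    return loss_fn(net(x), y)

# Differentiate the loss with respect to the trainable parameters only.
grad_fn = ms.value_and_grad(forward_fn, None, net.trainable_params())

def train_step(x, y):
    loss, grads = grad_fn(x, y)
    optimizer(grads)   # apply the AdamW update (assumed MindSpore-style optimizer call)
    return loss

print(train_step(mint.ones((16, 4)), mint.zeros((16, 1))))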
mindspore.mint.linalg
Inverses
Computes the inverse of the input matrix.
Returns the matrix norm of a given tensor on the specified dimensions.
Returns the matrix norm or vector norm of a given tensor.
Returns the vector norm of the given tensor on the specified dimensions.
Orthogonal decomposition of the input
mindspore.mint.special
Pointwise Operations
Computes the complementary error function of the input tensor element-wise.
Calculates the base-2 exponent of the Tensor input element by element.
Computes the exponential of the input tensor, then minus 1, element-wise.
Computes the natural logarithm of (tensor + 1) element-wise.
Applies the Log Softmax function to the input tensor on the specified axis.
Rounds a tensor element-wise to the nearest integer, with ties rounded half to even.
Computes the normalized sinc of input.
mindspore.mint.distributed
Gathers tensors from the specified communication group and returns the tensor list which is all-gathered.
Gathers tensors from the specified communication group and returns the tensor which is all-gathered.
Aggregates Python objects in a specified communication group.
Reduces tensors across all devices in such a way that all devices will get the same final result, and returns the tensor which is all-reduced.
Scatters and gathers a list of tensors to/from all ranks according to the input/output tensor lists.
Scatters and gathers the input with split sizes to/from all ranks, and returns the result in a single tensor.
Synchronizes all processes in the specified group.
Sends and receives a batch of tensors asynchronously.
Broadcasts the tensor to the whole group.
Broadcasts the entire group of input Python objects.
Destroys the user collective communication group.
Gathers tensors from the specified communication group.
Gathers Python objects from the whole group into a single process.
Gets the backend of communication process groups.
Returns the rank ID in the world group corresponding to the rank whose ID is group_rank in the user group.
Gets the rank ID in the specified user communication group corresponding to the rank ID in the world communication group.
Gets the ranks of the specific group and returns the process ranks in the communication group as a list.
Gets the rank ID for the current device in the specified collective communication group.
Gets the rank size of the specified collective communication group.
Initializes the collective communication library.
Receives tensors from src asynchronously.
Sends tensors to the specified dest_rank asynchronously.
Checks whether the distributed module is available.
Checks whether the default process group has been initialized.
Creates a new distributed group.
Object for batch_isend_irecv input, to store information of
Receives tensors from src.
Reduces tensors across the processes in the specified communication group, sends the result to the target dst (global rank), and returns the tensor which is sent to the target process.
Reduces and scatters tensors from the specified communication group and returns the tensor which is reduced and scattered.
Reduces and scatters tensors from the specified communication group and returns the tensor which is reduced and scattered.
Scatters tensors evenly across the processes in the specified communication group.
Scatters picklable objects in scatter_object_input_list to the whole group.
Sends tensors to the specified dest_rank.
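A hedged sketch of collective communication, assuming the conventional names init_process_group, get_rank, get_world_size and all_reduce (with all_reduce updating its input in place), and that the script is started by a distributed launcher such as msrun:
# Started by a distributed launcher, e.g.: msrun --worker_num=8 --local_worker_num=8 demo.py
import mindspore as ms
from mindspore import mint
from mindspore.mint import distributed as dist

dist.init_process_group()          # initialize the collective communication library
rank = dist.get_rank()
world_size = dist.get_world_size()

x = mint.ones((2, 2)) * rank       # each rank contributes a tensor filled with its rank id
dist.all_reduce(x)                 # sum across all ranks; x holds the result on every rank
print(f"rank {rank}/{world_size}:", x)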