mindspore

class mindspore.dtype

Create a data type object of MindSpore.

The actual path of dtype is /mindspore/common/dtype.py. Run the following command to import the package:

from mindspore import dtype as mstype
  • Numeric Type

    Currently, MindSpore supports the Int, Uint, and Float types. The following table lists the details.

    Definition                            Description
    mindspore.int8, mindspore.byte        8-bit integer
    mindspore.int16, mindspore.short      16-bit integer
    mindspore.int32, mindspore.intc       32-bit integer
    mindspore.int64, mindspore.intp       64-bit integer
    mindspore.uint8, mindspore.ubyte      unsigned 8-bit integer
    mindspore.uint16, mindspore.ushort    unsigned 16-bit integer
    mindspore.uint32, mindspore.uintc     unsigned 32-bit integer
    mindspore.uint64, mindspore.uintp     unsigned 64-bit integer
    mindspore.float16, mindspore.half     16-bit floating-point number
    mindspore.float32, mindspore.single   32-bit floating-point number
    mindspore.float64, mindspore.double   64-bit floating-point number

  • Other Type

    For other defined types, see the following table.

    Type           Description
    tensor         MindSpore's tensor type. Data format uses NCHW. For details, see tensor.
    bool_          Boolean True or False.
    int_           Integer scalar.
    uint           Unsigned integer scalar.
    float_         Floating-point scalar.
    number         Number, including int_, uint, float_ and bool_.
    list_          List constructed by tensor, such as List[T0,T1,...,Tn], where the element Ti can be of different types.
    tuple_         Tuple constructed by tensor, such as Tuple[T0,T1,...,Tn], where the element Ti can be of different types.
    function       Function. Returned in two ways: Func directly when the function is not None, or Func(args: List[T0,T1,...,Tn], retval: T) when the function is None.
    type_type      Type definition of type.
    type_none      No matching return type, corresponding to type(None) in Python.
    symbolic_key   The value of a variable, used as a key of the variable in env_type.
    env_type       Used to store the gradient of the free variable of a function, where the key is the symbolic_key of the free variable's node and the value is the gradient.

  • Tree Topology

    The relationships of the above types are as follows:

    ├─── number
    │    ├─── bool_
    │    ├─── int_
    │    │    ├─── int8, byte
    │    │    ├─── int16, short
    │    │    ├─── int32, intc
    │    │    └─── int64, intp
    │    ├─── uint
    │    │    ├─── uint8, ubyte
    │    │    ├─── uint16, ushort
    │    │    ├─── uint32, uintc
    │    │    └─── uint64, uintp
    │    └─── float_
    │         ├─── float16
    │         ├─── float32
    │         └─── float64
    ├─── tensor
    │    ├─── Array[Float32]
    │    └─── ...
    ├─── list_
    │    ├─── List[Int32,Float32]
    │    └─── ...
    ├─── tuple_
    │    ├─── Tuple[Int32,Float32]
    │    └─── ...
    ├─── function
    │    ├─── Func
    │    ├─── Func[(Int32, Float32), Int32]
    │    └─── ...
    ├─── type_type
    ├─── type_none
    ├─── symbolic_key
    └─── env_type
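
A brief usage sketch of these dtypes together with the Tensor class documented below (numpy is assumed to be available; the calls follow the dtype and Tensor APIs on this page):

>>> import numpy as np
>>> from mindspore import Tensor, dtype as mstype
>>>
>>> # create tensors with explicit MindSpore dtypes
>>> t_int = Tensor(np.ones((2, 2)), mstype.int32)
>>> t_half = Tensor(np.ones((2, 2)), mstype.float16)
>>> assert t_int.dtype == mstype.int32
>>> assert t_half.dtype == mstype.float16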
    
class mindspore.DatasetHelper(dataset, dataset_sink_mode=True, sink_size=-1, epoch_num=1)[source]

DatasetHelper is a class that processes the MindData dataset and provides information about the dataset.

Depending on the context, it adjusts the iteration of the dataset so that the same iteration for-loop can be used in different contexts.

Note

One iteration of DatasetHelper provides one epoch of data.

Parameters
  • dataset (Dataset) – The training dataset iterator.

  • dataset_sink_mode (bool) – If True, use GetNext to fetch the data; otherwise, feed the data from the host. Default: True.

  • sink_size (int) – Control the amount of data in each sink. If sink_size=-1, sink the complete dataset for each epoch. If sink_size>0, sink sink_size data for each epoch. Default: -1.

  • epoch_num (int) – Control the number of epochs of data to send. Default: 1.

Examples

>>> from mindspore import nn, DatasetHelper
>>>
>>> network = Net()
>>> net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
>>> network = nn.WithLossCell(network, net_loss)
>>> train_dataset = create_custom_dataset()
>>> dataset_helper = DatasetHelper(train_dataset, dataset_sink_mode=False)
>>> for next_element in dataset_helper:
...     outputs = network(*next_element)
continue_send()[source]

Continue to send data to the device at the beginning of an epoch.

get_data_info()[source]

Get the types and shapes of the current batch.

release()[source]

Free the resources used by the data sink.

sink_size()[source]

Get sink_size for each iteration.

stop_send()[source]

Stop sending data through the data sink.

types_shapes()[source]

Get the types and shapes from dataset on the current configuration.
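
A hedged sketch combining the methods above in sink mode (create_custom_dataset is a placeholder, as in the other examples on this page):

>>> from mindspore import DatasetHelper
>>>
>>> train_dataset = create_custom_dataset()
>>> # sink the complete dataset each epoch (sink_size=-1)
>>> dataset_helper = DatasetHelper(train_dataset, dataset_sink_mode=True, sink_size=-1, epoch_num=1)
>>> types, shapes = dataset_helper.types_shapes()   # types and shapes under the current configuration
>>> steps = dataset_helper.sink_size()              # amount of data sunk in each iteration
>>> dataset_helper.release()                        # free the data-sink resources when done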

class mindspore.DynamicLossScaleManager(init_loss_scale=2 ** 24, scale_factor=2, scale_window=2000)[source]

Dynamic loss-scale manager.

Parameters
  • init_loss_scale (float) – Initialize loss scale. Default: 2**24.

  • scale_factor (int) – Coefficient of increase and decrease. Default: 2.

  • scale_window (int) – Maximum number of consecutive normal steps without overflow before the loss scale is increased. Default: 2000.

Examples

>>> from mindspore import Model, nn
>>> from mindspore.train.loss_scale_manager import DynamicLossScaleManager
>>>
>>> net = Net()
>>> loss_scale_manager = DynamicLossScaleManager()
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_scale_manager=loss_scale_manager, optimizer=optim)
get_drop_overflow_update()[source]

Get the flag indicating whether to drop the optimizer update when an overflow occurs.

get_loss_scale()[source]

Get loss scale value.

get_update_cell()[source]

Return the cell used by TrainOneStepWithLossScaleCell.

update_loss_scale(overflow)[source]

Update loss scale value.

Parameters

overflow (bool) – Whether an overflow occurs.
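
A hedged sketch of how the scale reacts to update_loss_scale (normally these calls are driven by the cell returned from get_update_cell rather than issued by hand):

>>> from mindspore.train.loss_scale_manager import DynamicLossScaleManager
>>>
>>> manager = DynamicLossScaleManager(init_loss_scale=2 ** 24, scale_factor=2, scale_window=2000)
>>> scale_before = manager.get_loss_scale()
>>> manager.update_loss_scale(True)    # overflow occurred: the loss scale is decreased
>>> assert manager.get_loss_scale() < scale_before
>>> manager.update_loss_scale(False)   # normal step: counts toward the next increase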

class mindspore.FixedLossScaleManager(loss_scale=128.0, drop_overflow_update=True)[source]

Fixed loss-scale manager.

Parameters
  • loss_scale (float) – Loss scale. Note that if drop_overflow_update is set to False, the value of loss_scale in the optimizer you use must be set to the same value as here. Default: 128.0.

  • drop_overflow_update (bool) – Whether to drop the optimizer update when there is an overflow. If True, the optimizer will not be executed when an overflow occurs. Default: True.

Examples

>>> from mindspore import Model, nn, FixedLossScaleManager
>>>
>>> net = Net()
>>> #1) Drop the parameter update if there is an overflow
>>> loss_scale_manager = FixedLossScaleManager()
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_scale_manager=loss_scale_manager, optimizer=optim)
>>>
>>> #2) Execute parameter update even if overflow occurs
>>> loss_scale = 1024
>>> loss_scale_manager = FixedLossScaleManager(loss_scale, False)
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9, loss_scale=loss_scale)
>>> model = Model(net, loss_scale_manager=loss_scale_manager, optimizer=optim)
get_drop_overflow_update()[source]

Get the flag indicating whether to drop the optimizer update when an overflow occurs.

get_loss_scale()[source]

Get loss scale value.

get_update_cell()[source]

Return the cell used by TrainOneStepWithLossScaleCell.

update_loss_scale(overflow)[source]

Update loss scale value.

Parameters

overflow (bool) – Whether it overflows.

class mindspore.LossScaleManager[source]

Loss scale manager abstract class.

get_loss_scale()[source]

Get loss scale value.

get_update_cell()[source]

Get the loss scaling update logic cell.

update_loss_scale(overflow)[source]

Update loss scale value.

Parameters

overflow (bool) – Whether it overflows.
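
A minimal sketch of a custom subclass implementing the three methods above; ConstantLossScaleManager is hypothetical and not part of MindSpore:

>>> from mindspore import LossScaleManager
>>>
>>> class ConstantLossScaleManager(LossScaleManager):
...     """Hypothetical manager that always reports the same scale."""
...     def __init__(self, loss_scale=128.0):
...         self._loss_scale = loss_scale
...     def get_loss_scale(self):
...         return self._loss_scale
...     def get_update_cell(self):
...         return None          # no in-graph update logic
...     def update_loss_scale(self, overflow):
...         pass                 # fixed scale, nothing to update
...
>>> manager = ConstantLossScaleManager(64.0)
>>> assert manager.get_loss_scale() == 64.0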

class mindspore.Model(network, loss_fn=None, optimizer=None, metrics=None, eval_network=None, eval_indexes=None, amp_level='O0', **kwargs)[source]

High-Level API for Training or Testing.

Model groups layers into an object with training and inference features.

Parameters
  • network (Cell) – A training or testing network.

  • loss_fn (Cell) – Objective function, if loss_fn is None, the network should contain the logic of loss and grads calculation, and the logic of parallel if needed. Default: None.

  • optimizer (Cell) – Optimizer for updating the weights. Default: None.

  • metrics (Union[dict, set]) – A Dictionary or a set of metrics to be evaluated by the model during training and testing. eg: {‘accuracy’, ‘recall’}. Default: None.

  • eval_network (Cell) – Network for evaluation. If not defined, network and loss_fn would be wrapped as eval_network. Default: None.

  • eval_indexes (list) – When defining the eval_network, if eval_indexes is None, all outputs of the eval_network would be passed to metrics, otherwise eval_indexes must contain three elements, including the positions of loss value, predicted value and label. The loss value would be passed to the Loss metric, the predicted value and label would be passed to other metric. Default: None.

  • amp_level (str) –

    Option for argument level in mindspore.amp.build_train_network, level for mixed precision training. Supports [“O0”, “O2”, “O3”, “auto”]. Default: “O0”.

    • O0: Do not change.

    • O2: Cast network to float16, keep batchnorm run in float32, using dynamic loss scale.

    • O3: Cast network to float16, with additional property ‘keep_batchnorm_fp32=False’.

    • auto: Set the level to the recommended level for the device: O2 on GPU, O3 on Ascend. The recommended level is chosen from export experience and cannot always generalize, so users should specify the level explicitly for special networks.

    O2 is recommended on GPU, O3 is recommended on Ascend.

  • loss_scale_manager (Union[None, LossScaleManager]) – If None, the loss is not scaled; otherwise, the loss is scaled by LossScaleManager and optimizer must not be None. It is a keyword argument, e.g. use loss_scale_manager=None to set the value.

  • keep_batchnorm_fp32 (bool) – Keep BatchNorm running in float32. If set, it overrides the level setting. Default: True.

Examples

>>> from mindspore import Model, nn
>>>
>>> class Net(nn.Cell):
...     def __init__(self, num_class=10, num_channel=1):
...         super(Net, self).__init__()
...         self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')
...         self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')
...         self.fc1 = nn.Dense(16*5*5, 120, weight_init='ones')
...         self.fc2 = nn.Dense(120, 84, weight_init='ones')
...         self.fc3 = nn.Dense(84, num_class, weight_init='ones')
...         self.relu = nn.ReLU()
...         self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)
...         self.flatten = nn.Flatten()
...
...     def construct(self, x):
...         x = self.max_pool2d(self.relu(self.conv1(x)))
...         x = self.max_pool2d(self.relu(self.conv2(x)))
...         x = self.flatten(x)
...         x = self.relu(self.fc1(x))
...         x = self.relu(self.fc2(x))
...         x = self.fc3(x)
...         return x
>>>
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None)
>>> # For details about how to build the dataset, please refer to the tutorial
>>> # document on the official website.
>>> dataset = create_custom_dataset()
>>> model.train(2, dataset)
eval(valid_dataset, callbacks=None, dataset_sink_mode=True)[source]

Evaluation API where the iteration is controlled by python front-end.

When the context is set to PyNative mode or the device target is CPU, evaluation is performed in dataset non-sink mode.

Note

If dataset_sink_mode is True, data will be sent to the device. If the device is Ascend, data features will be transferred one by one; the limit for each data transfer is 256M.

Parameters
  • valid_dataset (Dataset) – Dataset to evaluate the model.

  • callbacks (Optional[list(Callback)]) – List of callback objects which should be executed while evaluating. Default: None.

  • dataset_sink_mode (bool) – Determines whether to pass the data through dataset channel. Default: True.

Returns

Dict, the loss value and metric values of the model in test mode.

Examples

>>> from mindspore import Model, nn
>>>
>>> # For details about how to build the dataset, please refer to the tutorial
>>> # document on the official website.
>>> dataset = create_custom_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> model = Model(net, loss_fn=loss, optimizer=None, metrics={'acc'})
>>> acc = model.eval(dataset, dataset_sink_mode=False)
property eval_network

Get the model’s eval_network.

infer_predict_layout(*predict_data)[source]

Generate parameter layout for the predict network in auto or semi auto parallel mode.

Data could be a single tensor or multiple tensors.

Note

Batch data should be put together in one tensor.

Parameters

predict_data (Tensor) – One tensor or multiple tensors of predict data.

Returns

Dict, parameter layout dictionary used for loading the distributed checkpoint.

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Model, context, Tensor
>>> from mindspore.context import ParallelMode
>>>
>>> context.set_context(mode=context.GRAPH_MODE)
>>> context.set_auto_parallel_context(full_batch=True, parallel_mode=ParallelMode.SEMI_AUTO_PARALLEL)
>>> input_data = Tensor(np.random.randint(0, 255, [1, 3, 224, 224]), ms.float32)
>>> model = Model(Net())
>>> model.infer_predict_layout(input_data)
predict(*predict_data)[source]

Generate output predictions for the input samples.

Data can be a single tensor, a list of tensors, or a tuple of tensors.

Note

Batch data should be put together in one tensor.

Parameters

predict_data (Tensor) – The predict data, which can be a bool, int, float, str, None, or tensor, or a tuple, list, or dict that stores these types.

Returns

Tensor, array(s) of predictions.

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Model, Tensor
>>>
>>> input_data = Tensor(np.random.randint(0, 255, [1, 1, 32, 32]), ms.float32)
>>> model = Model(Net())
>>> result = model.predict(input_data)
property predict_network

Get the model’s predict_network.

train(epoch, train_dataset, callbacks=None, dataset_sink_mode=True, sink_size=-1)[source]

Training API where the iteration is controlled by python front-end.

When the context is set to PyNative mode or the device target is CPU, training is performed in dataset non-sink mode.

Note

If dataset_sink_mode is True, data will be sent to the device. If the device is Ascend, data features will be transferred one by one; the limit for each data transfer is 256M. If sink_size > 0, each epoch may traverse the dataset repeatedly until sink_size elements have been fetched; the next epoch continues the traversal from the end position of the previous one.

Parameters
  • epoch (int) – Generally, the total number of iterations over the data per epoch. When dataset_sink_mode is set to True and sink_size > 0, each epoch sinks sink_size steps of data instead of the total number of iterations.

  • train_dataset (Dataset) – A training dataset iterator. If there is no loss_fn, a tuple with multiple data (data1, data2, data3, …) should be returned and passed to the network. Otherwise, a tuple (data, label) should be returned. The data and label would be passed to the network and loss function respectively.

  • callbacks (Optional[list[Callback], Callback]) – List of callback objects or callback object, which should be executed while training. Default: None.

  • dataset_sink_mode (bool) – Determines whether to pass the data through the dataset channel. When the context is PyNative mode or the device target is CPU, training is performed in dataset non-sink mode. Default: True.

  • sink_size (int) – Control the amount of data in each sink. If sink_size = -1, sink the complete dataset for each epoch. If sink_size > 0, sink sink_size data for each epoch. If dataset_sink_mode is False, sink_size is ignored. Default: -1.

Examples

>>> from mindspore import Model, nn
>>> from mindspore.train.loss_scale_manager import FixedLossScaleManager
>>>
>>> # For details about how to build the dataset, please refer to the tutorial
>>> # document on the official website.
>>> dataset = create_custom_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits()
>>> loss_scale_manager = FixedLossScaleManager()
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None, loss_scale_manager=loss_scale_manager)
>>> model.train(2, dataset)
property train_network

Get the model’s train_network.

class mindspore.Parameter(default_input, *args, **kwargs)[source]

Parameter is the type of the variables in Cell models.

After initialization, a Parameter is a subtype of Tensor.

In the auto-parallel modes "semi_auto_parallel" and "auto_parallel", if a Parameter is initialized with a Tensor, the type of the Parameter will be Tensor. A Tensor saves the shape and type information of a tensor with no memory usage. The shape can be changed while compiling for auto-parallel. Calling init_data will return a Tensor Parameter with initialized data.

Note

Each parameter of a Cell is represented by the Parameter class. A Parameter must belong to a Cell. If an operator in the network requires some of its inputs to be Parameters, those input Parameters are not allowed to be cast. It is recommended to use the default value of name when initializing a parameter as an attribute of a cell; otherwise, the parameter name may be different from what is expected.

Parameters
  • default_input (Union[Tensor, int, float, numpy.ndarray, list]) – Parameter data used to initialize the Parameter.

  • name (str) – Name of the child parameter. Default: None.

  • requires_grad (bool) – True if the parameter requires gradient. Default: True.

  • layerwise_parallel (bool) – When layerwise_parallel is true in data parallel mode, broadcast and gradients communication would not be applied to parameters. Default: False.

  • parallel_optimizer (bool) – It is used to filter the weight shard operation in semi auto or auto parallel mode. It works only when enable parallel optimizer in mindspore.context.set_auto_parallel_context(). Default: True.

Examples

>>> from mindspore import Parameter, Tensor
>>> from mindspore.common import initializer as init
>>> from mindspore.ops import operations as P
>>> from mindspore.nn import Cell
>>> import mindspore
>>> import numpy as np
>>> from mindspore import context
>>>
>>> class Net(Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.matmul = P.MatMul()
...         self.weight = Parameter(Tensor(np.ones((1, 2)), mindspore.float32), name="w", requires_grad=True)
...
...     def construct(self, x):
...         out = self.matmul(self.weight, x)
...         return out
>>> net = Net()
>>> x = Tensor(np.ones((2, 1)), mindspore.float32)
>>> print(net(x))
[[2.]]
>>> _ = net.weight.set_data(Tensor(np.zeros((1, 2)), mindspore.float32))
>>> print(net(x))
[[0.]]
property cache_enable

Return whether cache is enabled for the parameter.

property cache_shape

Return the cache shape corresponding to the parameter if cache is used.

clone(init='same')[source]

Clone the parameter.

Parameters

init (Union[Tensor, str, numbers.Number]) – Initialize the shape of the parameter. Default: ‘same’.

Returns

Parameter, a new parameter.

property comm_fusion

Get the fusion type for communication operators corresponding to this parameter.

init_data(layout=None, set_sliced=False)[source]

Initialize the parameter data.

Parameters
  • layout (Union[None, list(list(int))]) –

    Parameter slice layout [dev_mat, tensor_map, slice_shape]. Default: None.

    • dev_mat (list(int)): Device matrix.

    • tensor_map (list(int)): Tensor map.

    • slice_shape (list(int)): Shape of slice.

  • set_sliced (bool) – True if the parameter is set sliced after initializing the data. Default: False.

Raises

RuntimeError – If the data is from an Initializer and the parallel mode has changed after the Initializer was created.

Returns

Parameter, the Parameter after initializing data. If current Parameter was already initialized before, returns the same initialized Parameter.

property inited_param

Get the new parameter after init_data is called.

Default is None. If self is a Parameter without data, the initialized Parameter with data will be recorded here after init_data is called.

property is_init

Get the initialization status of the parameter.

In the GE backend, the Parameter needs an "init graph" to sync the data from host to device. This flag indicates whether the data has been synced to the device.

This flag only works in GE and is set to False on other backends.

property name

Get the name of the parameter.

property parallel_optimizer

Return whether the parameter requires weight shard for parallel optimizer.

property requires_grad

Return whether the parameter requires gradient.

set_data(data, slice_shape=False)[source]

Set the data of the current Parameter.

Parameters
  • data (Union[Tensor, int, float]) – new data.

  • slice_shape (bool) – If True, the data is a slice of the parameter and the shape is not checked for consistency. Default: False.

Returns

Parameter, the parameter after set data.

set_param_ps(init_in_server=False)[source]

Set whether the trainable parameter is updated by parameter server and whether the trainable parameter is initialized on server.

Note

It only works when a running task is in the parameter server mode.

Parameters

init_in_server (bool) – Whether trainable parameter updated by parameter server is initialized on server. Default: False.

property sliced

Get slice status of the parameter.

property unique

Whether the parameter is already unique or not.

class mindspore.ParameterTuple(iterable)[source]

Class for storing tuple of parameters.

Note

It is used to store the parameters of the network into the parameter tuple collection.

clone(prefix, init='same')[source]

Clone the parameter.

Parameters
  • prefix (str) – Namespace of parameter.

  • init (str) – Initialize the shape of the parameter. Default: ‘same’.

Returns

Tuple, the new Parameter tuple.

class mindspore.RowTensor(indices, values, dense_shape)[source]

A sparse representation of a set of tensor slices at given indices.

A RowTensor is typically used to represent a subset of a larger tensor dense of shape [L0, D1, .., DN], where L0 >> D0.

The values in indices are the indices in the first dimension of the slices that have been extracted from the larger tensor.

The dense tensor dense represented by an RowTensor slices has dense[slices.indices[i], :, :, :, …] = slices.values[i, :, :, :, …].

RowTensor can only be used in the Cell’s construct method.

It is not supported in pynative mode at the moment.

Parameters
  • indices (Tensor) – A 1-D integer Tensor of shape [D0].

  • values (Tensor) – A Tensor of any dtype of shape [D0, D1, …, Dn].

  • dense_shape (tuple(int)) – An integer tuple which contains the shape of the corresponding dense tensor.

Returns

RowTensor, composed of indices, values, and dense_shape.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, RowTensor
>>> class Net(nn.Cell):
...     def __init__(self, dense_shape):
...         super(Net, self).__init__()
...         self.dense_shape = dense_shape
...     def construct(self, indices, values):
...         x = RowTensor(indices, values, self.dense_shape)
...         return x.values, x.indices, x.dense_shape
>>>
>>> indices = Tensor([0])
>>> values = Tensor([[1, 2]], dtype=ms.float32)
>>> out = Net((3, 2))(indices, values)
>>> print(out[0])
[[1. 2.]]
>>> print(out[1])
[0]
>>> print(out[2])
(3, 2)
class mindspore.SparseTensor(indices, values, dense_shape)[source]

A sparse representation of a set of nonzero elements from a tensor at given indices.

SparseTensor can only be used in the Cell’s construct method.

PyNative mode is not supported at the moment.

For a tensor dense, its SparseTensor(indices, values, dense_shape) has dense[indices[i]] = values[i].

Parameters
  • indices (Tensor) – A 2-D integer Tensor of shape [N, ndims], where N and ndims are the number of values and number of dimensions in the SparseTensor, respectively.

  • values (Tensor) – A 1-D tensor of any type and shape [N], which supplies the values for each element in indices.

  • dense_shape (tuple(int)) – An integer tuple of size ndims, which specifies the dense_shape of the sparse tensor.

Returns

SparseTensor, composed of indices, values, and dense_shape.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, SparseTensor
>>> class Net(nn.Cell):
...     def __init__(self, dense_shape):
...         super(Net, self).__init__()
...         self.dense_shape = dense_shape
...     def construct(self, indices, values):
...         x = SparseTensor(indices, values, self.dense_shape)
...         return x.values, x.indices, x.dense_shape
>>>
>>> indices = Tensor([[0, 1], [1, 2]])
>>> values = Tensor([1, 2], dtype=ms.float32)
>>> out = Net((3, 4))(indices, values)
>>> print(out[0])
[1. 2.]
>>> print(out[1])
[[0 1]
 [1 2]]
>>> print(out[2])
(3, 4)
class mindspore.Tensor(input_data=None, dtype=None, shape=None, init=None)[source]

Tensor is used for data storage.

Tensor inherits from the tensor object in C++. Some functions are implemented in C++ and some functions are implemented in Python.

Parameters
  • input_data (Tensor, float, int, bool, tuple, list, numpy.ndarray) – Input data of the tensor.

  • dtype (mindspore.dtype) – Should be None, or a bool or numeric type defined in mindspore.dtype. The argument is used to define the data type of the output tensor. If it is None, the data type of the output tensor will be the same as the input_data. Default: None.

  • shape (Union[tuple, list, int]) – A list of integers, a tuple of integers or an integer as the shape of output. If input_data is available, shape doesn’t need to be set. Default: None.

  • init (Initializer) – The information of init data. 'init' is used for delayed initialization in parallel mode. It is usually not recommended to use 'init' to initialize parameters in other conditions. If 'init' is used to initialize parameters, the init_data API needs to be called to convert the Tensor to the actual data.

Outputs:

Tensor, with the same shape as input_data.

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.common.tensor import Tensor
>>> from mindspore.common.initializer import One
>>> # initialize a tensor with input data
>>> t1 = Tensor(np.zeros([1, 2, 3]), ms.float32)
>>> assert isinstance(t1, Tensor)
>>> assert t1.shape == (1, 2, 3)
>>> assert t1.dtype == ms.float32
>>>
>>> # initialize a tensor with a float scalar
>>> t2 = Tensor(0.1)
>>> assert isinstance(t2, Tensor)
>>> assert t2.dtype == ms.float64
...
>>> # initialize a tensor with init
>>> t3 = Tensor(shape = (1, 3), dtype=ms.float32, init=One())
>>> assert isinstance(t3, Tensor)
>>> assert t3.shape == (1, 3)
>>> assert t3.dtype == ms.float32
property T

Return the transposed tensor.

abs()[source]

Return the absolute value element-wise.

Returns

Tensor, has the same data type as x.

all(axis=(), keep_dims=False)[source]

Check whether all array elements along a given axis evaluate to True.

Parameters
  • axis (Union[None, int, tuple(int)]) – Dimensions of reduction; when axis is None or an empty tuple, reduce all dimensions. Default: ().

  • keep_dims (bool) – Whether to keep the reduced dimensions. Default: False.

Returns

Tensor, has the same data type as x.

any(axis=(), keep_dims=False)[source]

Check whether any array element along a given axis evaluates to True.

Parameters
  • axis (Union[None, int, tuple(int)]) – Dimensions of reduction; when axis is None or an empty tuple, reduce all dimensions. Default: ().

  • keep_dims (bool) – Whether to keep the reduced dimensions. Default: False.

Returns

Tensor, has the same data type as x.
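
A brief sketch for all and any, assuming a boolean tensor as input:

>>> import numpy as np
>>> from mindspore import Tensor
>>>
>>> x = Tensor(np.array([[True, False], [True, True]]))
>>> out_all = x.all()          # reduce over all dimensions; expected False
>>> out_any = x.any(axis=1)    # reduce along axis 1 only; expected [True, True]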

asnumpy()[source]

Convert tensor to numpy array.

astype(dtype, copy=True)[source]

Return a copy of the tensor, cast to a specified type.

Parameters
  • dtype (Union[mindspore.dtype, str]) – Designated tensor dtype, which can be given as mindspore.dtype.float32 or as the string "float32".

  • copy (bool, optional) – By default, astype always returns a newly allocated tensor. If this is set to false, the input tensor is returned instead of a copy if possible. Default: True.

Returns

Tensor, with the designated dtype.
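
A brief sketch for astype, covering both the mindspore.dtype form and the string form:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor
>>>
>>> x = Tensor(np.ones((2, 3)), ms.float32)
>>> y = x.astype("int32")        # dtype given as a string
>>> z = x.astype(ms.float16)     # dtype given as mindspore.dtype
>>> assert y.dtype == ms.int32
>>> assert z.dtype == ms.float16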

property dtype

Return the dtype of the tensor (mindspore.dtype).

expand_as(x)[source]

Expand the dimensions of this tensor to match the dimensions of the input tensor x.

Parameters

x (Tensor) – The input tensor. The shape of input tensor must obey the broadcasting rule.

Returns

Tensor, has the same dimension as input tensor.

flatten(order='C')[source]

Return a copy of the tensor collapsed into one dimension.

Parameters

order (str, optional) – Can choose between ‘C’ and ‘F’. ‘C’ means to flatten in row-major (C-style) order. ‘F’ means to flatten in column-major (Fortran-style) order. Only ‘C’ and ‘F’ are supported. Default: ‘C’.

Returns

Tensor, has the same data type as input.

flush_from_cache()[source]

Flush cache data to the host if cache is enabled for the tensor.

static from_numpy(array)[source]

Convert a numpy array to a Tensor without copying data.

property has_init

Whether the tensor has init data (delayed initialization).

init_data(slice_index=None, shape=None, opt_shard_group=None)[source]

Get the tensor format data of this Tensor. The init_data function can be called once for the same tensor.

Parameters
  • slice_index (int) – Slice index of the parameter's slices. It is used when initializing a slice of a parameter; it guarantees that devices using the same slice can generate the same tensor.

  • shape (list(int)) – Shape of the slice. It is used when initializing a slice of the parameter.

  • opt_shard_group (str) – Optimizer shard group which is used in auto or semi auto parallel mode to get one shard of a parameter’s slice.

property itemsize

Return the length of one tensor element in bytes.

mean(axis=(), keep_dims=False)[source]

Reduce a dimension of a tensor by averaging all elements in the dimension.

Parameters
  • axis (Union[None, int, tuple(int), list(int)]) – Dimensions of reduction, when axis is None or empty tuple, reduce all dimensions. Default: ().

  • keep_dims (bool) – Whether to keep the reduced dimensions. Default: False.

Returns

Tensor, has the same data type as x.
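
A brief sketch for mean with and without an axis:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor
>>>
>>> x = Tensor(np.arange(6).reshape(2, 3), ms.float32)
>>> m_all = x.mean()                          # mean of all elements; expected 2.5
>>> m_axis = x.mean(axis=1, keep_dims=True)   # row means, shape (2, 1)
>>> assert m_axis.shape == (2, 1)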

property nbytes

Return the total number of bytes taken by the tensor.

property ndim

Return the number of tensor dimensions.

ravel()[source]

Return a contiguous flattened tensor.

Returns

Tensor, a 1-D tensor, containing the same elements of the input.

reshape(*shape)[source]

Give a new shape to a tensor without changing its data.

Parameters

shape (Union[int, tuple(int), list(int)]) – The new shape should be compatible with the original shape. If an integer, then the result will be a 1-D array of that length. One shape dimension can be -1. In this case, the value is inferred from the length of the array and remaining dimensions.

Returns

Tensor, with new specified shape.
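
A brief sketch for reshape, including the -1 (inferred dimension) form:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor
>>>
>>> x = Tensor(np.arange(12), ms.float32)
>>> y = x.reshape(3, 4)        # separate integers
>>> z = x.reshape((2, -1))     # tuple form; -1 is inferred as 6
>>> assert y.shape == (3, 4)
>>> assert z.shape == (2, 6)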

property shape

Returns the shape of the tensor as a tuple.

property size

Returns the total number of elements in tensor.

squeeze(axis=None)[source]

Remove single-dimensional entries from the shape of a tensor.

Parameters

axis (Union[None, int, list(int), tuple(int)], optional) – Default is None.

Returns

Tensor, with all or a subset of the dimensions of length 1 removed.

property strides

Return the tuple of bytes to step in each dimension when traversing a tensor.

swapaxes(axis1, axis2)[source]

Interchange two axes of a tensor.

Parameters
  • axis1 (int) – First axis.

  • axis2 (int) – Second axis.

Returns

Transposed tensor, has the same data type as the input.

to_tensor(slice_index=None, shape=None, opt_shard_group=None)[source]

Return init_data().

transpose(*axes)[source]

Return a view of the tensor with axes transposed.

For a 1-D tensor this has no effect, as a transposed vector is simply the same vector. For a 2-D tensor, this is a standard matrix transpose. For a n-D tensor, if axes are given, their order indicates how the axes are permuted. If axes are not provided and tensor.shape = (i[0], i[1],…i[n-2], i[n-1]), then tensor.transpose().shape = (i[n-1], i[n-2], … i[1], i[0]).

Parameters

axes (Union[None, tuple(int), list(int), int], optional) – If axes is None or blank, tensor.transpose() will reverse the order of the axes. If axes is tuple(int) or list(int), tensor.transpose() will transpose the tensor to the new axes order. If axes is int, this form is simply intended as a convenience alternative to the tuple/list form.

Returns

Tensor, has the same dimension as input tensor, with axes suitably permuted.
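
A brief sketch for transpose, with and without an explicit permutation:

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor
>>>
>>> x = Tensor(np.ones((2, 3, 4)), ms.float32)
>>> y = x.transpose()           # reverse the axes: shape (4, 3, 2)
>>> z = x.transpose(0, 2, 1)    # explicit permutation: shape (2, 4, 3)
>>> assert y.shape == (4, 3, 2)
>>> assert z.shape == (2, 4, 3)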

view(*shape)[source]

Reshape the tensor according to the input shape.

Parameters

shape (Union[tuple(int), int]) – Dimension of the output tensor.

Returns

Tensor, has the same dimension as the input shape.

property virtual_flag

Whether the tensor is marked as virtual.

mindspore.build_searched_strategy(strategy_filename)[source]

Build strategy of every parameter in network.

Parameters

strategy_filename (str) – Name of strategy file.

Returns

Dictionary, whose key is parameter name and value is slice strategy of this parameter.

Raises
mindspore.build_train_network(network, optimizer, loss_fn=None, level='O0', **kwargs)[source]

Build the mixed precision training cell automatically.

Parameters
  • network (Cell) – Definition of the network.

  • loss_fn (Union[None, Cell]) – Definition of the loss_fn. If None, the network should have the loss inside. Default: None.

  • optimizer (Optimizer) – Optimizer to update the Parameter.

  • level (str) –

    Supports [“O0”, “O2”, “O3”, “auto”]. Default: “O0”.

    • O0: Do not change.

    • O2: Cast network to float16, keep batchnorm and loss_fn (if set) run in float32, using dynamic loss scale.

    • O3: Cast network to float16, with additional property ‘keep_batchnorm_fp32=False’.

    • auto: Set the level to the recommended level for the device: O2 on GPU, O3 on Ascend. The recommended level is chosen from export experience and cannot always generalize, so users should specify the level explicitly for special networks.

    O2 is recommended on GPU, O3 is recommended on Ascend.

  • cast_model_type (mindspore.dtype) – Supports mstype.float16 or mstype.float32. If set to mstype.float16, use float16 mode to train. If set, overwrite the level setting.

  • keep_batchnorm_fp32 (bool) – Keep BatchNorm running in float32. If set, it overrides the level setting. It takes effect only when cast_model_type is float16.

  • loss_scale_manager (Union[None, LossScaleManager]) – If None, the loss is not scaled; otherwise, the loss is scaled by LossScaleManager. If set, it overrides the level setting.
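
A hedged usage sketch (Net is a placeholder network, as in the other examples on this page):

>>> from mindspore import nn, build_train_network
>>> from mindspore.train.loss_scale_manager import DynamicLossScaleManager
>>>
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
>>> optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> # O2 mixed precision with a dynamic loss scale; the kwargs override the level defaults
>>> train_network = build_train_network(net, optim, loss, level="O2",
...                                     loss_scale_manager=DynamicLossScaleManager())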

mindspore.connect_network_with_dataset(network, dataset_helper)[source]

Connect the network with dataset in dataset_helper.

This function wraps the input network with ‘GetNext’ so that the data can be fetched automatically from the data channel corresponding to the ‘queue_name’ and passed to the input network during forward computation.

Note

In the case of running the network on Ascend/GPU in graph mode, this function will wrap the input network with ‘GetNext’, in other cases, the input network will be returned with no change. The ‘GetNext’ is required to get data only in sink mode, so this function is not applicable to no-sink mode.

Parameters
  • network (Cell) – The training network for dataset.

  • dataset_helper (DatasetHelper) – A class to process the MindData dataset, it provides the type, shape and queue name of the dataset to wrap the GetNext.

Returns

Cell, a new network wrapped with ‘GetNext’ in the case of running the task on Ascend in graph mode, otherwise it is the input network.

Supported Platforms:

Ascend GPU

Examples

>>> from mindspore import DatasetHelper, connect_network_with_dataset
>>>
>>> # call create_dataset function to create a regular dataset, refer to mindspore.dataset
>>> train_dataset = create_custom_dataset()
>>> dataset_helper = DatasetHelper(train_dataset, dataset_sink_mode=True)
>>> net = Net()
>>> net_with_get_next = connect_network_with_dataset(net, dataset_helper)
mindspore.dtype_to_nptype(type_)[source]

Convert MindSpore dtype to numpy data type.

Parameters

type_ (mindspore.dtype) – MindSpore's dtype.

Returns

The data type of numpy.

mindspore.dtype_to_pytype(type_)[source]

Convert MindSpore dtype to python data type.

Parameters

type_ (mindspore.dtype) – MindSpore's dtype.

Returns

Type of python.
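
A brief sketch of both conversion helpers:

>>> import mindspore as ms
>>> from mindspore import dtype_to_nptype, dtype_to_pytype
>>>
>>> np_type = dtype_to_nptype(ms.float32)   # numpy.float32
>>> py_type = dtype_to_pytype(ms.int32)     # Python int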

mindspore.export(net, *inputs, file_name, file_format='AIR', **kwargs)[source]

Export the MindSpore prediction model to a file in the specified format.

Note

When exporting to the AIR format, the size of a single tensor cannot exceed 2GB.

Parameters
  • net (Cell) – MindSpore network.

  • inputs (Tensor) – Inputs of the net.

  • file_name (str) – File name of the model to be exported.

  • file_format (str) –

    MindSpore currently supports ‘AIR’, ‘ONNX’ and ‘MINDIR’ format for exported model.

    • AIR: Ascend Intermediate Representation. An intermediate representation format of Ascend model. Recommended suffix for output file is ‘.air’.

    • ONNX: Open Neural Network eXchange. An open format built to represent machine learning models. Recommended suffix for output file is ‘.onnx’.

    • MINDIR: MindSpore Native Intermediate Representation for Anf. An intermediate representation format for MindSpore models. Recommended suffix for output file is ‘.mindir’.

  • kwargs (dict) –

    Configuration options dictionary.

    • quant_mode: The quantization mode.

    • mean: Input data mean. Default: 127.5.

    • std_dev: Input data standard deviation. Default: 127.5.
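
A hedged sketch of exporting a small network to MINDIR, mirroring the load example further below:

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, export
>>>
>>> net = nn.Conv2d(1, 1, kernel_size=3)
>>> input_data = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>> export(net, input_data, file_name="net", file_format="MINDIR")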

mindspore.get_level()[source]

Get the logger level.

Returns

str, the log level. The levels are 3 (ERROR), 2 (WARNING), 1 (INFO), and 0 (DEBUG).

Examples

>>> import os
>>> os.environ['GLOG_v'] = '0'
>>> from mindspore import log as logger
>>> logger.get_level()
mindspore.get_log_config()[source]

Get logger configurations.

Returns

Dict, the dictionary of logger configurations.

Examples

>>> import os
>>> os.environ['GLOG_v'] = '1'
>>> os.environ['GLOG_logtostderr'] = '0'
>>> os.environ['GLOG_log_dir'] = '/var/log'
>>> os.environ['logger_maxBytes'] = '5242880'
>>> os.environ['logger_backupCount'] = '10'
>>> from mindspore import log as logger
>>> logger.get_log_config()
mindspore.get_py_obj_dtype(obj)[source]

Get the MindSpore data type which corresponds to python type or variable.

Parameters

obj (type) – An object of python type, or a variable in python type.

Returns

Type of MindSpore type.

mindspore.get_seed()[source]

Get global random seed.

mindspore.issubclass_(type_, dtype)[source]

Determine whether type_ is a subclass of dtype.

Parameters
Returns

bool, True or False.
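
A brief sketch, relying on the subtype relations shown in the Tree Topology above:

>>> from mindspore import issubclass_, dtype as mstype
>>>
>>> result = issubclass_(mstype.int32, mstype.int_)   # expected True: int32 is under int_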

mindspore.load(file_name)[source]

Load MindIR.

The returned object can be executed by a GraphCell. However, there are some limitations to the current use of GraphCell, see class mindspore.nn.GraphCell for more details.

Parameters

file_name (str) – MindIR file name.

Returns

Object, a compiled graph that can be executed by GraphCell.

Raises

ValueError – MindIR file is incorrect.

Examples

>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> from mindspore.train import export, load
>>>
>>> net = nn.Conv2d(1, 1, kernel_size=3)
>>> input = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>> export(net, input, file_name="net", file_format="MINDIR")
>>> graph = load("net.mindir")
>>> net = nn.GraphCell(graph)
>>> output = net(input)
mindspore.load_checkpoint(ckpt_file_name, net=None, strict_load=False, filter_prefix=None)[source]

Loads checkpoint info from a specified file.

Parameters
  • ckpt_file_name (str) – Checkpoint file name.

  • net (Cell) – Cell network. Default: None

  • strict_load (bool) – Whether to load the parameters into the net strictly. If False, parameters in param_dict will be loaded into the net by matching names with the same suffix. Default: False.

  • filter_prefix (Union[str, list[str], tuple[str]]) – Parameters starting with the filter_prefix will not be loaded. Default: None.

Returns

Dict, key is parameter name, value is a Parameter.

Raises

ValueError – Checkpoint file is incorrect.

Examples

>>> from mindspore import load_checkpoint
>>>
>>> ckpt_file_name = "./checkpoint/LeNet5-1_32.ckpt"
>>> param_dict = load_checkpoint(ckpt_file_name, filter_prefix="conv1")
mindspore.load_distributed_checkpoint(network, checkpoint_filenames, predict_strategy=None, train_strategy_filename=None)[source]

Load a checkpoint into the net for distributed prediction.

Parameters
  • network (Cell) – Network for distributed prediction.

  • checkpoint_filenames (list[str]) – The name of Checkpoint files in order of rank id.

  • predict_strategy (dict) – Strategy of the prediction process, whose key is a parameter name and whose value is a list or a tuple whose first four elements are [dev_matrix, tensor_map, param_split_shape, field]. If None, the prediction process uses a single device. Default: None.

Raises
  • TypeError – The types of the inputs do not match the requirements.

  • ValueError – Failed to load checkpoint into net.

mindspore.load_param_into_net(net, parameter_dict, strict_load=False)[source]

Loads parameters into network.

Parameters
  • net (Cell) – Cell network.

  • parameter_dict (dict) – Parameter dictionary.

  • strict_load (bool) – Whether to load the parameters into the net strictly. If False, parameters in param_dict will be loaded into the net by matching names with the same suffix. Default: False.

Raises

TypeError – Argument is not a Cell, or parameter_dict is not a Parameter dictionary.

Examples

>>> from mindspore import load_checkpoint, load_param_into_net
>>>
>>> net = Net()
>>> ckpt_file_name = "./checkpoint/LeNet5-1_32.ckpt"
>>> param_dict = load_checkpoint(ckpt_file_name, filter_prefix="conv1")
>>> param_not_load = load_param_into_net(net, param_dict)
>>> print(param_not_load)
['conv1.weight']
mindspore.merge_sliced_parameter(sliced_parameters, strategy=None)[source]

Merge parameter slices to one whole parameter.

Parameters
  • sliced_parameters (list[Parameter]) – Parameter slices in order of rank_id.

  • strategy (Optional[dict]) – Parameter slice strategy, whose key is a parameter name and whose value is the slice strategy of that parameter. If strategy is None, parameter slices are merged in order along axis 0. Default: None.

Returns

Parameter, the merged parameter which has the whole data.

Raises
  • ValueError – Failed to merge.

  • TypeError – The sliced_parameters is incorrect or strategy is not dict.

  • KeyError – The parameter name is not in keys of strategy.

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.common.parameter import Parameter
>>> from mindspore.train import merge_sliced_parameter
>>>
>>> sliced_parameters = [
...                      Parameter(Tensor(np.array([0.00023915, 0.00013939, -0.00098059])),
...                                "network.embedding_table"),
...                      Parameter(Tensor(np.array([0.00015815, 0.00015458, -0.00012125])),
...                                "network.embedding_table"),
...                      Parameter(Tensor(np.array([0.00042165, 0.00029692, -0.00007941])),
...                                "network.embedding_table"),
...                      Parameter(Tensor(np.array([0.00084451, 0.00089960, -0.00010431])),
...                                "network.embedding_table")]
>>> merged_parameter = merge_sliced_parameter(sliced_parameters)
mindspore.ms_function(fn=None, obj=None, input_signature=None)[source]

Create a callable MindSpore graph from a python function.

This allows the MindSpore runtime to apply optimizations based on graph.

Parameters
  • fn (Function) – The Python function that will be run as a graph. Default: None.

  • obj (Object) – The Python object that provides the information for identifying the compiled function. Default: None.

  • input_signature (Tensor) – The Tensor which describes the input arguments. The shape and dtype of the Tensor will be supplied to this function. If input_signature is specified, each input to fn must be a Tensor. And the input parameters of fn cannot accept **kwargs. The shape and dtype of actual inputs should keep the same as input_signature. Otherwise, TypeError will be raised. Default: None.

Returns

Function, if fn is not None, returns a callable function that will execute the compiled function; if fn is None, returns a decorator. When this decorator is invoked with a single fn argument, the resulting callable function is the same as in the case when fn is not None.

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ms_function
>>> from mindspore.ops import functional as F
...
>>> x = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>> y = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
...
>>> # create a callable MindSpore graph by calling ms_function
>>> def tensor_add(x, y):
...     z = x + y
...     return z
...
>>> tensor_add_graph = ms_function(fn=tensor_add)
>>> out = tensor_add_graph(x, y)
...
>>> # create a callable MindSpore graph through decorator @ms_function
>>> @ms_function
... def tensor_add_with_dec(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_with_dec(x, y)
...
>>> # create a callable MindSpore graph through decorator @ms_function with input_signature parameter
>>> @ms_function(input_signature=(Tensor(np.ones([1, 1, 3, 3]).astype(np.float32)),
...                               Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))))
... def tensor_add_with_sig(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_with_sig(x, y)
mindspore.parse_print(print_file_name)[source]

Loads Print data from a specified file.

Parameters

print_file_name (str) – The file name of saved print data.

Returns

List, each element of the list is a Tensor.

Raises

ValueError – The print file may be empty; please make sure the correct file name is entered.

mindspore.pytype_to_dtype(obj)[source]

Convert python type to MindSpore type.

Parameters

obj (type) – A python type object.

Returns

Type of MindSpore type.

mindspore.save_checkpoint(save_obj, ckpt_file_name, integrated_save=True, async_save=False)[source]

Saves checkpoint info to a specified file.

Parameters
  • save_obj (Union[Cell, list]) – The Cell object or a data list (each element is a dictionary, like [{"name": param_name, "data": param_data}, ...]; the type of param_name is string and the type of param_data is Parameter or Tensor).

  • ckpt_file_name (str) – Checkpoint file name. If the file name already exists, it will be overwritten.

  • integrated_save (bool) – Whether to merge and save the parameters as a whole in the automatic model-parallel scenario. Default: True.

  • async_save (bool) – Whether to save the checkpoint to the file asynchronously. Default: False.

Raises

TypeError – If the parameter save_obj is not an nn.Cell or a list, or if the parameters integrated_save and async_save are not of bool type.
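
A hedged usage sketch (Net is a placeholder network, as in the other examples on this page):

>>> from mindspore import save_checkpoint
>>>
>>> net = Net()
>>> save_checkpoint(net, "./lenet.ckpt")   # overwrites ./lenet.ckpt if it already exists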

mindspore.set_seed(seed)[source]

Set global random seed.

Note

The global seed is used by numpy.random, mindspore.common.Initializer, mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution.

If the global seed is not set, these packages use their own default seeds independently: numpy.random and mindspore.common.Initializer choose a random seed, while mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution use zero.

A seed set by numpy.random.seed() is only used by numpy.random, while a seed set by this API is also used by numpy.random, so setting all seeds through this API is recommended.

Parameters

seed (int) – The seed to be set.

Raises

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import Tensor, Parameter, set_seed
>>> from mindspore.common.initializer import initializer
>>> from mindspore.ops import composite as C
>>>
>>> # Note: (1) Please make sure the code is running in PYNATIVE MODE;
>>> # (2) Because Composite-level ops need parameters to be Tensors, for below examples,
>>> # when using C.uniform operator, minval and maxval are initialised as:
>>> minval = Tensor(1.0, ms.float32)
>>> maxval = Tensor(2.0, ms.float32)
>>>
>>> # 1. If global seed is not set, numpy.random and initializer will choose a random seed:
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>> # Rerun the program will get different results:
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A3
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A4
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W3
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W4
>>>
>>> # 2. If global seed is set, numpy.random and initializer will use it:
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A2
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W1
>>> w1 = Parameter(initializer("uniform", [2, 2], ms.float32), name="w1") # W2
>>>
>>> # 3. If neither global seed nor op seed is set, mindspore.ops.composite.random_ops and
>>> # mindspore.nn.probability.distribution will choose a random seed:
>>> c1 = C.uniform((1, 4), minval, maxval) # C1
>>> c2 = C.uniform((1, 4), minval, maxval) # C2
>>> # Rerun the program will get different results:
>>> c1 = C.uniform((1, 4), minval, maxval) # C3
>>> c2 = C.uniform((1, 4), minval, maxval) # C4
>>>
>>> # 4. If global seed is set, but op seed is not set, mindspore.ops.composite.random_ops and
>>> # mindspore.nn.probability.distribution will calculate a seed according to global seed and
>>> # default op seed. Each call will change the default op seed, thus each call get different
>>> # results.
>>> set_seed(1234)
>>> c1 = C.uniform((1, 4), minval, maxval) # C1
>>> c2 = C.uniform((1, 4), minval, maxval) # C2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> c1 = C.uniform((1, 4), minval, maxval) # C1
>>> c2 = C.uniform((1, 4), minval, maxval) # C2
>>>
>>> # 5. If both global seed and op seed are set, mindspore.ops.composite.random_ops and
>>> # mindspore.nn.probability.distribution will calculate a seed according to global seed and
>>> # op seed counter. Each call will change the op seed counter, thus each call get different
>>> # results.
>>> set_seed(1234)
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # C2
>>> # Rerun the program will get the same results:
>>> set_seed(1234)
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # C2
>>>
>>> # 6. If op seed is set but global seed is not set, 0 will be used as global seed. Then
>>> # mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution act as in
>>> # condition 5.
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # C2
>>> # Rerun the program will get the same results:
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # C2
>>>
>>> # 7. Recall set_seed() in the program will reset numpy seed and op seed counter of
>>> # mindspore.ops.composite.random_ops and mindspore.nn.probability.distribution.
>>> set_seed(1234)
>>> np_1 = np.random.normal(0, 1, [1]).astype(np.float32) # A1
>>> c1 = C.uniform((1, 4), minval, maxval, seed=2) # C1
>>> set_seed(1234)
>>> np_2 = np.random.normal(0, 1, [1]).astype(np.float32) # still get A1
>>> c2 = C.uniform((1, 4), minval, maxval, seed=2) # still get C1