mindspore
MindSpore package.
- class mindspore.DatasetHelper(dataset, dataset_sink_mode=True)[source]
Helper for using the MindData dataset.
Depending on the context, it adapts the dataset iterator so that the same for-loop can be used across different contexts.
Note
Iterating over DatasetHelper yields one epoch of data.
- Parameters
dataset (DataSet) – The dataset.
dataset_sink_mode (bool) – If true, use GetNext to fetch the data; otherwise feed the data from the host. Default: True.
Examples
>>> dataset_helper = DatasetHelper(dataset)
>>> for inputs in dataset_helper:
>>>     outputs = network(*inputs)
- class mindspore.Model(network, loss_fn=None, optimizer=None, metrics=None, eval_network=None, eval_indexes=None, amp_level='O0', **kwargs)[source]
High-Level API for Training or Testing.
Model groups layers into an object with training and inference features.
- Parameters
network (Cell) – The training or testing network.
loss_fn (Cell) – Objective function. If loss_fn is None, the network should contain the logic of loss and gradient calculation, and the parallel logic if needed. Default: None.
optimizer (Cell) – Optimizer for updating the weights. Default: None.
metrics (Union[dict, set]) – Dict or set of metrics to be evaluated by the model during training and testing, e.g. {'accuracy', 'recall'}. Default: None.
eval_network (Cell) – Network for evaluation. If not defined, network and loss_fn would be wrapped as eval_network. Default: None.
eval_indexes (list) – Used when eval_network is defined. If eval_indexes is None, all outputs of eval_network are passed to metrics; otherwise eval_indexes must contain three elements representing the positions of the loss value, the predicted value, and the label. The loss value is passed to the Loss metric, and the predicted value and label are passed to the other metrics. Default: None.
amp_level (str) –
Option for the level argument of mindspore.amp.build_train_network, the level for mixed-precision training. Supports ["O0", "O2", "O3"]. Default: "O0".
O0: Do not change.
O2: Cast network to float16, keep batchnorm run in float32, using dynamic loss scale.
O3: Cast network to float16, with additional property ‘keep_batchnorm_fp32=False’.
O2 is recommended on GPU, O3 is recommended on Ascend.
loss_scale_manager (Union[None, LossScaleManager]) – If None, the loss is not scaled; otherwise the loss is scaled by the LossScaleManager. If set, it overrides the level setting. It is a keyword argument, e.g. use loss_scale_manager=None to set the value.
keep_batchnorm_fp32 (bool) – Keep BatchNorm running in float32. If set, it overrides the level setting. Default: True.
Examples
>>> class Net(nn.Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self.conv = nn.Conv2d(3, 64, 3, has_bias=False, weight_init='normal')
>>>         self.bn = nn.BatchNorm2d(64)
>>>         self.relu = nn.ReLU()
>>>         self.flatten = nn.Flatten()
>>>         self.fc = nn.Dense(64*224*224, 12) # padding=0
>>>
>>>     def construct(self, x):
>>>         x = self.conv(x)
>>>         x = self.bn(x)
>>>         x = self.relu(x)
>>>         x = self.flatten(x)
>>>         out = self.fc(x)
>>>         return out
>>>
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None)
>>> dataset = get_dataset()
>>> model.train(2, dataset)
- eval(valid_dataset, callbacks=None, dataset_sink_mode=True)[source]
Evaluation API where the iteration is controlled by the Python front end.
In PyNative mode, evaluation is performed with dataset non-sink mode.
Note
CPU is not supported when dataset_sink_mode is true. If dataset_sink_mode is True, data will be sent to the device. If the device is Ascend, features of the data will be transferred one by one; each transmission is limited to 256 MB.
- Parameters
valid_dataset (Dataset) – Dataset to evaluate the model.
callbacks (list) – List of callback objects which should be executed while evaluation is running. Default: None.
dataset_sink_mode (bool) – Determines whether to pass the data through the dataset channel. Default: True.
- Returns
Dict, returns the loss value & metrics values for the model in test mode.
Examples
>>> dataset = get_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> model = Model(net, loss_fn=loss, optimizer=None, metrics={'acc'})
>>> model.eval(dataset)
- init(train_dataset=None, valid_dataset=None)[source]
Initializes compute graphs and data graphs with sink mode.
Note
Pre-init process only supports GRAPH_MODE and Ascend target currently.
- Parameters
train_dataset (Dataset) – A training dataset iterator. If train_dataset is defined, the training graphs will be initialized. Default: None.
valid_dataset (Dataset) – An evaluation dataset iterator. If valid_dataset is defined, the evaluation graphs will be initialized, and metrics in Model cannot be None. Default: None.
Examples
>>> train_dataset = get_train_dataset()
>>> valid_dataset = get_valid_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics={'acc'})
>>> model.init(train_dataset, valid_dataset)
>>> model.train(2, train_dataset)
>>> model.eval(valid_dataset)
- predict(*predict_data)[source]
Generates output predictions for the input samples.
Data can be a single tensor, a list of tensors, or a tuple of tensors.
Note
Batch data should be put together in one tensor.
- Parameters
predict_data (Tensor) – Tensor of predict data; can be an array, list, or tuple.
- Returns
Tensor, array(s) of predictions.
Examples
>>> input_data = Tensor(np.random.randint(0, 255, [1, 3, 224, 224]), mindspore.float32)
>>> model = Model(Net())
>>> model.predict(input_data)
- train(epoch, train_dataset, callbacks=None, dataset_sink_mode=True)[source]
Training API where the iteration is controlled by the Python front end.
In PyNative mode, training is performed with dataset non-sink mode.
Note
CPU is not supported when dataset_sink_mode is true. If dataset_sink_mode is True, the epoch of training should be equal to the count of repeat operations in dataset processing; otherwise errors can occur since the amount of data is not the amount training requires. If dataset_sink_mode is True, data will be sent to the device. If the device is Ascend, features of the data will be transferred one by one; each transmission is limited to 256 MB.
- Parameters
epoch (int) – Total number of iterations on the data.
train_dataset (Dataset) – A training dataset iterator. If there is no loss_fn, a tuple with multiple data items (data1, data2, data3, …) should be returned and passed to the network. Otherwise, a tuple (data, label) should be returned; the data and label are passed to the network and loss function respectively.
callbacks (list) – List of callback objects which should be executed while training. Default: None.
dataset_sink_mode (bool) – Determines whether to pass the data through the dataset channel. Default: True. In PyNative mode, training is performed with dataset non-sink mode.
Examples
>>> dataset = get_dataset()
>>> net = Net()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(is_grad=False, sparse=True)
>>> loss_scale_manager = FixedLossScaleManager()
>>> optim = Momentum(params=net.trainable_params(), learning_rate=0.1, momentum=0.9)
>>> model = Model(net, loss_fn=loss, optimizer=optim, metrics=None, loss_scale_manager=loss_scale_manager)
>>> model.train(2, dataset)
- class mindspore.ParallelMode[source]
Parallel mode options.
There are five kinds of parallel modes, “STAND_ALONE”, “DATA_PARALLEL”, “HYBRID_PARALLEL”, “SEMI_AUTO_PARALLEL” and “AUTO_PARALLEL”. Default: “STAND_ALONE”.
STAND_ALONE: Only one processor working.
DATA_PARALLEL: Distributing the data across different processors.
HYBRID_PARALLEL: Achieving data parallelism and model parallelism manually.
SEMI_AUTO_PARALLEL: Achieving data parallelism and model parallelism by setting parallel strategies.
AUTO_PARALLEL: Achieving parallelism automatically.
MODE_LIST: The list for all supported parallel modes.
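Examples
A minimal selection sketch; it assumes the usual context API for configuring parallelism (the mode can also be passed as a string):
>>> from mindspore import context, ParallelMode
>>> context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL)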
- class mindspore.Parameter(default_input, name, requires_grad=True, layerwise_parallel=False, sparse_grad='')[source]
Parameter types of cell models.
Note
Each parameter of Cell is represented by Parameter class.
- Parameters
default_input (Union[Tensor, Initializer]) – Parameter data. When default_input is an Initializer, the data stored by Parameter is a MetaTensor; otherwise it is a Tensor.
name (str) – Name of the child parameter.
requires_grad (bool) – True if the parameter requires gradient. Default: True.
layerwise_parallel (bool) – A kind of model parallel mode. When layerwise_parallel is true in parallel mode, broadcast and gradient communication are not applied to this parameter. Default: False.
sparse_grad (str) – Set if the parameter’s gradient is sparse. Default: empty.
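Examples
A minimal construction sketch (the variable names here are illustrative):
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Parameter, Tensor
>>> weight = Parameter(Tensor(np.ones([2, 3]), mindspore.float32), name='weight')
>>> assert weight.requires_grad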
- clone(prefix, init='same')[source]
Clone the parameter.
- Parameters
prefix (str) – Namespace of parameter.
init (Union[Tensor, str, Initializer, numbers.Number]) – Initialization for the cloned parameter; 'same' keeps the original data. Default: 'same'.
- Returns
Parameter, a new parameter.
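Examples
A hedged sketch, continuing the Parameter example above; the exact form of the cloned name is an assumption:
>>> weight_copy = weight.clone(prefix='backup')  # assumed to yield a new Parameter under the 'backup' namespace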
- init_data(layout=None, set_sliced=False)[source]
Initialize the data of the parameter.
- Parameters
layout (list[list[int]]) – Parameter slice layout [dev_mat, tensor_map, slice_shape]. Default: None.
dev_mat (list[int]): Device matrix.
tensor_map (list[int]): Tensor map.
slice_shape (list[int]): Shape of the slice.
set_sliced (bool) – True if the parameter should be marked as sliced after initializing the data. Default: False.
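Examples
A minimal sketch, assuming an Initializer-backed Parameter as described above:
>>> from mindspore import Parameter
>>> from mindspore.common.initializer import initializer
>>> w = Parameter(initializer('ones', [1, 2, 3]), name='w')  # data held as MetaTensor until initialized
>>> w.init_data()  # materializes the actual Tensor data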
- property is_init
Get init status of the parameter.
- property name
Get the name of the parameter.
- property requires_grad
Return whether the parameter requires gradient.
- property sliced
Get slice status of the parameter.
- property sparse_grad
Return whether the parameter’s gradient is sparse.
- class mindspore.ParameterTuple(iterable)[source]
Class for storing tuple of parameters.
Note
Used to store the parameters of the network into the parameter tuple collection.
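Examples
A typical usage sketch; net stands for any constructed Cell, as in the Model example above:
>>> from mindspore import ParameterTuple
>>> weights = ParameterTuple(net.trainable_params())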
- class mindspore.Tensor(input_data, dtype=None)[source]
Tensor for data storage.
Tensor inherits the tensor object on the C++ side; some functions are implemented in C++ and some in the Python layer.
- Parameters
input_data (Tensor, float, int, bool, tuple, list, numpy.ndarray) – Input data of the tensor.
dtype (mindspore.dtype) – Should be None, bool, or a numeric type defined in mindspore.dtype. The argument is used to define the data type of the output tensor. If it is None, the data type of the output tensor will be the same as that of input_data. Default: None.
- Outputs:
Tensor, with the same shape as input_data.
Examples
>>> # init a tensor with input data
>>> t1 = Tensor(np.zeros([1, 2, 3]), mindspore.float32)
>>> assert isinstance(t1, Tensor)
>>> assert t1.shape == (1, 2, 3)
>>> assert t1.dtype == mindspore.float32
>>>
>>> # init a tensor with a float scalar
>>> t2 = Tensor(0.1)
>>> assert isinstance(t2, Tensor)
>>> assert t2.dtype == mindspore.float64
- property init_flag
Whether the tensor has been initialized.
- property virtual_flag
Whether the tensor is virtual.
- mindspore.dtype_to_nptype(type_)[source]
Get numpy data type corresponding to MindSpore dtype.
- Parameters
type_ (mindspore.dtype) – MindSpore's dtype.
- Returns
The corresponding numpy data type.
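Examples
A minimal sketch:
>>> import mindspore
>>> np_type = mindspore.dtype_to_nptype(mindspore.float32)  # expected: numpy.float32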
- mindspore.dtype_to_pytype(type_)[source]
Get python type corresponding to MindSpore dtype.
- Parameters
type_ (mindspore.dtype) – MindSpore's dtype.
- Returns
The corresponding Python type.
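Examples
A minimal sketch; the expected mapping is an assumption:
>>> import mindspore
>>> py_type = mindspore.dtype_to_pytype(mindspore.int64)  # expected: the built-in int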
- mindspore.get_level()[source]
Get the logger level.
- Returns
str, the log level: 3 (ERROR), 2 (WARNING), 1 (INFO), 0 (DEBUG).
Examples
>>> import os
>>> os.environ['GLOG_v'] = '0'
>>> from mindspore import log as logger
>>> logger.get_level()
- mindspore.get_log_config()[source]
Get logger configurations.
- Returns
Dict, the dictionary of logger configurations.
Examples
>>> import os
>>> os.environ['GLOG_v'] = '1'
>>> os.environ['GLOG_logtostderr'] = '0'
>>> os.environ['GLOG_log_dir'] = '/var/log/mindspore'
>>> os.environ['logger_maxBytes'] = '5242880'
>>> os.environ['logger_backupCount'] = '10'
>>> from mindspore import log as logger
>>> logger.get_log_config()
- mindspore.get_py_obj_dtype(obj)[source]
Get the corresponding MindSpore data type by python type or variable.
- Parameters
obj – A Python object or a variable of a Python type.
- Returns
The corresponding MindSpore data type.
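Examples
A minimal sketch; the expected result is an assumption:
>>> import mindspore
>>> ms_type = mindspore.get_py_obj_dtype(True)  # expected: the MindSpore bool type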
- mindspore.issubclass_(type_, dtype)[source]
Determine whether type_ is a subclass of dtype.
- Parameters
type_ (mindspore.dtype) – Target MindSpore dtype.
dtype (mindspore.dtype) – MindSpore dtype to compare against.
- Returns
bool, True or False.
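Examples
A minimal sketch; it assumes mindspore.dtype exposes the concrete int32 type and the generic int_ type:
>>> import mindspore
>>> from mindspore import dtype as mstype
>>> result = mindspore.issubclass_(mstype.int32, mstype.int_)  # expected: True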
- mindspore.ms_function(fn=None, obj=None, input_signature=None)[source]
Creates a callable MindSpore graph from a python function.
This allows the MindSpore runtime to apply optimizations based on the graph.
- Parameters
fn (Function) – The Python function that will be run as a graph. Default: None.
obj (Object) – The Python object that provides information to identify the compiled function. Default: None.
input_signature (MetaTensor) – The MetaTensor that describes the input arguments. The MetaTensor specifies the shape and dtype of the Tensor, and they will be supplied to this function. If input_signature is specified, every input to fn must be a Tensor, and the input parameters of fn cannot accept **kwargs. The shape and dtype of actual inputs must match the input_signature, or a TypeError will be raised. Default: None.
- Returns
Function. If fn is not None, returns a callable that will execute the compiled function; if fn is None, returns a decorator, and when this decorator is invoked with a single fn argument, the callable is equivalent to the case when fn is not None.
Examples
>>> def tensor_add(x, y):
>>>     z = F.tensor_add(x, y)
>>>     return z
>>>
>>> @ms_function
>>> def tensor_add_with_dec(x, y):
>>>     z = F.tensor_add(x, y)
>>>     return z
>>>
>>> @ms_function(input_signature=(MetaTensor(mindspore.float32, (1, 1, 3, 3)),
>>>                               MetaTensor(mindspore.float32, (1, 1, 3, 3))))
>>> def tensor_add_with_sig(x, y):
>>>     z = F.tensor_add(x, y)
>>>     return z
>>>
>>> x = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>> y = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>>
>>> tensor_add_graph = ms_function(fn=tensor_add)
>>> out = tensor_add_graph(x, y)
>>> out = tensor_add_with_dec(x, y)
>>> out = tensor_add_with_sig(x, y)