mindflow.common

class mindflow.common.EvalCallback(model, eval_ds, eval_interval)[source]

Evaluate the model during training.

Parameters
  • model (Model) – The Model instance to be evaluated.

  • eval_ds (Dataset) – Dataset to evaluate the model.

  • eval_interval (int) – Specifies how many epochs to train before evaluating.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore as ms
>>> from mindspore import nn
>>> from mindflow import EvalCallback
>>> loss_fn = nn.MSELoss()
>>> net = nn.Dense(2, 1)
>>> optimizer = nn.Adam(net.trainable_params(), 0.001)
>>> model = ms.train.Model(net, loss_fn, optimizer)
>>> data = np.array(np.random.sample(size=(5, 5)))
>>> dataset = ds.NumpySlicesDataset([data], ["data"])
>>> eval_cb = EvalCallback(model, dataset, 1)
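
Once constructed, the callback is passed to Model.train through the callbacks argument. A minimal sketch, left commented out because the toy dataset above is not shaped for real training (the epoch count is a placeholder):

>>> # model.train(2, dataset, callbacks=[eval_cb])
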
epoch_end(run_context)[source]

Evaluate the model at the end of an epoch.

Parameters

run_context (RunContext) – Context of the train running.

class mindflow.common.L2[source]

Calculates the L2 metric.

Creates a criterion that measures the L2 metric between each element in the input x and the target y:

$$\mathrm{l2} = \sqrt{\frac{\sum_{i=1}^{n}(y_i - x_i)^2}{\sum_{i=1}^{n}y_i^2}}$$

Here $y_i$ is the true value and $x_i$ is the prediction.

Note

The method update must be called with the form update(y_pred, y).

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindflow.common import L2
>>> from mindspore import nn, Tensor
>>> import mindspore
...
>>> x = Tensor(np.array([0.1, 0.2, 0.6, 0.9]), mindspore.float32)
>>> y = Tensor(np.array([0.1, 0.25, 0.7, 0.9]), mindspore.float32)
>>> metric = L2()
>>> metric.clear()
>>> metric.update(x, y)
>>> result = metric.eval()
>>> print(result)
0.09543302997807275
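
The printed value agrees with the relative L2 formula above; as a NumPy-only check (a sketch to illustrate the formula, not the MindFlow implementation):

>>> x_np = np.array([0.1, 0.2, 0.6, 0.9])
>>> y_np = np.array([0.1, 0.25, 0.7, 0.9])
>>> rel_l2 = np.sqrt(np.sum((y_np - x_np) ** 2) / np.sum(y_np ** 2))
>>> print(round(float(rel_l2), 6))  # matches metric.eval() above
0.095433
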
clear()[source]

Clears the internal evaluation result.

eval()[source]

Computes the L2 metric.

Returns

Float, the computed result.

update(*inputs)[source]

Updates the internal evaluation result with y_pred and y.

Parameters

inputs (Union[Tensor, list, numpy.array]) – y_pred and y are retrieved from inputs, where y_pred is the predicted value and y is the ground truth. They are used to compute the L2 metric and must have the same shape.

Raises
  • ValueError – If the length of inputs is not 2.

  • ValueError – If the shapes of y_pred and y are not the same.

class mindflow.common.LossAndTimeMonitor(data_size, per_print_times=1)[source]

Monitor the loss in training.

If the loss is NaN or Inf, training is terminated.

Note

If per_print_times is 0, the loss is not printed.

Parameters
  • data_size (int) – Number of batches in each training epoch.

  • per_print_times (int) – Print the loss every per_print_times steps. Default: 1.

Raises
  • ValueError – If data_size is not an integer or is less than zero.

  • ValueError – If per_print_times is not an integer or is less than zero.

Supported Platforms:

Ascend GPU

Examples

>>> from mindflow.common import LossAndTimeMonitor
>>> loss_time_monitor = LossAndTimeMonitor(8)
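
Like other MindSpore callbacks, the monitor is passed to Model.train via the callbacks argument. A minimal sketch, assuming a model and dataset built as in the EvalCallback example above (Dataset.get_dataset_size() supplies data_size):

>>> # model.train(10, dataset, callbacks=[LossAndTimeMonitor(dataset.get_dataset_size())])
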
epoch_begin(run_context)[source]

Record the start time at the beginning of an epoch.

Parameters

run_context (RunContext) – Context of the train running.

epoch_end(run_context)[source]

Get the loss at the end of an epoch.

Parameters

run_context (RunContext) – Context of the train running.

mindflow.common.get_multi_step_lr(lr_init, milestones, gamma, steps_per_epoch, last_epoch)[source]

Generate a learning rate array in which the rate decays by gamma each time the epoch count reaches one of the milestones.

Calculate the learning rate from the given milestones and lr_init. Let the milestones be $(M_1, M_2, \ldots, M_t, \ldots, M_N)$ and the corresponding learning rate values be $(x_1, x_2, \ldots, x_t, \ldots, x_N)$, where $N$ is the length of milestones and $x_t = lr\_init \cdot gamma^{t-1}$. Let the output learning rate be $y$; then for the $i$-th step, decayed_learning_rate[i] is computed as:

$$y[i] = x_t, \quad \text{for } i \in [M_{t-1}, M_t)$$
Parameters
  • lr_init (float) – initial learning rate, positive float value.

  • milestones (Union[list[int], tuple[int]]) – list of epoch indices; each element must be greater than 0.

  • gamma (float) – multiplicative factor of learning rate decay.

  • steps_per_epoch (int) – number of steps in each epoch, positive int value.

  • last_epoch (int) – total number of training epochs, positive int value.

Returns

numpy.ndarray, the learning rate array.

Raises
  • TypeError – If lr_init or gamma is not a float.

  • TypeError – If steps_per_epoch or last_epoch is not an int.

  • TypeError – If milestones is neither a tuple nor a list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindflow import get_multi_step_lr
>>> lr_init = 0.001
>>> milestones = [2, 4]
>>> gamma = 0.1
>>> steps_per_epoch = 3
>>> last_epoch = 5
>>> lr = get_multi_step_lr(lr_init, milestones, gamma, steps_per_epoch, last_epoch)
>>> print(lr)
[1.e-03 1.e-03 1.e-03 1.e-03 1.e-03 1.e-03 1.e-04 1.e-04 1.e-04 1.e-04 1.e-04 1.e-04 1.e-05 1.e-05 1.e-05]
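
For intuition, the same piecewise-constant schedule can be rebuilt with plain NumPy (a sketch of the behavior, not the library implementation): each epoch index is looked up against the milestones, and the rate is multiplied by gamma once per milestone passed:

>>> import numpy as np
>>> sketch = np.array([lr_init * gamma ** int(np.searchsorted(milestones, epoch, side='right'))
...                    for epoch in range(last_epoch) for _ in range(steps_per_epoch)])
>>> print(np.allclose(sketch, lr))
True
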
mindflow.common.get_poly_lr(global_step, lr_init, lr_end, lr_max, warmup_steps, total_steps, poly_power)[source]

Generate a polynomial-decay learning rate array. The learning rate decays polynomially as training proceeds.

Parameters
  • global_step (int) – current step number, non-negative int value.

  • lr_init (float) – initial learning rate, positive float value.

  • lr_end (float) – final learning rate, non-negative float value.

  • lr_max (float) – maximum learning rate, positive float value.

  • warmup_steps (int) – number of warmup steps, non-negative int value.

  • total_steps (int) – total number of training steps, positive int value.

  • poly_power (float) – power of the polynomial decay, positive float value.

Returns

numpy.ndarray, the learning rate array.

Supported Platforms:

Ascend GPU

Examples

>>> from mindflow.common import get_poly_lr
>>> learning_rate = get_poly_lr(100, 0.001, 0.1, 0.0001, 1000, 10000, 0.5)
>>> print(learning_rate.shape)
(9900,)
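
The returned array covers the remaining steps of training, from global_step to total_steps, which explains the shape printed above:

>>> print(learning_rate.shape[0] == 10000 - 100)
True
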
mindflow.common.get_warmup_cosine_annealing_lr(lr_init, steps_per_epoch, last_epoch, warmup_epochs=0, warmup_lr_init=0.0, eta_min=1e-06)[source]

Calculates the learning rate based on a cosine decay function. If warmup epochs are specified, the learning rate is warmed up by linear annealing during those epochs.

For the $i$-th step, the formula for the cosine-decayed learning rate decayed_learning_rate[i] is:

$$decayed\_learning\_rate[i] = eta\_min + 0.5 \cdot (lr\_init - eta\_min) \cdot \left(1 + \cos\left(\frac{current\_epoch}{last\_epoch}\pi\right)\right)$$

where $current\_epoch = \left\lfloor \frac{i}{steps\_per\_epoch} \right\rfloor$.

If warmup epochs are specified, for the $i$-th step within the warmup phase, the formula for warmup_learning_rate[i] is:

$$warmup\_learning\_rate[i] = (lr\_init - warmup\_lr\_init) \cdot \frac{i}{warmup\_steps} + warmup\_lr\_init$$

where $warmup\_steps = warmup\_epochs \cdot steps\_per\_epoch$.
Parameters
  • lr_init (float) – initial learning rate, positive float value.

  • steps_per_epoch (int) – number of steps in each epoch, positive int value.

  • last_epoch (int) – total number of training epochs, positive int value.

  • warmup_epochs (int) – number of warmup epochs. Default: 0.

  • warmup_lr_init (float) – initial warmup learning rate. Default: 0.0.

  • eta_min (float) – minimum learning rate. Default: 1e-6.

Returns

numpy.ndarray, the learning rate array.

Raises
  • TypeError – If lr_init or warmup_lr_init or eta_min is not a float.

  • TypeError – If steps_per_epoch or warmup_epochs or last_epoch is not an int.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindflow import get_warmup_cosine_annealing_lr
>>> lr_init = 0.001
>>> steps_per_epoch = 3
>>> last_epoch = 5
>>> warmup_epochs = 1
>>> lr = get_warmup_cosine_annealing_lr(lr_init, steps_per_epoch, last_epoch, warmup_epochs=warmup_epochs)
>>> print(lr)
[3.3333333e-04 6.6666666e-04 1.0000000e-03 9.0460398e-04 9.0460398e-04
 9.0460398e-04 6.5485400e-04 6.5485400e-04 6.5485400e-04 3.4614600e-04
 3.4614600e-04 3.4614600e-04 9.6396012e-05 9.6396012e-05 9.6396012e-05]