mindflow.common.get_multi_step_lr

mindflow.common.get_multi_step_lr(lr_init, milestones, gamma, steps_per_epoch, last_epoch)[source]

Generate a decayed learning rate array in which the learning rate is scaled by gamma each time the epoch index reaches one of the milestones.

Calculate the learning rate from the given milestones, lr_init, and gamma. Let the value of milestones be \((M_1, M_2, ..., M_t, ..., M_N)\), where N is the length of milestones, and define \(M_0 = 0\). For the i-th epoch, the learning rate is lr_init decayed once by gamma for every milestone already reached:

\[y[i] = lr\_init \times gamma^{t-1},\ for\ i \in [M_{t-1}, M_t)\]

with \(y[i] = lr\_init \times gamma^{N}\) for \(i \ge M_N\). Each epoch's value is repeated steps_per_epoch times in the returned array.
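The decay rule can be restated as a short reimplementation. The sketch below is not the library source, just an equivalent computation, assuming milestones is sorted in increasing order with all values below last_epoch:

>>> import numpy as np
>>> def multi_step_lr_sketch(lr_init, milestones, gamma, steps_per_epoch, last_epoch):
...     lr = []
...     for epoch in range(last_epoch):
...         # count how many milestones this epoch has already reached
...         t = sum(1 for m in milestones if epoch >= m)
...         # repeat this epoch's rate once per step
...         lr.extend([lr_init * gamma ** t] * steps_per_epoch)
...     return np.array(lr)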
Parameters
  • lr_init (float) – initial learning rate, a positive float.

  • milestones (Union[list[int], tuple[int]]) – list of epoch indices; each element must be greater than 0.

  • gamma (float) – multiplicative factor of learning rate decay.

  • steps_per_epoch (int) – number of steps in each epoch, a positive int.

  • last_epoch (int) – total number of training epochs, a positive int.

Returns

numpy.ndarray, learning rate array of length steps_per_epoch * last_epoch.

Raises
  • TypeError – If lr_init or gamma is not a float.

  • TypeError – If steps_per_epoch or last_epoch is not an int.

  • TypeError – If milestones is neither a tuple nor a list.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindflow import get_multi_step_lr
>>> lr_init = 0.001
>>> milestones = [2, 4]
>>> gamma = 0.1
>>> steps_per_epoch = 3
>>> last_epoch = 5
>>> lr = get_multi_step_lr(lr_init, milestones, gamma, steps_per_epoch, last_epoch)
>>> print(lr)
[1.e-03 1.e-03 1.e-03 1.e-03 1.e-03 1.e-03 1.e-04 1.e-04 1.e-04 1.e-04 1.e-04 1.e-04 1.e-05 1.e-05 1.e-05]
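
The returned array can be passed to a MindSpore optimizer as a per-step dynamic learning rate. A minimal sketch, assuming net is an already-constructed mindspore.nn.Cell (hypothetical here) and that the optimizer accepts a 1-D Tensor with one value per step:

>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> # `net` is a hypothetical Cell; any network with trainable parameters works
>>> optimizer = nn.Adam(net.trainable_params(), learning_rate=Tensor(lr))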