mindvision.engine

mindvision.engine.callback

Initialization for the base architecture engine registry.

class mindvision.engine.callback.LossMonitor(lr_init: Optional[Union[float, Iterable]] = None, per_print_times: int = 1)[source]

Loss Monitor for classification.

Parameters
  • lr_init (Union[float, Iterable], optional) – The learning rate used in training, either a fixed value or a per-step schedule. Default: None.

  • per_print_times (int) – The interval, in steps, at which log information is printed. Default: 1.

Examples

>>> from mindvision.engine.callback import LossMonitor
>>> lr = [0.01, 0.008, 0.006, 0.005, 0.002]
>>> monitor = LossMonitor(lr_init=lr, per_print_times=100)
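A monitor built this way is passed to MindSpore's Model.train through the callbacks argument, which invokes the epoch_begin/epoch_end/step_begin/step_end hooks automatically. A minimal sketch, assuming a model (ms.Model) and a dataset_train pipeline have already been constructed (both names are illustrative, not defined above):

>>> # Hypothetical continuation: `model` and `dataset_train` are assumed
>>> # to exist; the monitor logs the loss every 100 steps during training.
>>> model.train(5, dataset_train, callbacks=[monitor])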
epoch_begin(run_context)[source]

Record the time at the beginning of an epoch.

Parameters

run_context (RunContext) – Context of the running process.

epoch_end(run_context)[source]

Print training information at the end of an epoch.

Parameters

run_context (RunContext) – Context of the running process.

step_begin(run_context)[source]

Record the time at the beginning of a step.

Parameters

run_context (RunContext) – Context of the running process.

step_end(run_context)[source]

Print training information at the end of a step.

Parameters

run_context (RunContext) – Context of the running process.

class mindvision.engine.callback.ValAccMonitor(model: ms.Model, dataset_val: ms.dataset, num_epochs: int, interval: int = 1, eval_start_epoch: int = 1, save_best_ckpt: bool = True, ckpt_directory: str = './', best_ckpt_name: str = 'best.ckpt', metric_name: str = 'Accuracy', dataset_sink_mode: bool = True)[source]

Monitors the training loss and the validation accuracy, and after each epoch saves the checkpoint file with the highest validation accuracy.

Parameters
  • model (ms.Model) – The model to monitor.

  • dataset_val (ms.dataset) – The validation dataset used to evaluate the model.

  • num_epochs (int) – The total number of training epochs.

  • interval (int) – The interval, in epochs, at which validation is run and information printed. Default: 1.

  • eval_start_epoch (int) – The epoch from which to start validation. Default: 1.

  • save_best_ckpt (bool) – Whether to save the checkpoint file which performs best. Default: True.

  • ckpt_directory (str) – The path to save checkpoint files. Default: ‘./’.

  • best_ckpt_name (str) – The file name of the checkpoint file which performs best. Default: ‘best.ckpt’.

  • metric_name (str) – The name of metric for model evaluation. Default: ‘Accuracy’.

  • dataset_sink_mode (bool) – Whether to use the dataset sinking mode. Default: True.

Raises

ValueError – If interval is less than 1.

Examples

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import mindspore.dataset as ds
>>> from mindvision.classification.models import lenet
>>> from mindvision.classification.dataset import Mnist
>>> from mindvision.engine.callback import ValAccMonitor
>>>
>>> net = lenet()
>>> opt = nn.Momentum(params=net.trainable_params(), learning_rate=0.001, momentum=0.9)
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> model = ms.Model(net, loss, opt, metrics={"Accuracy": nn.Accuracy()})
>>> dataset_val = Mnist("./mnist", split="test", batch_size=32, resize=32, download=True)
>>> dataset_val = dataset_val.run()
>>> monitor = ValAccMonitor(model, dataset_val, num_epochs=10)
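The monitor is then registered with Model.train, which calls its epoch_end hook after every epoch. A hedged sketch continuing the example, assuming a training split dataset_train built the same way as dataset_val above (the training-split pipeline is an assumption, not shown in the original example):

>>> dataset_train = Mnist("./mnist", split="train", batch_size=32, resize=32, download=True)
>>> dataset_train = dataset_train.run()
>>> # Validation runs every `interval` epochs starting at `eval_start_epoch`;
>>> # the best checkpoint is written to `ckpt_directory` as `best_ckpt_name`.
>>> model.train(10, dataset_train, callbacks=[monitor])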
apply_eval()[source]

Evaluate the model and return the validation accuracy.

end(run_context)[source]

Print the best validation accuracy after network training.

Parameters

run_context (RunContext) – Context of the running process.

epoch_end(run_context)[source]

At the end of each epoch, print the training loss and validation accuracy, and save the checkpoint file with the highest validation accuracy.

Parameters

run_context (RunContext) – Context of the running process.