mindspore.LossMonitor
- class mindspore.LossMonitor(per_print_times=1)[source]
Monitor the loss during model.train, or monitor both the loss and eval metrics during model.fit.
If the loss is NAN or INF, training is terminated.
Note
If per_print_times is 0, the loss is not printed.
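The termination rule above can be sketched in plain Python. This is an illustrative stand-in, not MindSpore source code; the function name `should_terminate` is hypothetical:

```python
import math

def should_terminate(loss):
    """Mimic the check LossMonitor applies to each step's loss:
    a NaN or infinite loss means training should stop.
    (Illustrative sketch only, not MindSpore's implementation.)
    """
    loss = float(loss)
    return math.isnan(loss) or math.isinf(loss)
```

A diverging run would trip this check as soon as the loss overflows to `inf` or becomes `nan`.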
- Parameters
per_print_times (int) – Print the loss once every per_print_times steps. In data sink mode, the loss is printed at the nearest step boundary. Default: 1.
- Raises
ValueError – If per_print_times is not an integer or is less than zero.
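The per_print_times cadence and its validation can be sketched as follows. The `LossPrinter` class and its `printed` record are hypothetical, written only to illustrate the documented behavior (print every N steps, 0 disables printing, negative or non-integer values raise ValueError):

```python
class LossPrinter:
    """Illustrative sketch of LossMonitor's per_print_times cadence.
    Not MindSpore source code.
    """

    def __init__(self, per_print_times=1):
        # Validate as the docstring describes: must be an integer >= 0.
        if not isinstance(per_print_times, int) or per_print_times < 0:
            raise ValueError("per_print_times must be an integer >= 0.")
        self.per_print_times = per_print_times
        self.printed = []  # record of (step, loss) pairs that would be printed

    def step_end(self, step, loss):
        # per_print_times == 0 suppresses all printing;
        # otherwise print on every per_print_times-th step.
        if self.per_print_times and step % self.per_print_times == 0:
            self.printed.append((step, loss))
```

With `per_print_times=2`, steps 2, 4, 6, ... are recorded; with `per_print_times=0`, nothing is.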
Examples
>>> import mindspore as ms
>>> from mindspore import nn
>>>
>>> net = LeNet5()
>>> loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> optim = nn.Momentum(net.trainable_params(), 0.01, 0.9)
>>> model = ms.Model(net, loss_fn=loss, optimizer=optim)
>>> data_path = './MNIST_Data'
>>> dataset = create_dataset(data_path)
>>> loss_monitor = LossMonitor()
>>> model.train(10, dataset, callbacks=loss_monitor)
- on_train_epoch_end(run_context)[source]
When LossMonitor is used in model.fit, print the eval metrics at the end of the epoch if the current epoch should do evaluation.
- Parameters
run_context (RunContext) – Include some information of the model. For more details, please refer to mindspore.RunContext.
- step_end(run_context)[source]
Print the training loss at the end of the step.
- Parameters
run_context (RunContext) – Include some information of the model. For more details, please refer to mindspore.RunContext.