mindflow.loss

class mindflow.loss.MTLWeightedLossCell(num_losses, bound_param=0.0)[source]

Compute the multi-task losses with automatically learned weights (the MTL weighting strategy). For more information, please refer to MTL weighted losses.
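
As a sketch of the weighting scheme, assuming the uncertainty-weighting form with one trainable weight w_i per input loss l_i (the parameter names and the exact regularization term here are assumptions, not taken from the source):

loss = \sum_{i=1}^{\mathrm{num\_losses}} \left( \frac{l_i}{2 w_i^{2}} + \log\left(w_i^{2} + 1\right) \right)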

Parameters
  • num_losses (int) – The number of multi-task losses; should be a positive integer.

  • bound_param (float) – Lower-bound augmentation for the weights: when bound_param is greater than a small fixed constant, its square is added to the squared weights and to the regularization term. Default: 0.0.

Inputs:
  • input (tuple[Tensor]) - The individual task losses, packed as a tuple of num_losses Tensors.

Outputs:

Tensor, the automatically weighted sum of the input losses.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindflow.loss import MTLWeightedLossCell
>>> import mindspore
>>> from mindspore import Tensor
>>> net = MTLWeightedLossCell(num_losses=2)
>>> input1 = Tensor(1.0, mindspore.float32)
>>> input2 = Tensor(0.8, mindspore.float32)
>>> output = net((input1, input2))
>>> print(output)
2.2862945
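
With both trainable weights at their assumed initial value w_i = 1, the printed value agrees with the sketch above: 0.5 × 1.0 + 0.5 × 0.8 + 2 ln 2 ≈ 2.2862944.
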
class mindflow.loss.RelativeRMSELoss(reduction='sum')[source]

Relative Root Mean Square Error (RRMSE) is the root-mean-square error normalized by the root-mean-square value of the labels, so that each residual is scaled against the actual value. RelativeRMSELoss creates a criterion to measure the relative root-mean-square error between x and y element-wise, where x is the prediction and y is the labels.

For simplicity, let x and y be 1-dimensional Tensors of length N. The loss of x and y is then given as:

loss = \frac{\sqrt{\frac{1}{N}\sum_{i=1}^{N}(x_i - y_i)^2}}{\sqrt{\sum_{i=1}^{N} y_i^2}}
Parameters

reduction (str) – Type of reduction to be applied to loss. The optional values are “mean”, “sum”, and “none”. Default: “sum”.

Inputs:
  • prediction (Tensor) - The prediction value of the network. Tensor of shape (N, *), where * means any number of additional dimensions.

  • labels (Tensor) - True values of the samples. Tensor of shape (N, *), where * means any number of additional dimensions. In common cases it has the same shape as prediction, but a different shape is also supported as long as labels and prediction can be broadcast against each other.

Outputs:

Tensor, weighted loss.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindflow import RelativeRMSELoss
>>> # Case: prediction.shape = labels.shape = (3, 3)
>>> prediction = Tensor(np.array([[1, 2, 3],[1, 2, 3],[1, 2, 3]]), mindspore.float32)
>>> labels = Tensor(np.array([[1, 2, 2],[1, 2, 3],[1, 2, 3]]), mindspore.float32)
>>> loss_fn = RelativeRMSELoss()
>>> loss = loss_fn(prediction, labels)
>>> print(loss)
0.33333334
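
A NumPy cross-check of this example, under the assumption that the loss is computed per batch row as ||x − y|| / ||y|| and the rows are then combined by the default reduction="sum":

>>> import numpy as np
>>> pred = np.array([[1, 2, 3], [1, 2, 3], [1, 2, 3]], dtype=np.float32)
>>> labels = np.array([[1, 2, 2], [1, 2, 3], [1, 2, 3]], dtype=np.float32)
>>> # Relative error per row: sqrt(sum((x - y)^2)) / sqrt(sum(y^2)) -> [1/3, 0, 0]
>>> rel = np.sqrt(((pred - labels) ** 2).sum(axis=1)) / np.sqrt((labels ** 2).sum(axis=1))
>>> print(rel.sum())
0.33333334
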
class mindflow.loss.WaveletTransformLoss(wave_level=2, regroup=False)[source]

The multi-level wavelet transformation loss.

Parameters
  • wave_level (int) – The number of wavelet transformation levels; should be a positive integer.

  • regroup (bool) – Whether to use the regrouped error combination form of the wavelet transformation losses. Default: False.

Inputs:
  • input (tuple[Tensor]) - Tuple of Tensors of shape (B, H*W/(P*P), P*P*C), where B denotes the batch size, H and W denote the height and the width of the image respectively, P denotes the patch size, and C denotes the number of feature channels. For example, with patch size P = 4 the shape (32, 288, 768) used below corresponds to a batch of 32 images with H*W = 4608 pixels and C = 48 channels.

Outputs:

Tensor, the multi-level wavelet transformation loss.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindflow.loss import WaveletTransformLoss
>>> import mindspore
>>> from mindspore import Tensor
>>> net = WaveletTransformLoss(wave_level=2)
>>> input1 = Tensor(np.ones((32, 288, 768)), mindspore.float32)
>>> input2 = Tensor(np.ones((32, 288, 768)), mindspore.float32)
>>> output = net((input1, input2))
>>> print(output)
2.0794415
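
For intuition, a minimal NumPy sketch of a multi-level wavelet loss built from a 1-D Haar transform along the last axis. This is a conceptual illustration only; the cell's actual transform, patch handling, and error combination (including the regroup form) may differ:

>>> import numpy as np
>>> def haar_step(x):
...     # One Haar level along the last axis: low-pass (average) and high-pass (difference).
...     even, odd = x[..., ::2], x[..., 1::2]
...     return (even + odd) / np.sqrt(2), (even - odd) / np.sqrt(2)
...
>>> def wavelet_loss(pred, target, levels=2):
...     # Accumulate the MSE between high-pass coefficients at each level;
...     # only the low-pass band is decomposed further.
...     loss = 0.0
...     for _ in range(levels):
...         (pred, p_hi), (target, t_hi) = haar_step(pred), haar_step(target)
...         loss += np.mean((p_hi - t_hi) ** 2)
...     return loss + np.mean((pred - target) ** 2)  # coarsest low-pass band
...
>>> x = np.ones((32, 288, 768), dtype=np.float32)
>>> print(wavelet_loss(x, 0.5 * x))
1.0
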
mindflow.loss.get_loss_metric(name)[source]

Gets the loss function.

Parameters

name (str) – The name of the loss function.

Returns

Function, the loss function.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindflow.loss import get_loss_metric
>>> import mindspore
>>> from mindspore import Tensor
>>> l1_loss = get_loss_metric('l1_loss')
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([[1, 1, 1], [1, 2, 2]]), mindspore.float32)
>>> output = l1_loss(logits, labels)
>>> print(output)
0.6666667
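
The printed value can be checked by hand, assuming get_loss_metric('l1_loss') returns an L1 loss with the default mean reduction: broadcasting logits over the two rows of labels gives absolute errors 0, 1, 2 and 0, 0, 1, so the mean is 4/6 ≈ 0.6666667.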