mindearth.core.RelativeRMSELoss
- class mindearth.core.RelativeRMSELoss(reduction='mean')
Relative Root Mean Square Error (RRMSE) is the root mean squared error normalized by the root mean square of the labels, so that each residual is scaled against the actual value. RelativeRMSELoss creates a criterion to measure the relative root mean square error between \(x\) and \(y\) element-wise, where \(x\) is the prediction and \(y\) is the labels.
For simplicity, let \(x\) and \(y\) be 1-dimensional Tensors of length \(N\). The loss of \(x\) and \(y\) is then given as:
\[loss = \sqrt{\frac{\frac{1}{N}\sum_{i=1}^{N}{(x_i-y_i)^2}}{\sum_{i=1}^{N}{(y_i)^2}}}\]
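For illustration, this expression can be evaluated directly with NumPy. The following is a minimal sketch of the formula as written above for 1-dimensional arrays; it is not the library implementation, and the class itself additionally applies the reduction described below.

>>> import numpy as np
>>> # Direct evaluation of the formula above for 1-D arrays (illustration only).
>>> x = np.array([1.0, 2.0, 3.0])   # prediction
>>> y = np.array([1.0, 2.0, 2.0])   # labels
>>> rrmse = np.sqrt(np.mean((x - y) ** 2) / np.sum(y ** 2))
>>> print(round(float(rrmse), 5))
0.19245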
- Parameters
reduction (str) – Type of reduction to be applied to the loss. The optional values are "mean", "sum", and "none". Default: "mean".
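The reduction mode is fixed at construction time. A minimal usage sketch, assuming only the constructor signature documented above:

>>> from mindearth.core import RelativeRMSELoss
>>> # Sum the loss terms instead of averaging them.
>>> loss_fn_sum = RelativeRMSELoss(reduction='sum')
>>> # Skip the final reduction of the loss.
>>> loss_fn_none = RelativeRMSELoss(reduction='none')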
- Inputs:
prediction (Tensor) - Tensor of shape \((N, *)\), where \(*\) means any number of additional dimensions.
labels (Tensor) - Tensor of shape \((N, *)\), usually the same shape as prediction. However, the shape of prediction may differ from the shape of labels as long as the two can be broadcast to each other.
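Since the inputs may carry additional dimensions beyond the batch dimension, here is a sketch with 3-dimensional inputs of matching shape; the values are arbitrary placeholders for illustration only.

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindearth.core import RelativeRMSELoss
>>> # Both inputs of shape (N, *) = (2, 3, 4); placeholder values for illustration.
>>> prediction = Tensor(np.ones((2, 3, 4)), mindspore.float32)
>>> labels = Tensor(np.full((2, 3, 4), 2.0), mindspore.float32)
>>> loss_fn = RelativeRMSELoss()
>>> loss = loss_fn(prediction, labels)   # scalar loss under the default "mean" reduction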
- Outputs:
output (Tensor) - The weighted loss, a float Tensor of shape \(()\).
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor
>>> from mindearth.core import RelativeRMSELoss
>>> # Case: prediction.shape = labels.shape = (3, 3)
>>> prediction = Tensor(np.array([[1, 2, 3], [1, 2, 3], [1, 2, 3]]), mindspore.float32)
>>> labels = Tensor(np.array([[1, 2, 2], [1, 2, 3], [1, 2, 3]]), mindspore.float32)
>>> loss_fn = RelativeRMSELoss()
>>> loss = loss_fn(prediction, labels)
>>> print(loss)
0.11111112