mindspore.mint.nn.MSELoss
- class mindspore.mint.nn.MSELoss(reduction='mean')[source]
Calculates the mean squared error between the predicted value and the label value.
For simplicity, let \(x\) and \(y\) be 1-dimensional Tensors of length \(N\). The unreduced loss (i.e. with argument reduction set to 'none') of \(x\) and \(y\) is given as:

\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad \text{with} \quad l_n = (x_n - y_n)^2,\]

where \(N\) is the batch size. If reduction is not 'none', then:

\[\begin{split}\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

- Parameters
reduction (str, optional) – Apply a specific reduction method to the output: 'none', 'mean', 'sum'. Default: 'mean'.
  - 'none': no reduction will be applied.
  - 'mean': compute and return the mean of the elements in the output.
  - 'sum': the output elements will be summed.
- Inputs:
logits (Tensor) - The predicted value of the input. Tensor of any dimension. Its data type must be the same as that of labels, and its shape must be broadcastable with labels.
labels (Tensor) - The input label. Tensor of any dimension. Its data type must be the same as that of logits, and its shape must be broadcastable with logits.
- Outputs:
Tensor. If reduction is 'mean' or 'sum', the output is a scalar Tensor. If reduction is 'none', the shape of the output is the broadcasted shape of logits and labels.
- Raises
ValueError – If reduction is not one of 'mean', 'sum' or 'none'.
ValueError – If logits and labels are not broadcastable.
TypeError – If logits and labels have different data types.
- Supported Platforms:
Ascend
Examples
>>> import mindspore
>>> from mindspore import Tensor, mint
>>> import numpy as np
>>> # Case 1: logits.shape = labels.shape = (3,)
>>> loss = mint.nn.MSELoss()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
1.6666667
>>> # Case 2: logits.shape = (3,), labels.shape = (2, 3)
>>> loss = mint.nn.MSELoss(reduction='none')
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([[1, 1, 1], [1, 2, 2]]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
[[0. 1. 4.]
 [0. 0. 1.]]
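The two cases above can be checked against the formulas directly. The following is a minimal NumPy sketch of the same computation (an illustration of the math, not MindSpore's actual implementation); the function name `mse_loss` is chosen here for the example:

```python
import numpy as np

def mse_loss(logits, labels, reduction="mean"):
    # Element-wise squared error; NumPy broadcasting mirrors the
    # broadcastable-shapes requirement stated in the Inputs section.
    l = (np.asarray(logits, dtype=np.float32)
         - np.asarray(labels, dtype=np.float32)) ** 2
    if reduction == "mean":
        return l.mean()
    if reduction == "sum":
        return l.sum()
    if reduction == "none":
        return l
    raise ValueError("reduction must be one of 'mean', 'sum' or 'none'")

# Case 1: mean of the squared errors [0, 1, 4] is 5/3.
print(mse_loss([1, 2, 3], [1, 1, 1]))
# Case 2: logits of shape (3,) broadcast against labels of shape (2, 3),
# and reduction='none' returns the per-element squared errors.
print(mse_loss([1, 2, 3], [[1, 1, 1], [1, 2, 2]], reduction="none"))
```

With reduction='none' the result keeps the broadcasted shape (2, 3), matching the Outputs description.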