mindspore.nn.MSELoss
- class mindspore.nn.MSELoss(reduction='mean')
Calculates the mean squared error between the predicted value and the label value.
For simplicity, let $x$ and $y$ be 1-dimensional Tensors of length $N$. The unreduced loss (i.e. with argument reduction set to 'none') of $x$ and $y$ is given as:

$$\ell(x, y) = L = \{l_1, \dots, l_N\}^\top, \quad l_n = (x_n - y_n)^2,$$

where $N$ is the batch size. If reduction is not 'none', then:

$$\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean'}; \\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'}. \end{cases}$$

- Parameters
reduction (str, optional) – Apply a specific reduction method to the output: 'none', 'mean' or 'sum'. Default: 'mean'.
'none': no reduction will be applied.
'mean': compute and return the mean of the elements in the output.
'sum': the elements of the output will be summed.
The sketch after this parameter list illustrates the effect of each mode.
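The following minimal sketch, using only the API documented on this page, shows how the three reduction modes differ for the same inputs; the values follow from the formula above (element-wise squared errors [0, 1, 4]):

>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> # 'none' keeps the per-element squared errors
>>> print(nn.MSELoss(reduction='none')(logits, labels))
[0. 1. 4.]
>>> # 'mean' averages them: (0 + 1 + 4) / 3
>>> print(nn.MSELoss(reduction='mean')(logits, labels))
1.6666667
>>> # 'sum' adds them: 0 + 1 + 4
>>> print(nn.MSELoss(reduction='sum')(logits, labels))
5.0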
- Inputs:
logits (Tensor) - The predicted value of the input. Tensor of any dimension.
labels (Tensor) - The input label. Tensor of any dimension, usually with the same shape as logits. However, the shape of labels may differ from that of logits as long as the two can be broadcast to each other (see Case 2 in the Examples below).
- Outputs:
Tensor of type float. If reduction is 'mean' or 'sum', the output is a scalar (0-dimensional Tensor); if reduction is 'none', the output has the broadcast shape of logits and labels, as the shape check below illustrates.
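A short sketch of the resulting shapes; this assumes Tensor.shape returns a Python tuple, with () denoting a 0-dimensional Tensor:

>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> # Reduced output is a scalar
>>> print(nn.MSELoss(reduction='mean')(logits, labels).shape)
()
>>> # Unreduced output keeps the broadcast shape
>>> print(nn.MSELoss(reduction='none')(logits, labels).shape)
(3,)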
- Raises
ValueError – If reduction is not one of 'none', 'mean' or 'sum'.
ValueError – If logits and labels have different shapes and cannot be broadcast to each other.
TypeError – If logits and labels have different data types.
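As a hedged illustration of the broadcast check, the sketch below assumes the ValueError documented above is raised at call time (the exact error type may differ between PyNative and graph mode):

>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> bad_labels = Tensor(np.array([1, 1]), mindspore.float32)  # shape (2,) cannot broadcast with (3,)
>>> try:
...     _ = nn.MSELoss()(logits, bad_labels)
... except ValueError:
...     print('shapes cannot be broadcast')
shapes cannot be broadcast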
- Supported Platforms:
Ascend GPU CPU
Examples
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> # Case 1: logits.shape = labels.shape = (3,)
>>> loss = nn.MSELoss()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 1, 1]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
1.6666667
>>> # Case 2: logits.shape = (3,), labels.shape = (2, 3)
>>> loss = nn.MSELoss(reduction='none')
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([[1, 1, 1], [1, 2, 2]]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
[[0. 1. 4.]
 [0. 0. 1.]]