mindspore.ops.mse_loss
- mindspore.ops.mse_loss(input, target, reduction='mean')[source]
Calculates the mean squared error between the predicted value and the label value.
For detailed information, please refer to mindspore.nn.MSELoss.
- Parameters
input (Tensor) – Tensor of any dimension.
target (Tensor) – The input label. Tensor of any dimension. In the common case it has the same shape as input; however, input and target may have different shapes as long as they can be broadcast against each other (see the sketch after this parameter list).
reduction (str, optional) – Type of reduction to be applied to the loss. The optional values are "mean", "none" and "sum". Default: 'mean'.
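The following is a minimal sketch of the broadcasting behaviour described for target above: an input of shape (2, 1) and a target of shape (1, 3) are broadcast against each other to (2, 3) before the element-wise squared errors are computed. The printed values are worked out by hand from those squared differences.
>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.array([[1.0], [2.0]]), mindspore.float32)      # shape (2, 1)
>>> y = Tensor(np.array([[1.0, 2.0, 3.0]]), mindspore.float32)   # shape (1, 3)
>>> out = ops.mse_loss(x, y, reduction='none')                   # broadcast to (2, 3)
>>> print(out)
[[0. 1. 4.]
 [1. 0. 1.]]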
- Returns
Tensor, loss of type float. The output is a zero-dimensional (scalar) Tensor if reduction is 'mean' or 'sum', while the shape of the output is the broadcasted shape of input and target if reduction is 'none'.
- Raises
ValueError – If reduction is not one of 'none', 'mean' or 'sum'.
ValueError – If input and target have different shapes and cannot be broadcast.
- Supported Platforms:
Ascend GPU CPU
Examples
>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([[1, 1, 1], [1, 2, 2]]), mindspore.float32)
>>> output = ops.mse_loss(logits, labels, reduction='none')
>>> print(output)
[[0. 1. 4.]
 [0. 0. 1.]]
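As a follow-up sketch on the same logits and labels, the 'mean' and 'sum' reductions collapse the six element-wise squared errors (0, 1, 4, 0, 0, 1) into a single scalar; the expected values below are worked out by hand from that arithmetic, and the exact printed form of the scalar Tensor may vary slightly by version.
>>> print(ops.mse_loss(logits, labels, reduction='mean'))
1.0
>>> print(ops.mse_loss(logits, labels, reduction='sum'))
6.0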