mindspore.ops.mse_loss

mindspore.ops.mse_loss(input, target, reduction='mean')[source]

Calculates the mean squared error between the predicted value and the label value.

For detailed information, please refer to mindspore.nn.MSELoss.

Parameters
  • input (Tensor) – Tensor of any dimension.

  • target (Tensor) – The label. Tensor of any dimension, typically with the same shape as input. input and target may also have different shapes, provided the two shapes are broadcastable to each other.

  • reduction (str, optional) –

    Apply a specific reduction method to the output: 'none', 'mean' or 'sum'. Default: 'mean'.

    • 'none': no reduction will be applied.

    • 'mean': compute and return the mean of elements in the output.

    • 'sum': the output elements will be summed.
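The three reduction modes can be sketched with plain NumPy (a sketch of the arithmetic only; it stands in for mindspore and is not the actual implementation):

```python
import numpy as np

# Element-wise squared error between a prediction and a label.
input_ = np.array([1.0, 2.0, 3.0], dtype=np.float32)
target = np.array([1.0, 1.0, 1.0], dtype=np.float32)

sq_err = (input_ - target) ** 2   # [0. 1. 4.]

none_out = sq_err         # 'none': the per-element losses, unreduced
mean_out = sq_err.mean()  # 'mean': 5/3, the average of all elements
sum_out = sq_err.sum()    # 'sum': 5.0, the total of all elements
```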

Returns

Tensor, the loss, with a float dtype. If reduction is 'mean' or 'sum', the output is a scalar (0-D) Tensor; if reduction is 'none', the output has the broadcasted shape of input and target.

Raises
  • ValueError – If reduction is not one of 'none', 'mean' or 'sum'.

  • ValueError – If input and target have different shapes that cannot be broadcast against each other.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([[1, 1, 1], [1, 2, 2]]), mindspore.float32)
>>> output = ops.mse_loss(logits, labels, reduction='none')
>>> print(output)
[[0. 1. 4.]
 [0. 0. 1.]]
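
The example above can be cross-checked with NumPy broadcasting, and extended to the other two reduction modes (a sketch of the equivalent arithmetic, not a call into mindspore): logits of shape (3,) broadcast against labels of shape (2, 3), so reduction='none' yields a (2, 3) result, while 'mean' and 'sum' collapse it to a scalar.

```python
import numpy as np

logits = np.array([1, 2, 3], dtype=np.float32)                 # shape (3,)
labels = np.array([[1, 1, 1], [1, 2, 2]], dtype=np.float32)    # shape (2, 3)

sq_err = (logits - labels) ** 2   # broadcast to (2, 3): [[0. 1. 4.], [0. 0. 1.]]

print(sq_err)         # matches the reduction='none' output above
print(sq_err.mean())  # 1.0 -> what reduction='mean' would return
print(sq_err.sum())   # 6.0 -> what reduction='sum' would return
```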