mindspore.ops.l1_loss

mindspore.ops.l1_loss(input, target, reduction='mean')

Calculate the mean absolute error between the input value and the target value.

Assuming that x and y (the predicted and target values) are 1-D Tensors of length N and reduction is set to 'none', the loss of x and y is calculated without dimensionality reduction.

The formula is as follows:

\ell(x, y) = L = \{l_1, \dots, l_N\}^\top, \quad \text{with } l_n = \left| x_n - y_n \right|,

where N is the batch size.

If reduction is 'mean' or 'sum', then:

\ell(x, y) =
\begin{cases}
    \operatorname{mean}(L), & \text{if reduction} = \text{'mean';} \\
    \operatorname{sum}(L),  & \text{if reduction} = \text{'sum'.}
\end{cases}
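For instance, a minimal 1-D sketch (values chosen here purely for illustration) shows how each reduction mode maps onto the formulas above:

>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> x = Tensor([1, 2, 3], mstype.float32)
>>> y = Tensor([2, 3, 4], mstype.float32)
>>> print(ops.l1_loss(x, y, reduction='none'))  # l_n = |x_n - y_n|
[1. 1. 1.]
>>> print(ops.l1_loss(x, y, reduction='mean'))  # mean(L)
1.0
>>> print(ops.l1_loss(x, y, reduction='sum'))   # sum(L)
3.0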
Parameters
  • input (Tensor) – Predicted value, Tensor of any dimension.

  • target (Tensor) – Target value, which usually has the same shape as input. If input and target have different shapes, make sure they can broadcast to each other (see the broadcasting sketch after this list).

  • reduction (str, optional) –

    Apply a specific reduction method to the output: 'none', 'mean', or 'sum'. Default: 'mean'.

    • 'none': no reduction will be applied.

    • 'mean': compute and return the mean of elements in the output.

    • 'sum': the output elements will be summed.
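As noted for target above, the two inputs only need to be broadcastable. A minimal sketch (shapes are hypothetical, chosen for illustration); with reduction='none' the result takes the broadcast shape:

>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> x = Tensor([[1, 2, 3], [4, 5, 6]], mstype.float32)  # shape (2, 3)
>>> t = Tensor([2, 2, 2], mstype.float32)               # shape (3,), broadcasts to (2, 3)
>>> output = ops.l1_loss(x, t, reduction='none')
>>> print(output.shape)
(2, 3)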

Returns

Tensor or Scalar. If reduction is 'none', returns a Tensor with the same shape and dtype as input. Otherwise, a scalar value is returned.

Raises
  • TypeError – If input is not a Tensor.

  • TypeError – If target is not a Tensor.

  • ValueError – If reduction is not one of 'none', 'mean' or 'sum'.

Supported Platforms:

Ascend GPU CPU

Examples

>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> x = Tensor([[1, 2, 3], [4, 5, 6]], mstype.float32)
>>> target = Tensor([[6, 5, 4], [3, 2, 1]], mstype.float32)
>>> output = ops.l1_loss(x, target, reduction="mean")
>>> print(output)
3.0
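
For comparison, a sketch of the same inputs under the other two reduction modes (the printed formatting is assumed to follow the example above):

>>> output_none = ops.l1_loss(x, target, reduction="none")
>>> print(output_none)
[[5. 3. 1.]
 [1. 3. 5.]]
>>> output_sum = ops.l1_loss(x, target, reduction="sum")
>>> print(output_sum)
18.0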