mindspore.mint.nn.functional.l1_loss
- mindspore.mint.nn.functional.l1_loss(input, target, reduction='mean')
Calculate the mean absolute error between the input value and the target value.
Assuming that x and y are the predicted value and target value, both one-dimensional tensors of length N, and reduction is set to 'none', then the loss of x and y is calculated without dimensionality reduction. The formula is as follows:

$$\ell(x, y) = L = \{l_1, \dots, l_N\}^\top, \quad \text{with } l_n = |x_n - y_n|,$$

where N is the batch size. If reduction is 'mean' or 'sum', then:

$$\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean'}; \\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'}. \end{cases}$$

- Parameters
input (Tensor) – Predicted value, Tensor of any dimension.
target (Tensor) – Target value, usually with the same shape as the input. If input and target have different shapes, make sure they can broadcast to each other (see the sketch after this parameter list).
reduction (str, optional) – Apply a specific reduction method to the output: 'none', 'mean' or 'sum'. Default: 'mean'.
'none': no reduction will be applied.
'mean': compute and return the mean of the output elements. Note: at least one of input and target must be of float type when reduction is 'mean'.
'sum': the output elements will be summed.
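A minimal broadcasting sketch (assuming a MindSpore environment where the mint API is available, as in the Examples below; the printed value is shown to float32 precision): input and target only need to be broadcastable, not identical in shape.

>>> import mindspore as ms
>>> from mindspore import Tensor, mint
>>> x = Tensor([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]], ms.float32)  # shape (2, 3)
>>> t = Tensor([[2.0, 2.0, 2.0]], ms.float32)  # shape (1, 3), broadcasts to (2, 3)
>>> out = mint.nn.functional.l1_loss(x, t, reduction="mean")
>>> print(out)  # (1 + 0 + 1 + 2 + 3 + 4) / 6
1.8333334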
- Returns
Tensor or Scalar. If reduction is 'none', returns a Tensor with the same shape and dtype as input. Otherwise, a scalar value is returned.
- Raises
TypeError – If input is not a Tensor.
TypeError – If target is not a Tensor.
ValueError – If reduction is not one of 'none', 'mean' or 'sum'.
- Supported Platforms:
Ascend
Examples
>>> from mindspore import Tensor, mint
>>> from mindspore import dtype as mstype
>>> x = Tensor([[1, 2, 3], [4, 5, 6]], mstype.float32)
>>> target = Tensor([[6, 5, 4], [3, 2, 1]], mstype.float32)
>>> output = mint.nn.functional.l1_loss(x, target, reduction="mean")
>>> print(output)
3.0
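The following sketch (same assumed environment as the example above; printed tensor formatting may vary slightly) relates the three reduction modes to the formula above: mint.abs, mint.mean and mint.sum are used only to re-derive the reduced results by hand.

>>> from mindspore import Tensor, mint
>>> from mindspore import dtype as mstype
>>> x = Tensor([[1, 2, 3], [4, 5, 6]], mstype.float32)
>>> y = Tensor([[6, 5, 4], [3, 2, 1]], mstype.float32)
>>> l = mint.abs(x - y)  # element-wise l_n = |x_n - y_n|
>>> print(mint.nn.functional.l1_loss(x, y, reduction="none"))  # equals l
[[5. 3. 1.]
 [1. 3. 5.]]
>>> print(mint.mean(l), mint.sum(l))  # match reduction='mean' and reduction='sum'
3.0 18.0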