mindspore.nn.L1Loss
- class mindspore.nn.L1Loss(reduction='mean')[source]
L1Loss creates a criterion to measure the mean absolute error (MAE) between \(x\) and \(y\) element-wise, where \(x\) is the input Tensor and \(y\) is the target Tensor.
For simplicity, let \(x\) and \(y\) be 1-dimensional Tensors of length \(N\). The unreduced loss (i.e., with reduction set to ‘none’) of \(x\) and \(y\) is given as:
\[L(x, y) = \{l_1,\dots,l_N\}, \quad \text{with } l_n = \left| x_n - y_n \right|\]

When reduction is ‘mean’, the mean value of \(L(x, y)\) is returned. When reduction is ‘sum’, the sum of \(L(x, y)\) is returned. \(N\) is the batch size.
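As a quick illustration of the formula, the following sketch computes the three reduction modes directly with NumPy; this is plain NumPy for exposition, not part of the MindSpore API:

>>> import numpy as np
>>> x = np.array([1, 2, 3], np.float32)
>>> y = np.array([1, 2, 2], np.float32)
>>> l = np.abs(x - y)    # unreduced loss, reduction='none'
>>> print(l)
[0. 0. 1.]
>>> print(l.mean())      # reduction='mean'
0.33333334
>>> print(l.sum())       # reduction='sum'
1.0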
- Parameters
reduction (str) – Type of reduction to be applied to loss. The optional values are “mean”, “sum”, and “none”. Default: “mean”.
- Inputs:
logits (Tensor) - Tensor of shape \((x_1, x_2, ..., x_R)\).
labels (Tensor) - Tensor of shape \((y_1, y_2, ..., y_S)\).
- Outputs:
Tensor, the loss expressed as a float Tensor. If reduction is ‘mean’ or ‘sum’, the output is a scalar Tensor; if reduction is ‘none’, the output has the same shape as the inputs.
- Raises
ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.
- Supported Platforms:
Ascend GPU CPU
Examples
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> loss = nn.L1Loss()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
0.33333334
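For comparison, here is a sketch of the other reduction modes on the same inputs; the expected values follow from the formula above:

>>> loss_none = nn.L1Loss(reduction='none')
>>> print(loss_none(logits, labels))
[0. 0. 1.]
>>> loss_sum = nn.L1Loss(reduction='sum')
>>> print(loss_sum(logits, labels))
1.0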