mindspore.ops.SmoothL1Loss
- class mindspore.ops.SmoothL1Loss(beta=1.0, reduction='none')
Calculates the smooth L1 loss, which is robust to outliers: it penalizes small errors quadratically (like L2) and large errors linearly (like L1).
Refer to mindspore.ops.smooth_l1_loss() for more details.
- Parameters
beta (number, optional) – A parameter used to control the point where the loss switches between the L2 and L1 penalties (see the sketch after this parameter list). Default: 1.0.
Ascend: the value must be greater than or equal to zero.
CPU/GPU: the value must be greater than zero.
reduction (str, optional) – Apply a specific reduction method to the output: 'none', 'mean' or 'sum'. Default: 'none'.
'none': no reduction will be applied.
'mean': compute and return the mean of the elements in the output.
'sum': the output elements will be summed.
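The piecewise rule these two parameters control can be sketched in plain NumPy. This is an illustrative reference only, assuming beta > 0 and the standard formula documented in mindspore.ops.smooth_l1_loss, not the operator's actual implementation:

import numpy as np

def smooth_l1_reference(logits, labels, beta=1.0, reduction='none'):
    # Per-element absolute error; the loss is quadratic (L2-like)
    # below beta and linear (L1-like) above it. Assumes beta > 0.
    diff = np.abs(logits - labels)
    loss = np.where(diff < beta,
                    0.5 * diff ** 2 / beta,
                    diff - 0.5 * beta)
    if reduction == 'mean':
        return loss.mean()
    if reduction == 'sum':
        return loss.sum()
    return loss  # reduction == 'none'

print(smooth_l1_reference(np.array([1., 2., 3.]), np.array([1., 2., 2.])))
# -> [0.  0.  0.5]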
- Inputs:
logits (Tensor) - Input Tensor of any dimension. Supported dtypes:
Ascend: float16, float32, bfloat16.
CPU/GPU: float16, float32, float64.
labels (Tensor) - Ground truth data.
CPU/Ascend: has the same shape as logits; logits and labels follow the implicit type conversion rules to make their data types consistent.
GPU: has the same shape and dtype as the logits.
- Outputs:
Tensor. If reduction is 'none', the output is a tensor with the same shape as logits. Otherwise, the shape of the output tensor is \(()\).
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> loss = ops.SmoothL1Loss()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
[0. 0. 0.5]
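A further sketch, not from the original page, reusing the logits and labels above to show non-default constructor arguments; with reduction='mean' the per-element losses collapse to a scalar (here the third error is 1 >= beta, so its loss is 1 - 0.5 * 0.5 = 0.75, and the mean over three elements is 0.25):

>>> loss = ops.SmoothL1Loss(beta=0.5, reduction='mean')
>>> output = loss(logits, labels)
>>> print(output)
0.25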