mindspore.ops.SmoothL1Loss
- class mindspore.ops.SmoothL1Loss(*args, **kwargs)[source]
Computes smooth L1 loss, a robust L1 loss.
SmoothL1Loss is a loss function similar to MSELoss but less sensitive to outliers, as described in Fast R-CNN by Ross Girshick.
The SmoothL1Loss is computed as follows,
\[\text{SmoothL1Loss}(x) = \begin{cases} \frac{0.5 x^{2}}{\text{beta}}, &\text{if } \left|x\right| < \text{beta} \cr \left|x\right| - 0.5\,\text{beta}, &\text{otherwise}\end{cases}\]where \(x\) is the element-wise difference between prediction and target, and the output is the element-wise loss.
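The piecewise formula above can be sketched in plain NumPy (a reference implementation for illustration, not the MindSpore operator itself; the function name `smooth_l1` is hypothetical):

```python
import numpy as np

def smooth_l1(prediction, target, beta=1.0):
    # x is the element-wise absolute difference between prediction and target
    x = np.abs(prediction - target)
    # Quadratic inside the beta window, linear outside it
    return np.where(x < beta, 0.5 * x ** 2 / beta, x - 0.5 * beta)

pred = np.array([1.0, 2.0, 3.0], dtype=np.float32)
tgt = np.array([1.0, 2.0, 2.0], dtype=np.float32)
print(smooth_l1(pred, tgt))  # [0.  0.  0.5]
```

Note that at \(|x| = \text{beta}\) both branches give the same value \(0.5\,\text{beta}\), so the loss is continuous.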
- Parameters
beta (float) – A parameter used to control the point where the function will change from quadratic to linear. Default: 1.0.
- Inputs:
prediction (Tensor) - Predict data. Data type must be float16 or float32.
target (Tensor) - Ground truth data, with the same type and shape as prediction.
- Outputs:
Tensor, with the same type and shape as prediction.
- Raises
TypeError – If beta is not a float.
TypeError – If prediction or target is not a Tensor.
TypeError – If dtype of prediction or target is neither float16 nor float32.
ValueError – If beta is less than or equal to 0.
ValueError – If the shape of prediction is not the same as that of target.
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> loss = ops.SmoothL1Loss()
>>> input_data = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> target_data = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = loss(input_data, target_data)
>>> print(output)
[0.  0.  0.5]
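The effect of beta can be seen by widening the quadratic window. The sketch below uses a plain NumPy transcription of the formula (an illustrative stand-in for the operator; the helper name `smooth_l1` is hypothetical): with beta=2.0, the element whose difference is 1 now falls inside the quadratic region, so its loss drops from 0.5 to 0.25.

```python
import numpy as np

def smooth_l1(prediction, target, beta):
    # Same piecewise rule as the operator's formula
    x = np.abs(prediction - target)
    return np.where(x < beta, 0.5 * x ** 2 / beta, x - 0.5 * beta)

pred = np.array([1.0, 2.0, 3.0], dtype=np.float32)
tgt = np.array([1.0, 2.0, 2.0], dtype=np.float32)
# |pred - tgt| = [0, 0, 1]; with beta=2.0 the last element is quadratic:
# 0.5 * 1**2 / 2.0 = 0.25
print(smooth_l1(pred, tgt, beta=2.0))  # [0.  0.  0.25]
```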