mindspore.nn.SmoothL1Loss
- class mindspore.nn.SmoothL1Loss(beta=1.0)[source]
A loss class commonly used for regression tasks, such as learning region proposals in object detection.
SmoothL1Loss can be regarded as a modified version of L1Loss, or as a combination of L1Loss and L2Loss. L1Loss computes the element-wise absolute difference between two input Tensors, while L2Loss computes the element-wise squared difference. L2Loss often leads to faster convergence, but it is less robust to outliers.
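To make the robustness difference concrete, here is a minimal sketch (not part of the original docs) comparing the three losses element-wise on data containing one outlier. It assumes nn.L1Loss and nn.MSELoss with reduction='none' from the same mindspore.nn namespace; expected values are shown as comments:

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> logits = Tensor(np.array([1.0, 2.0, 10.0]), mindspore.float32)
>>> labels = Tensor(np.array([1.0, 2.0, 2.0]), mindspore.float32)
>>> print(nn.L1Loss(reduction='none')(logits, labels))   # [0. 0. 8.]
>>> print(nn.MSELoss(reduction='none')(logits, labels))  # [0. 0. 64.] - the outlier dominates quadratically
>>> print(nn.SmoothL1Loss()(logits, labels))             # [0. 0. 7.5] - linear beyond beta, as with L1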
Given two input \(x,\ y\) of length \(N\), the unreduced SmoothL1Loss can be described as follows:
\[\begin{split}L_{i} = \begin{cases} \frac{0.5 (x_i - y_i)^{2}}{\text{beta}}, & \text{if } |x_i - y_i| < \text{beta} \\ |x_i - y_i| - 0.5\ \text{beta}, & \text{otherwise.} \end{cases}\end{split}\]

Here \(\text{beta}\) controls the point where the loss function changes from quadratic to linear. Its default value is 1.0. \(N\) is the batch size. This function returns an unreduced loss Tensor.
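As a quick check of the formula (a plain NumPy sketch, not the MindSpore implementation), the two branches meet continuously at \(|x_i - y_i| = \text{beta}\), where both evaluate to \(0.5\ \text{beta}\):

>>> import numpy as np
>>> def smooth_l1(x, y, beta=1.0):
...     # Quadratic where |x - y| < beta, linear (offset by 0.5 * beta) elsewhere.
...     diff = np.abs(x - y)
...     return np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
>>> smooth_l1(np.array([0.2, 0.8, 3.0]), np.zeros(3))  # quadratic, quadratic, linear
array([0.02, 0.32, 2.5 ])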
- Parameters
beta (float) – A parameter used to control the point where the function will change from quadratic to linear. Default: 1.0.
- Inputs:
logits (Tensor) - Tensor of shape \((x_1, x_2, ..., x_R)\). Data type must be float16 or float32.
labels (Tensor) - Ground truth data, with the same type and shape as logits.
- Outputs:
Tensor, the unreduced loss, with the same shape and data type as logits.
- Raises
TypeError – If beta is not a float.
TypeError – If dtype of logits or labels is neither float16 nor float32.
ValueError – If beta is less than or equal to 0.
ValueError – If shape of logits is not the same as labels.
- Supported Platforms:
Ascend GPU CPU
Examples
>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, nn
>>> loss = nn.SmoothL1Loss()
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
[0. 0. 0.5]
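As a further illustration (not from the original example), a smaller beta moves the quadratic-to-linear transition closer to zero, so the same error of 1.0 now falls in the linear regime and incurs \(|x_i - y_i| - 0.5\ \text{beta} = 0.75\) instead of 0.5:

>>> loss_beta = nn.SmoothL1Loss(beta=0.5)
>>> print(loss_beta(logits, labels))  # expected values: [0., 0., 0.75]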