mindspore.ops.smooth_l1_loss

mindspore.ops.smooth_l1_loss(logits, labels, beta=1.0, reduction='none')[source]

Computes smooth L1 loss, a robust L1 loss.

SmoothL1Loss is a loss function similar to MSELoss but less sensitive to outliers, as described in Fast R-CNN by Ross Girshick.

Given two input \(x,\ y\) of length \(N\), the unreduced SmoothL1Loss can be described as follows:

\[\begin{split}L_{i} = \begin{cases} \frac{0.5 (x_i - y_i)^{2}}{\text{beta}}, & \text{if } |x_i - y_i| < \text{beta} \\ |x_i - y_i| - 0.5 \text{beta}, & \text{otherwise. } \end{cases}\end{split}\]

If reduction is not 'none', then:

\[\begin{split}L = \begin{cases} \operatorname{mean}(L_{i}), & \text{if reduction} = \text{'mean';}\\ \operatorname{sum}(L_{i}), & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

Here \(\text{beta}\) controls the point where the loss function changes from quadratic to linear. Its default value is 1.0. \(N\) is the batch size.
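For intuition, the piecewise definition above can be sketched in plain NumPy. This is a reference sketch of the formula, not the MindSpore kernel itself, and the helper name `smooth_l1_ref` is ours:

```python
import numpy as np

def smooth_l1_ref(x, y, beta=1.0):
    """Elementwise smooth L1 loss: quadratic where |x - y| < beta, linear elsewhere."""
    diff = np.abs(x - y)
    return np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)

# With beta = 1.0: |3 - 2| = 1 is not < beta, so the linear branch gives 1 - 0.5 = 0.5.
print(smooth_l1_ref(np.array([1.0, 2.0, 3.0]), np.array([1.0, 2.0, 2.0])))
```

Smaller beta values shrink the quadratic region, making the loss behave more like plain L1; larger values widen it toward MSE-like behavior near zero error.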

Note

On the Ascend platform, the float64 data type for logits is not supported yet.

Parameters
  • logits (Tensor) – Tensor of shape \((N, *)\), where \(*\) means any number of additional dimensions.

  • labels (Tensor) – Ground truth data, a tensor of shape \((N, *)\) with the same shape and dtype as logits.

  • beta (float) – A parameter used to control the point where the function will change from quadratic to linear. Default: 1.0.

  • reduction (str) – Apply specific reduction method to the output: ‘none’, ‘mean’ or ‘sum’. Default: ‘none’.

Returns

Tensor. If reduction is 'none', the output has the same shape as logits; otherwise the output is a tensor of shape \((1,)\).
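The effect of the reduction modes can be mirrored in a small NumPy sketch (our own helper, `smooth_l1_reduced`, not the MindSpore implementation):

```python
import numpy as np

def smooth_l1_reduced(x, y, beta=1.0, reduction='none'):
    """Smooth L1 loss with the same reduction semantics described above."""
    diff = np.abs(x - y)
    loss = np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
    if reduction == 'mean':
        return loss.mean()   # average over all elements
    if reduction == 'sum':
        return loss.sum()    # total over all elements
    return loss              # 'none': unreduced, same shape as x

x = np.array([1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 2.0])
print(smooth_l1_reduced(x, y, reduction='none'))
print(smooth_l1_reduced(x, y, reduction='sum'))
```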

Raises
  • TypeError – If beta is not a float.

  • ValueError – If reduction is not one of ‘none’, ‘mean’, ‘sum’.

  • TypeError – If dtype of logits or labels is neither float16 nor float32.

  • ValueError – If beta is less than or equal to 0.

  • ValueError – If shape of logits is not the same as labels.

  • TypeError – If dtype of logits is float64 on the Ascend platform.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> logits = Tensor(np.array([1, 2, 3]), mindspore.float32)
>>> labels = Tensor(np.array([1, 2, 2]), mindspore.float32)
>>> output = ops.smooth_l1_loss(logits, labels)
>>> print(output)
[0.  0.  0.5]