mindspore.ops.nll_loss
- mindspore.ops.nll_loss(inputs, target, weight=None, ignore_index=-100, reduction='mean', label_smoothing=0.0)[source]
Gets the negative log likelihood loss between inputs and target.
The nll loss with reduction=none can be described as:

$$
\ell(x, t) = L = \{l_1, \dots, l_N\}^\top, \quad
l_n = -w_{t_n} x_{n, t_n}, \quad
w_c = \text{weight}[c] \cdot \mathbb{1}\{c \neq \text{ignore\_index}\}
$$

where $x$ is the inputs, $t$ is the target, $w$ is the weight, $N$ is the batch size, $c$ belonging to $[0, C-1]$ is the class index, and $C$ is the number of classes.

If reduction is not 'none' (default 'mean'), then

$$
\ell(x, t) =
\begin{cases}
\sum_{n=1}^{N} \dfrac{1}{\sum_{n=1}^{N} w_{t_n}} l_n, & \text{if reduction} = \text{'mean'}, \\
\sum_{n=1}^{N} l_n, & \text{if reduction} = \text{'sum'}.
\end{cases}
$$

- Parameters
inputs (Tensor) – $(N, C)$ where C = number of classes, or $(N, C, H, W)$ in case of 2D loss, or $(N, C, d_1, d_2, ..., d_K)$ for K-dimensional loss. inputs is expected to be log-probabilities; the data type must be float16 or float32.

target (Tensor) – $(N,)$ or $(N, d_1, d_2, ..., d_K)$ for high-dimensional loss; the data type must be int32.

weight (Tensor) – A rescaling weight applied to the loss of each batch element. If not None, the shape is $(C,)$. The data type must be float16 or float32. Default: None.

ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient. Default: -100.

reduction (str, optional) – Apply a specific reduction method to the output: 'none', 'mean', 'sum'. Default: 'mean'.

  - 'none': no reduction will be applied.
  - 'mean': compute and return the weighted mean of the elements in the output.
  - 'sum': the output elements will be summed.

label_smoothing (float) – Label smoothing value, a regularization tool used to prevent the model from overfitting when calculating the loss. The value range is [0.0, 1.0]. Default: 0.0.
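As a concrete illustration of how weight interacts with the 'mean' reduction: the loss is divided by the sum of the selected per-class weights, not by the batch size N. The numbers below are made up, and plain NumPy stands in for MindSpore tensors:

```python
import numpy as np

# Two samples, three classes; inputs are log-probabilities (made-up numbers).
logp = np.log(np.array([[0.7, 0.2, 0.1],
                        [0.1, 0.8, 0.1]]))
target = np.array([0, 1])
weight = np.array([2.0, 1.0, 1.0])   # class 0 counts double

# l_n = -w_{t_n} * x_{n, t_n}
wt = weight[target]                          # [2.0, 1.0]
ln = -wt * logp[np.arange(2), target]

# 'mean' reduction divides by sum(w_{t_n}) = 3.0, not by N = 2
mean_loss = ln.sum() / wt.sum()
```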
- Returns
Tensor, the computed loss value.
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> inputs = mindspore.Tensor(np.random.randn(3, 5), mindspore.float32)
>>> target = mindspore.Tensor(np.array([1, 0, 4]), mindspore.int32)
>>> output = ops.nll_loss(inputs, target)
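The formulas above can also be checked against a plain-NumPy sketch of the 2-D $(N, C)$ case. This is an illustrative reference only, not the MindSpore kernel, and the helper name `nll_loss_ref` is made up here (label_smoothing is omitted for brevity):

```python
import numpy as np

def nll_loss_ref(inputs, target, weight=None, ignore_index=-100, reduction="mean"):
    """Reference NLL loss for a 2-D (N, C) array of log-probabilities."""
    n, c = inputs.shape
    w = np.ones(c, dtype=inputs.dtype) if weight is None else weight
    # Replace ignored targets with a valid index for gathering; their
    # contribution is zeroed out via w_c = weight[c] * 1{c != ignore_index}.
    safe_t = np.where(target == ignore_index, 0, target)
    wt = np.where(target == ignore_index, 0.0, w[safe_t])
    # l_n = -w_{t_n} * x_{n, t_n}
    ln = -wt * inputs[np.arange(n), safe_t]
    if reduction == "none":
        return ln
    if reduction == "sum":
        return ln.sum()
    # 'mean': weighted mean, sum(l_n) / sum(w_{t_n})
    return ln.sum() / wt.sum()
```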