mindspore.nn.NLLLoss
- class mindspore.nn.NLLLoss(weight=None, ignore_index=-100, reduction='mean')[source]
Gets the negative log likelihood loss between logits and labels.
The NLL loss with reduction='none' can be described as:

$$\ell(x, t) = L = \{l_1, \dots, l_N\}^\top, \quad l_n = -w_{t_n} x_{n, t_n}, \quad w_c = \text{weight}[c] \cdot \mathbb{1}\{c \neq \text{ignore\_index}\}$$

where $x$ is the logits, $t$ is the labels, $w$ is the weight, $N$ is the batch size, and $c$ belonging to $[0, C-1]$ is the class index, where $C$ is the number of classes.

If reduction is not 'none' (default 'mean'), then

$$\ell(x, t) = \begin{cases} \displaystyle\sum_{n=1}^{N} \frac{1}{\sum_{n=1}^{N} w_{t_n}} l_n, & \text{if reduction} = \text{'mean'};\\[2ex] \displaystyle\sum_{n=1}^{N} l_n, & \text{if reduction} = \text{'sum'}.\end{cases}$$
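The formulas above can be sketched as a plain NumPy reference (a minimal illustration of the math, not MindSpore's implementation; the function name is hypothetical):

```python
import numpy as np

def nll_loss_ref(logits, labels, weight=None, ignore_index=-100, reduction='mean'):
    """Reference NLL loss over (N, C) log-probabilities (illustrative only)."""
    n, c = logits.shape
    w = np.ones(c, dtype=logits.dtype) if weight is None else np.asarray(weight)
    mask = labels != ignore_index                 # ignored targets get zero weight
    safe = np.where(mask, labels, 0)              # avoid indexing with -100
    wt = np.where(mask, w[safe], 0.0)             # w_{t_n}, zeroed for ignore_index
    ln = -wt * logits[np.arange(n), safe]         # l_n = -w_{t_n} * x_{n, t_n}
    if reduction == 'none':
        return ln
    if reduction == 'sum':
        return ln.sum()
    return ln.sum() / wt.sum()                    # 'mean': divide by the weight sum
```

Note that the 'mean' reduction divides by the sum of the per-sample weights (not by $N$), which is why ignored and down-weighted targets shrink the denominator as well as the numerator.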
- Parameters
weight (Tensor) – The rescaling weight applied to each class. If the value is not None, the shape is $(C,)$. The data type only supports float32 or float16. Default: None.
ignore_index (int) – Specifies a target value that is ignored (typically a padding value) and does not contribute to the gradient. Default: -100.
reduction (str) – Apply a specific reduction method to the output: 'none', 'mean', or 'sum'. Default: 'mean'.
- Inputs:
logits (Tensor) - Tensor of shape $(N, C)$ or $(N, C, d_1, d_2, ..., d_K)$ for $K$-dimensional data, where $C$ is the number of classes. Data type must be float16 or float32. logits is expected to contain log-probabilities.
labels (Tensor) - Tensor of shape $(N,)$ or $(N, d_1, d_2, ..., d_K)$ for $K$-dimensional data. Data type must be int32.
- Returns
Tensor, the computed negative log likelihood loss value.
- Raises
TypeError – If weight is not a Tensor.
TypeError – If ignore_index is not an int.
TypeError – If the data type of weight is not float16 or float32.
ValueError – If reduction is not one of 'none', 'mean', 'sum'.
TypeError – If logits is not a Tensor.
TypeError – If labels is not a Tensor.
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import numpy as np
>>> import mindspore
>>> import mindspore.nn as nn
>>> logits = mindspore.Tensor(np.random.randn(3, 5), mindspore.float32)
>>> labels = mindspore.Tensor(np.array([1, 0, 4]), mindspore.int32)
>>> loss = nn.NLLLoss()
>>> output = loss(logits, labels)
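Because logits must already be log-probabilities, raw network scores are typically passed through a log-softmax first. A NumPy sketch of that preprocessing step (illustrative only; in a MindSpore network a log-softmax layer would fill this role):

```python
import numpy as np

def log_softmax(x, axis=-1):
    # numerically stable log-softmax: shift by the row max, then subtract log-sum-exp
    x = x - x.max(axis=axis, keepdims=True)
    return x - np.log(np.exp(x).sum(axis=axis, keepdims=True))

raw_scores = np.random.randn(3, 5).astype(np.float32)  # raw network outputs
log_probs = log_softmax(raw_scores)                    # valid log-probability input for NLLLoss
```

Each row of `exp(log_probs)` sums to 1, so `log_probs` satisfies the log-probability requirement stated in Inputs.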