mindspore.mint.nn.CrossEntropyLoss

class mindspore.mint.nn.CrossEntropyLoss(weight=None, ignore_index=-100, reduction='mean', label_smoothing=0.0)[source]

The cross entropy loss between input and target.

The cross entropy loss supports two kinds of targets:

  • Class indices (int) in the range \([0, C)\), where \(C\) is the number of classes. With reduction='none', the loss can be described as:

    \[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} \log \frac{\exp(x_{n,y_n})}{\sum_{c=1}^C \exp(x_{n,c})} \cdot \mathbb{1}\{y_n \not= \text{ignore_index}\}\]

    where \(x\) is the input, \(y\) is the target, \(w\) is the weight, \(N\) is the batch size, and \(c \in [0, C-1]\) is the class index, with \(C\) the number of classes.

    If reduction is not 'none' (default 'mean'), then

    \[\begin{split}\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n} \cdot \mathbb{1}\{y_n \not= \text{ignore_index}\}} l_n, & \text{if reduction} = \text{'mean',}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
  • Probabilities (float) for each class, useful when a label beyond a single class per minibatch item is required. With reduction='none', the loss can be described as:

    \[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \sum_{c=1}^C w_c \log \frac{\exp(x_{n,c})}{\sum_{i=1}^C \exp(x_{n,i})} y_{n,c}\]

    where \(x\) is the input, \(y\) is the target, \(w\) is the weight, \(N\) is the batch size, and \(c \in [0, C-1]\) is the class index, with \(C\) the number of classes.

    If reduction is not 'none' (default 'mean'), then

    \[\begin{split}\ell(x, y) = \begin{cases} \frac{\sum_{n=1}^N l_n}{N}, & \text{if reduction} = \text{'mean',}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
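
The class-indices formula can be checked against a small NumPy reference implementation. The following is a minimal sketch (the function and variable names are illustrative, not part of the API); it mirrors the per-sample term \(l_n\) and the weighted-mean normalization used when reduction='mean':

>>> import numpy as np
>>> def ce_indices_reference(x, y, w, ignore_index=-100):
...     x = x - x.max(axis=1, keepdims=True)  # numerically stable log-softmax
...     log_p = x - np.log(np.exp(x).sum(axis=1, keepdims=True))
...     mask = y != ignore_index
...     y_safe = np.where(mask, y, 0)  # avoid indexing with ignore_index
...     l_n = -w[y_safe] * log_p[np.arange(len(y)), y_safe] * mask
...     # reduction='mean': divide by the total weight of non-ignored samples
...     return l_n.sum() / (w[y_safe] * mask).sum()
>>> x = np.random.randn(4, 5).astype(np.float32)
>>> y = np.array([1, 0, -100, 4])
>>> ref = ce_indices_reference(x, y, np.ones(5, np.float32))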

Warning

This is an experimental API that is subject to change or deletion.

Note

Dynamic shape, dynamic rank and variable constant input are not supported in strict graph mode (jit_syntax_level=mindspore.STRICT).

Parameters
  • weight (Tensor, optional) – A rescaling weight applied to the loss of each batch element. If not None, the shape is \((C,)\); the data type must be float16, float32 or bfloat16 (bfloat16 is only supported by Atlas A2 training series products). Default: None.

  • ignore_index (int, optional) – Specifies a target value that is ignored and does not contribute to the input gradient. Only valid when the target contains class indices; set it to a negative number when the target contains probabilities (see the sketch after this parameter list). Default: -100.

  • reduction (str, optional) –

    Apply the specified reduction method to the output: 'none', 'mean', 'sum'. Default: 'mean'.

    • 'none': no reduction will be applied.

    • 'mean': compute and return the weighted mean of elements in the output.

    • 'sum': the output elements will be summed.

  • label_smoothing (float, optional) – Label smoothing value, a regularization technique used to prevent the model from overfitting when computing the loss. The value must be in the range [0.0, 1.0]. Default: 0.0.
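
A minimal sketch of how these parameters interact (the values below are illustrative): weight rescales each class, ignore_index drops matching targets from both the loss and the normalization, and reduction='none' exposes the per-sample terms \(l_n\):

>>> import mindspore as ms
>>> import numpy as np
>>> weight = ms.Tensor(np.array([1.0, 2.0, 1.0, 1.0, 0.5]), ms.float32)
>>> loss_fn = ms.mint.nn.CrossEntropyLoss(weight=weight, ignore_index=2,
...                                       reduction='none', label_smoothing=0.1)
>>> logits = ms.Tensor(np.random.randn(4, 5), ms.float32)
>>> labels = ms.Tensor(np.array([0, 1, 2, 4]), ms.int32)
>>> per_sample = loss_fn(logits, labels)  # shape (4,); the ignored sample contributes no loss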

Inputs:
  • input (Tensor) - \((N)\) or \((N, C)\) where \(C\) is the number of classes, \((N, C, H, W)\) in the case of the 2D loss, or \((N, C, d_1, d_2, ..., d_K)\) in general. input is expected to contain unnormalized logits for each class (a log-softmax is applied internally, as in the formulas above); the data type must be float16, float32 or bfloat16 (bfloat16 is only supported by Atlas A2 training series products).

  • target (Tensor) - For class indices: tensor of shape \(()\), \((N)\) or \((N, d_1, d_2, ..., d_K)\); the data type must be int32 or int64. For probabilities: tensor of shape \((N,)\), \((N, C)\) or \((N, C, d_1, d_2, ..., d_K)\); the data type must be float16, float32 or bfloat16 (bfloat16 is only supported by Atlas A2 training series products).
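
For the higher-dimensional case, a minimal sketch (shapes chosen for illustration) pairing an \((N, C, H, W)\) input with an \((N, H, W)\) class-index target:

>>> import mindspore as ms
>>> import numpy as np
>>> # Per-pixel classification over C=5 classes
>>> logits = ms.Tensor(np.random.randn(2, 5, 4, 4), ms.float32)  # (N, C, H, W)
>>> target = ms.Tensor(np.random.randint(0, 5, (2, 4, 4)), ms.int32)  # (N, H, W)
>>> loss = ms.mint.nn.CrossEntropyLoss()(logits, target)  # scalar with reduction='mean'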

Outputs:

Tensor, with the same data type as input.

Supported Platforms:

Ascend

Examples

>>> import mindspore as ms
>>> import numpy as np
>>> # Case 1: Indices labels
>>> inputs = ms.Tensor(np.random.randn(3, 5), ms.float32)
>>> target = ms.Tensor(np.array([1, 0, 4]), ms.int32)
>>> op = ms.mint.nn.CrossEntropyLoss()
>>> output = op(inputs, target)
>>> # Case 2: Probability labels
>>> inputs = ms.Tensor(np.random.randn(3, 5), ms.float32)
>>> target = ms.Tensor(np.random.randn(3, 5), ms.float32)
>>> op = ms.mint.nn.CrossEntropyLoss()
>>> output = op(inputs, target)
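
As a further sketch (not part of the official example set): with the default weight and reduction, class-index targets and their one-hot probability encoding yield the same loss by the formulas above, which makes a quick sanity check:

>>> inputs = ms.Tensor(np.random.randn(3, 5), ms.float32)
>>> idx = np.array([1, 0, 4])
>>> one_hot = ms.Tensor(np.eye(5, dtype=np.float32)[idx])  # one-hot probability targets
>>> loss_fn = ms.mint.nn.CrossEntropyLoss()
>>> loss_idx = loss_fn(inputs, ms.Tensor(idx, ms.int32))
>>> loss_prob = loss_fn(inputs, one_hot)
>>> print(np.allclose(loss_idx.asnumpy(), loss_prob.asnumpy(), atol=1e-5))
True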