mindspore.ops.cross_entropy
- mindspore.ops.cross_entropy(input, target, weight=None, ignore_index=-100, reduction='mean', label_smoothing=0.0)
The cross entropy loss between input and target.
The cross entropy supports two kinds of targets:
Class indices (int) in the range \([0, C)\), where \(C\) is the number of classes. The loss with reduction='none' can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - w_{y_n} \log \frac{\exp(x_{n,y_n})}{\sum_{c=1}^C \exp(x_{n,c})} \cdot \mathbb{1}\{y_n \not= \text{ignore_index}\}\]

where \(x\) is the input, \(y\) is the target, \(w\) is the weight, \(N\) is the batch size, and \(c \in [0, C-1]\) is the class index, with \(C\) the number of classes.
If reduction is not 'none' (default 'mean'), then

\[\begin{split}\ell(x, y) = \begin{cases} \sum_{n=1}^N \frac{1}{\sum_{n=1}^N w_{y_n} \cdot \mathbb{1}\{y_n \not= \text{ignore_index}\}} l_n, & \text{if reduction} = \text{'mean',}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]

Probabilities (float) for each class, useful when labels beyond a single class per minibatch item are required. The loss with reduction='none' can be described as:
\[\ell(x, y) = L = \{l_1,\dots,l_N\}^\top, \quad l_n = - \sum_{c=1}^C w_c \log \frac{\exp(x_{n,c})}{\sum_{i=1}^C \exp(x_{n,i})} y_{n,c}\]

where \(x\) is the input, \(y\) is the target, \(w\) is the weight, \(N\) is the batch size, and \(c \in [0, C-1]\) is the class index, with \(C\) the number of classes.
If reduction is not 'none' (default 'mean'), then

\[\begin{split}\ell(x, y) = \begin{cases} \frac{\sum_{n=1}^N l_n}{N}, & \text{if reduction} = \text{'mean',}\\ \sum_{n=1}^N l_n, & \text{if reduction} = \text{'sum'.} \end{cases}\end{split}\]
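To connect the formulas to code: below is a minimal sketch that recomputes the class-indices case with reduction='none' in NumPy and compares it against mindspore.ops.cross_entropy. The seed and shapes are illustrative only; with unit weights and no ignored targets, the comparison is expected to print True.

>>> import numpy as np
>>> import mindspore as ms
>>> np.random.seed(0)
>>> x = np.random.randn(3, 5).astype(np.float32)  # logits: N=3 samples, C=5 classes
>>> y = np.array([1, 0, 4], dtype=np.int32)       # class-index targets
>>> # l_n = -log(softmax(x_n)[y_n]); subtract the row max to stabilize exp
>>> shifted = x - x.max(axis=1, keepdims=True)
>>> log_softmax = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
>>> manual = -log_softmax[np.arange(3), y]
>>> ms_out = ms.ops.cross_entropy(ms.Tensor(x), ms.Tensor(y), reduction='none')
>>> np.allclose(manual, ms_out.asnumpy(), atol=1e-5)
True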
- Parameters
input (Tensor) – \((N)\) or \((N, C)\), where \(C\) = number of classes, or \((N, C, H, W)\) in the case of 2D loss, or \((N, C, d_1, d_2, ..., d_K)\). input is expected to contain unnormalized logits (the softmax is applied internally, as in the formulas above); data type must be float16 or float32.
target (Tensor) – For class indices, a tensor of shape \(()\), \((N)\) or \((N, d_1, d_2, ..., d_K)\); data type must be int32. For probabilities, a tensor of shape \((C,)\), \((N, C)\) or \((N, C, d_1, d_2, ..., d_K)\); data type must be float16, float32, or float64.
weight (Tensor) – A rescaling weight applied to the loss of each batch element. If not None, the shape is \((C,)\), data type must be float16 or float32. Default: None.
ignore_index (int) – Specifies a target value that is ignored and does not contribute to the input gradient. Default: -100.
reduction (str, optional) – Apply a specific reduction method to the output: 'none', 'mean', 'sum'. Default: 'mean'.
'none': no reduction will be applied.
'mean': compute and return the weighted mean of elements in the output.
'sum': the output elements will be summed.
label_smoothing (float) – Label smoothing value, a regularization tool used to prevent the model from overfitting when calculating the loss. The value range is [0.0, 1.0]. Default: 0.0.
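The parameters above combine as in the following sketch (values are arbitrary; the fourth target equals the default ignore_index, so it contributes neither to the loss nor to the 'mean' normalizer):

>>> import numpy as np
>>> import mindspore as ms
>>> logits = ms.Tensor(np.random.randn(4, 3), ms.float32)
>>> labels = ms.Tensor(np.array([0, 2, 1, -100]), ms.int32)  # -100 is ignored by default
>>> class_weight = ms.Tensor(np.array([1.0, 2.0, 0.5]), ms.float32)
>>> loss = ms.ops.cross_entropy(logits, labels, weight=class_weight, reduction='mean')
>>> loss.shape  # scalar: weighted mean over the three non-ignored items
()
>>> smoothed = ms.ops.cross_entropy(logits, labels, label_smoothing=0.1)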
- Returns
Tensor, the computed loss value.
- Supported Platforms:
Ascend GPU CPU
Examples
>>> import mindspore as ms
>>> import numpy as np
>>> # Case 1: Indices labels
>>> inputs = ms.Tensor(np.random.randn(3, 5), ms.float32)
>>> target = ms.Tensor(np.array([1, 0, 4]), ms.int32)
>>> output = ms.ops.cross_entropy(inputs, target)
>>> # Case 2: Probability labels
>>> inputs = ms.Tensor(np.random.randn(3, 5), ms.float32)
>>> target = ms.Tensor(np.random.randn(3, 5), ms.float32)
>>> output = ms.ops.cross_entropy(inputs, target)
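Note that in Case 2 the rows of target act directly as the per-class coefficients \(y_{n,c}\) in the probability formula; they are not normalized internally. As a supplementary sketch (not part of the official example), targets whose rows sum to 1 give the usual probabilistic reading:

>>> raw = np.random.randn(3, 5)
>>> probs = np.exp(raw) / np.exp(raw).sum(axis=1, keepdims=True)  # softmax rows
>>> target = ms.Tensor(probs.astype(np.float32))
>>> output = ms.ops.cross_entropy(inputs, target)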