mindspore.ops.SoftmaxCrossEntropyWithLogits
- class mindspore.ops.SoftmaxCrossEntropyWithLogits[source]
Gets the softmax cross-entropy value between logits and labels with one-hot encoding.
The formulas of the SoftmaxCrossEntropyWithLogits algorithm are as follows:
\[\begin{split}\begin{array}{ll} \\ p_{ij} = softmax(X_{ij}) = \frac{\exp(X_{ij})}{\sum_{k = 0}^{C-1}\exp(X_{ik})} \\ loss_{i} = -\sum_j{Y_{ij} * \ln(p_{ij})} \end{array}\end{split}\]
where \(X\) represents logits, \(Y\) represents labels, and \(loss\) represents the output.
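The formulas can be checked directly in plain NumPy. The snippet below is only an illustration of the math, not the operator itself; the sample row reuses the first logits/labels pair from the Examples section.
>>> import numpy as np
>>> X = np.array([[2., 4., 1., 4., 5.]])   # logits X, shape (N, C)
>>> Y = np.array([[0., 0., 0., 0., 1.]])   # one-hot labels Y, shape (N, C)
>>> p = np.exp(X) / np.exp(X).sum(axis=1, keepdims=True)   # p_ij = softmax(X_ij)
>>> loss = -(Y * np.log(p)).sum(axis=1)    # loss_i, shape (N,); ~[0.5899] for this row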
- Inputs:
logits (Tensor) - Input logits, with shape \((N, C)\). Data type must be float16 or float32.
labels (Tensor) - Ground truth labels, with shape \((N, C)\) and the same data type as logits.
- Outputs:
Tuple of 2 tensors (loss, dlogits). The loss has shape \((N,)\), and dlogits has the same shape as logits.
- Raises:
TypeError – If dtype of logits or labels is neither float16 nor float32.
TypeError – If logits or labels is not a Tensor.
ValueError – If the shape of logits is not the same as that of labels.
- Supported Platforms:
Ascend GPU CPU
Examples
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> logits = Tensor([[2, 4, 1, 4, 5], [2, 1, 2, 4, 3]], mindspore.float32)
>>> labels = Tensor([[0, 0, 0, 0, 1], [0, 0, 0, 1, 0]], mindspore.float32)
>>> softmax_cross = ops.SoftmaxCrossEntropyWithLogits()
>>> loss, dlogits = softmax_cross(logits, labels)
>>> print(loss)
[0.5899297 0.52374405]
>>> print(dlogits)
[[ 0.02760027  0.20393994  0.01015357  0.20393994 -0.44563377]
 [ 0.08015892  0.02948882  0.08015892 -0.4077012   0.21789455]]
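As a cross-check (an illustrative sketch, not part of the operator's interface), the returned dlogits is consistent with the gradient of the loss with respect to logits, i.e. softmax(logits) - labels:
>>> import numpy as np
>>> x, y = logits.asnumpy(), labels.asnumpy()
>>> p = np.exp(x) / np.exp(x).sum(axis=1, keepdims=True)   # softmax over the class axis
>>> np.allclose(p - y, dlogits.asnumpy(), atol=1e-6)       # expected: True (up to float32 rounding)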