mindspore.ops.SigmoidCrossEntropyWithLogits

class mindspore.ops.SigmoidCrossEntropyWithLogits

Computes the sigmoid cross entropy between the given logits and the label.

Measures the distribution error, using cross entropy, in discrete classification tasks where each class is independent and the classes are not mutually exclusive.

Let the input logits be \(X\), the input label be \(Y\), and the output be \(loss\). Then,

\[\begin{split}\begin{array}{ll} \\
    p_{ij} = sigmoid(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}} \\
    loss_{ij} = -[Y_{ij} \cdot \ln(p_{ij}) + (1 - Y_{ij}) \cdot \ln(1 - p_{ij})]
\end{array}\end{split}\]
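For a quick sanity check of the formula, take an entry with \(X_{ij} = 0\) and \(Y_{ij} = 0.5\) (values chosen purely for illustration). Then \(p_{ij} = sigmoid(0) = 0.5\), and

\[loss_{ij} = -[0.5 \cdot \ln(0.5) + 0.5 \cdot \ln(0.5)] = \ln 2 \approx 0.6931\]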
Inputs:
  • logits (Tensor) - Input logits. Tensor of shape \((N, *)\) where \(*\) means any number of additional dimensions.

  • label (Tensor) - Ground truth label, with the same shape and data type as logits.

Outputs:

Tensor, with the same shape and data type as the input logits.

Raises:

TypeError – If logits or label is not a Tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore.ops as ops
>>> from mindspore import Tensor
>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]).astype(np.float32))
>>> labels = Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]).astype(np.float32))
>>> sigmoid = ops.SigmoidCrossEntropyWithLogits()
>>> output = sigmoid(logits, labels)
>>> print(output)
[[ 0.6111007   0.5032824   0.26318604]
 [ 0.58439666  0.5530153  -0.4368139 ]]
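
The printed values can be cross-checked against the element-wise formula above with plain NumPy. This is a minimal verification sketch that reuses logits, labels, and output from the example session; the comparison tolerance is an assumption, not part of the operator's contract.

>>> # Recompute the loss from the documented formula:
>>> # p = sigmoid(X), loss = -[Y * ln(p) + (1 - Y) * ln(1 - p)]
>>> x = logits.asnumpy()
>>> y = labels.asnumpy()
>>> p = 1.0 / (1.0 + np.exp(-x))
>>> expected = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))
>>> print(np.allclose(expected, output.asnumpy(), atol=1e-6))
True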