mindspore.nn.BCEWithLogitsLoss
- class mindspore.nn.BCEWithLogitsLoss(reduction="mean", weight=None, pos_weight=None)[source]
Applies a sigmoid activation function to the input logits and computes the binary cross entropy between the logits and the labels.
Sets input logits as $X$, input labels as $Y$, output as $L$. Then,

$$p_{ij} = \mathrm{sigmoid}(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}}, \qquad L_{ij} = -\left[ Y_{ij} \cdot \log(p_{ij}) + (1 - Y_{ij}) \cdot \log(1 - p_{ij}) \right]$$

Then,

$$\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none';} \\ \operatorname{mean}(L), & \text{if reduction} = \text{'mean';} \\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'.} \end{cases}$$
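As a cross-check of the formulas above, here is a minimal NumPy sketch of the unweighted, 'mean'-reduced case. It is not the MindSpore implementation; the input values simply reuse those from the Examples section.

import numpy as np

def bce_with_logits(logits, labels):
    # p = sigmoid(X)
    p = 1.0 / (1.0 + np.exp(-logits))
    # elementwise binary cross entropy L_ij
    loss = -(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p))
    # reduction='mean'
    return loss.mean()

logits = np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]], dtype=np.float32)
labels = np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]], dtype=np.float32)
print(bce_with_logits(logits, labels))  # ~0.3463612, matching the example below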
- Parameters
reduction (str) – Type of reduction to be applied to loss. The optional values are 'mean', 'sum', and 'none'. If 'none', do not perform reduction. Default: 'mean'.
weight (Tensor, optional) – A rescaling weight applied to the loss of each batch element. If not None, it must be broadcastable to a tensor with the shape of logits; the data type must be float16 or float32. Default: None.
pos_weight (Tensor, optional) – A weight of positive examples. Must be a vector with length equal to the number of classes. If not None, it must be broadcastable to a tensor with the shape of logits; the data type must be float16 or float32. Default: None. (See the usage sketch after this parameter list.)
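A usage sketch for weight and pos_weight; the shapes and values below are illustrative assumptions, not taken from the MindSpore documentation. For logits of shape (N, C), pos_weight holds one weight per class, while weight only needs to be broadcastable to the shape of logits.

import numpy as np
import mindspore.nn as nn
from mindspore import Tensor

# Illustrative shapes: N = 4 samples, C = 3 classes.
logits = Tensor(np.random.randn(4, 3).astype(np.float32))
labels = Tensor(np.random.randint(0, 2, size=(4, 3)).astype(np.float32))

pos_weight = Tensor(np.array([1.0, 2.0, 5.0], dtype=np.float32))  # one entry per class
weight = Tensor(np.full((1, 3), 0.5, dtype=np.float32))           # broadcast over the batch

loss = nn.BCEWithLogitsLoss(reduction='mean', weight=weight, pos_weight=pos_weight)
print(loss(logits, labels))  # scalar, because reduction='mean'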
- Inputs:
logits (Tensor) - Input logits with shape (N, *) where * means any number of additional dimensions. The data type must be float16 or float32.
labels (Tensor) - Ground truth label with shape (N, *), same shape and dtype as logits.
- Outputs:
Tensor or Scalar. If reduction is 'none', its shape is the same as logits. Otherwise, a scalar value will be returned.
- Raises
TypeError – If data type of logits or labels is neither float16 nor float32.
TypeError – If weight or pos_weight is a parameter.
TypeError – If data type of weight or pos_weight is neither float16 nor float32.
ValueError – If weight or pos_weight cannot be broadcast to a tensor with the shape of logits.
ValueError – If reduction is not one of 'none', 'mean', 'sum'.
- Supported Platforms:
Ascend GPU
Examples
>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import Tensor
>>> logits = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]).astype(np.float32))
>>> labels = Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]).astype(np.float32))
>>> loss = nn.BCEWithLogitsLoss()
>>> output = loss(logits, labels)
>>> print(output)
0.3463612
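As a further sketch (not from the official examples), reduction='none' keeps the per-element losses, so the output has the same shape as logits; this reuses the tensors defined above:

>>> loss_none = nn.BCEWithLogitsLoss(reduction='none')
>>> print(loss_none(logits, labels).shape)
(2, 3)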