mindspore.nn.BCELoss
- class mindspore.nn.BCELoss(weight=None, reduction='mean')[source]
BCELoss creates a criterion to measure the binary cross entropy between the true labels and predicted labels.
Set the predicted labels as $x$, the true labels as $y$, and the output loss as $\ell(x, y)$. The formula is as follows:

$$L = \{l_1, \dots, l_N\}^\top, \quad l_n = -w_n \left[ y_n \cdot \log x_n + (1 - y_n) \cdot \log(1 - x_n) \right]$$

where $N$ is the batch size. Then,

$$\ell(x, y) = \begin{cases} \operatorname{mean}(L), & \text{if reduction} = \text{'mean'}; \\ \operatorname{sum}(L), & \text{if reduction} = \text{'sum'}. \end{cases}$$
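As a quick sanity check on the formula, the per-element loss and both reductions can be reproduced with plain NumPy; the values below are purely illustrative and not part of this API:

>>> import numpy as np
>>> x = np.array([0.1, 0.8])   # predicted labels (sigmoid outputs)
>>> y = np.array([0.0, 1.0])   # true labels
>>> w = np.array([1.0, 1.0])   # rescaling weight (all ones when weight is None)
>>> l = -w * (y * np.log(x) + (1 - y) * np.log(1 - x))  # unreduced loss L, i.e. reduction='none'
>>> loss_mean = l.mean()       # reduction='mean'
>>> loss_sum = l.sum()         # reduction='sum'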
Note
The predicted labels should always be the output of a sigmoid. Because this is a two-class classification, the true labels should be numbers between 0 and 1. If a predicted label is exactly 0 or 1, one of the log terms in the loss equation above is mathematically undefined.
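Because the loss expects probabilities rather than raw scores, a sigmoid is typically applied to the network output first. A minimal sketch (the raw-score values are made up for illustration):

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import numpy as np
>>> raw_scores = ms.Tensor(np.array([[1.2, -0.3], [0.4, 2.1]]), ms.float32)  # unbounded network outputs
>>> probs = nn.Sigmoid()(raw_scores)  # squash into (0, 1) so the log terms stay defined
>>> labels = ms.Tensor(np.array([[1, 0], [0, 1]]), ms.float32)
>>> output = nn.BCELoss(reduction='mean')(probs, labels)

When the model produces raw logits, mindspore.nn.BCEWithLogitsLoss fuses the sigmoid and the loss in a single, more numerically stable step.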
- Parameters
  - weight (Tensor, optional) – A rescaling weight applied to the loss of each batch element. It must have the same shape and data type as the inputs. Default: None.
  - reduction (str, optional) – Apply specific reduction method to the output: 'none', 'mean', 'sum'. Default: 'mean'. Each mode is illustrated in the sketch after this list.
    - 'none': no reduction will be applied.
    - 'mean': compute and return the weighted mean of elements in the output.
    - 'sum': the output elements will be summed.
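The sketch below (with arbitrary values) illustrates the three reduction modes: 'none' preserves the input shape, while 'mean' and 'sum' each return a scalar:

>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import numpy as np
>>> logits = ms.Tensor(np.array([[0.2, 0.8], [0.6, 0.4]]), ms.float32)
>>> labels = ms.Tensor(np.array([[0, 1], [1, 0]]), ms.float32)
>>> for mode in ('none', 'mean', 'sum'):
...     out = nn.BCELoss(reduction=mode)(logits, labels)
...     print(mode, out.shape)  # 'none' -> (2, 2); 'mean' and 'sum' -> () scalar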
- Inputs:
  - logits (Tensor) - The input tensor with shape (N, *), where * means any number of additional dimensions. The data type must be float16 or float32.
  - labels (Tensor) - The label tensor with shape (N, *), where * means any number of additional dimensions. It has the same shape and data type as logits.
- Outputs:
  Tensor with the same dtype as logits. If reduction is 'none', it has the same shape as logits. Otherwise, it is a scalar Tensor.
- Raises
  - TypeError – If the dtype of logits, labels or weight (if given) is neither float16 nor float32.
  - ValueError – If reduction is not one of 'none', 'mean', 'sum'.
  - ValueError – If the shape of logits is not the same as that of labels or weight (if given).
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import numpy as np
>>> weight = ms.Tensor(np.array([[1.0, 2.0, 3.0], [4.0, 3.3, 2.2]]), ms.float32)
>>> loss = nn.BCELoss(weight=weight, reduction='mean')
>>> logits = ms.Tensor(np.array([[0.1, 0.2, 0.3], [0.5, 0.7, 0.9]]), ms.float32)
>>> labels = ms.Tensor(np.array([[0, 1, 0], [0, 0, 1]]), ms.float32)
>>> output = loss(logits, labels)
>>> print(output)
1.8952923
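The printed value can be checked by hand against the formula with plain NumPy, using the same arrays as the example above:

>>> import numpy as np
>>> x = np.array([[0.1, 0.2, 0.3], [0.5, 0.7, 0.9]])
>>> y = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
>>> w = np.array([[1.0, 2.0, 3.0], [4.0, 3.3, 2.2]])
>>> l = -w * (y * np.log(x) + (1 - y) * np.log(1 - x))  # per-element weighted loss
>>> ref = l.mean()  # ~1.8952922 in float64, matching the float32 result above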