mindspore.mint.nn.BCEWithLogitsLoss
- class mindspore.mint.nn.BCEWithLogitsLoss(weight=None, reduction='mean', pos_weight=None)
Applies the sigmoid activation function to input (treated as logits), and computes the binary cross entropy between the resulting probabilities and the target.
Sets input input as X, input target as Y, output as L. Then,

p_{ij} = \text{sigmoid}(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}}

L_{ij} = -\left[ Y_{ij} \cdot \log(p_{ij}) + (1 - Y_{ij}) \cdot \log(1 - p_{ij}) \right]

Then,

\ell(x, y) = \begin{cases}
L, & \text{if reduction} = \text{'none'};\\
\operatorname{mean}(L), & \text{if reduction} = \text{'mean'};\\
\operatorname{sum}(L), & \text{if reduction} = \text{'sum'}.
\end{cases}
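The following is a minimal NumPy sketch of these formulas (sigmoid, the element-wise loss, and the 'mean' reduction). It only illustrates the math above and is not the operator's implementation; the tensor values reuse those from the Examples section below.

>>> import numpy as np
>>> x = np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]], dtype=np.float32)
>>> y = np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]], dtype=np.float32)
>>> p = 1.0 / (1.0 + np.exp(-x))                        # p_ij = sigmoid(X_ij)
>>> L = -(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))  # element-wise L_ij
>>> print(L.mean())  # reduction='mean'; approximately 0.3463612, matching the Examples section below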
- Parameters
weight (Tensor, optional) – A rescaling weight applied to the loss of each batch element. If not None, it can be broadcast to a tensor with the shape of target; the data type must be float16, float32 or bfloat16 (only Atlas A2 series products are supported). Default: None.

reduction (str, optional) – Apply specific reduction method to the output: 'none', 'mean', 'sum'. Default: 'mean'.

    'none': no reduction will be applied.

    'mean': compute and return the weighted mean of elements in the output.

    'sum': the output elements will be summed.

pos_weight (Tensor, optional) – A weight of positive examples. Must be a vector with length equal to the number of classes. If not None, it must be broadcast to a tensor with the shape of input; the data type must be float16, float32 or bfloat16 (only Atlas A2 series products are supported). Default: None.
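For illustration only, a minimal sketch of constructing the loss with the optional weight and pos_weight arguments; the values below are hypothetical and mainly show the expected shapes (weight broadcastable to the shape of target, pos_weight a vector with one entry per class, i.e. the last dimension of input).

>>> import mindspore as ms
>>> from mindspore import mint
>>> import numpy as np
>>> weight = ms.Tensor(np.array([1.0, 1.0, 2.0]).astype(np.float32))      # broadcasts to an (N, 3) target
>>> pos_weight = ms.Tensor(np.array([1.0, 2.0, 3.0]).astype(np.float32))  # one entry per class
>>> loss = mint.nn.BCEWithLogitsLoss(weight=weight, reduction='mean', pos_weight=pos_weight)
>>> # the module is then called as loss(input, target), as in the Examples section below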
- Inputs:
input (Tensor) - Input input with shape (N, *), where * means any number of additional dimensions. The data type must be float16, float32 or bfloat16 (only Atlas A2 series products are supported).

target (Tensor) - Ground truth label with shape (N, *), where * means any number of additional dimensions. The same shape and data type as input.
- Outputs:
Tensor or Scalar. If reduction is 'none', its shape is the same as input. Otherwise, a scalar value will be returned.
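As a sketch of this shape behavior (reusing the tensors from the Examples section below): reduction='none' keeps the element-wise shape, while 'mean' and 'sum' return a scalar (0-dimensional) Tensor.

>>> import mindspore as ms
>>> from mindspore import mint
>>> import numpy as np
>>> input = ms.Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]).astype(np.float32))
>>> target = ms.Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]).astype(np.float32))
>>> print(mint.nn.BCEWithLogitsLoss(reduction='none')(input, target).shape)  # (2, 3), same as input
>>> print(mint.nn.BCEWithLogitsLoss(reduction='sum')(input, target).shape)   # (), a scalar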
- Raises
TypeError – If input or target is not a Tensor.

TypeError – If weight or pos_weight is a Parameter.

TypeError – If the data type of reduction is not str.

ValueError – If weight or pos_weight cannot be broadcast to a tensor with the shape of input.

ValueError – If reduction is not one of 'none', 'mean', 'sum'.
- Supported Platforms:
Ascend
- Examples
>>> import mindspore as ms
>>> from mindspore import mint
>>> import numpy as np
>>> input = ms.Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]).astype(np.float32))
>>> target = ms.Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]).astype(np.float32))
>>> loss = mint.nn.BCEWithLogitsLoss()
>>> output = loss(input, target)
>>> print(output)
0.3463612