Document feedback

Question document fragment

When a document fragment in a question contains a formula, the formula is displayed as a space.

Submission type
issue

Problem type
Specifications and Common Mistakes

Problem description

mindspore.mint.nn.functional.binary_cross_entropy_with_logits

mindspore.mint.nn.functional.binary_cross_entropy_with_logits(input, target, weight=None, reduction='mean', pos_weight=None)

Applies the sigmoid activation function to input (the logits) and computes the binary cross entropy between the resulting probabilities and target. Consistent with mindspore.ops.binary_cross_entropy_with_logits().

Let the input input be X, the target target be Y, the weight weight be W, and the output be L. Then,

$$p_{ij} = \mathrm{sigmoid}(X_{ij}) = \frac{1}{1 + e^{-X_{ij}}}$$

$$L_{ij} = -\left[ Y_{ij} \log(p_{ij}) + (1 - Y_{ij}) \log(1 - p_{ij}) \right]$$

Here i indicates the i-th sample and j indicates the category. Then,

$$\ell(x, y) = \begin{cases} L, & \text{if reduction} = \text{'none'}; \\ \mathrm{mean}(L), & \text{if reduction} = \text{'mean'}; \\ \mathrm{sum}(L), & \text{if reduction} = \text{'sum'}. \end{cases}$$

Here $\ell$ indicates the method of calculating the loss. There are three methods: 'none' provides the loss values directly, 'mean' calculates the average of all losses, and 'sum' calculates the sum of all losses.
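
The three reduction modes can be illustrated with a small NumPy sketch. This is illustrative only, not the MindSpore implementation; the toy loss values are arbitrary:

```python
import numpy as np

# Elementwise losses L for a toy batch (arbitrary example values).
L = np.array([[0.2, 0.5], [0.1, 0.8]])

def reduce_loss(L, reduction="mean"):
    """Apply the documented reduction to an elementwise loss tensor."""
    if reduction == "none":
        return L            # return the loss values directly
    if reduction == "mean":
        return L.mean()     # average over all elements
    if reduction == "sum":
        return L.sum()      # sum over all elements
    raise ValueError("reduction must be 'none', 'mean' or 'sum'")

print(reduce_loss(L, "mean"))  # average of the four losses, ~0.4
print(reduce_loss(L, "sum"))   # total of the four losses, ~1.6
```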

This operator will multiply the output by the corresponding weight. The tensor weight assigns different weights to each piece of data in the batch, and the tensor pos_weight adds corresponding weights to the positive examples of each category.

In addition, it can trade off recall and precision by adding weights to positive examples. In the case of multi-label classification, the loss can be described as:

$$p_{ij,c} = \mathrm{sigmoid}(X_{ij,c}) = \frac{1}{1 + e^{-X_{ij,c}}}$$

$$L_{ij,c} = -\left[ P_c Y_{ij,c} \log(p_{ij,c}) + (1 - Y_{ij,c}) \log(1 - p_{ij,c}) \right]$$

where c is the class number (c > 1 for multi-label binary classification, c = 1 for single-label binary classification), n is the number of samples in the batch, and P_c is the weight of the positive answer for class c. P_c > 1 increases the recall, while P_c < 1 increases the precision.
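
As a cross-check, the equations above can be evaluated directly in NumPy. This is an illustrative re-implementation of the documented formula, not MindSpore's actual kernel; with weight and pos_weight left at 1 it reproduces the value shown in the Examples section below:

```python
import numpy as np

def bce_with_logits(x, y, weight=1.0, pos_weight=1.0, reduction="mean"):
    """Evaluate the documented loss L = -W[P*Y*log(p) + (1-Y)*log(1-p)]."""
    log_p = -np.logaddexp(0.0, -x)    # log(sigmoid(x)), computed stably
    log_1mp = -np.logaddexp(0.0, x)   # log(1 - sigmoid(x))
    loss = -weight * (pos_weight * y * log_p + (1.0 - y) * log_1mp)
    if reduction == "mean":
        return loss.mean()
    if reduction == "sum":
        return loss.sum()
    return loss

# Same inputs as the Examples section; weight and pos_weight default to 1.
x = np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]])
y = np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]])
print(bce_with_logits(x, y))  # ≈ 0.3463612
```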

Parameters
  • input (Tensor) – Input input with shape (N, *), where * means any number of additional dimensions. The data type must be float16, float32 or bfloat16 (only Atlas A2 series products are supported).

  • target (Tensor) – Ground truth label, with the same shape as input. The data type must be float16, float32 or bfloat16 (only Atlas A2 series products are supported).

  • weight (Tensor, optional) – A rescaling weight applied to the loss of each batch element. It can be broadcast to a tensor with the shape of input. The data type must be float16, float32 or bfloat16 (only Atlas A2 series products are supported). Default: None, in which case weight is equivalent to a Tensor whose values are all 1.

  • reduction (str, optional) –

    Apply specific reduction method to the output: 'none' , 'mean' , 'sum' . Default: 'mean' .

    • 'none': no reduction will be applied.

    • 'mean': compute and return the weighted mean of elements in the output.

    • 'sum': the output elements will be summed.

  • pos_weight (Tensor, optional) – A weight for positive examples. It must be a vector with length equal to the number of classes, and can be broadcast to a tensor with the shape of input. The data type must be float16, float32 or bfloat16 (only Atlas A2 series products are supported). Default: None, in which case pos_weight is equivalent to a Tensor whose values are all 1.
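
The role of pos_weight follows from the formula: for a positive example (Y = 1) the (1 - Y) term vanishes, so the loss reduces to -P_c log(p) and scales linearly with P_c, while negative examples are unaffected. A minimal NumPy sketch of the documented formula illustrates this (illustrative values only, not MindSpore code):

```python
import numpy as np

def elementwise_bce(x, y, pos_weight=1.0):
    """Elementwise L = -[P*Y*log(p) + (1-Y)*log(1-p)] from the formula above."""
    log_p = -np.logaddexp(0.0, -x)    # log(sigmoid(x))
    log_1mp = -np.logaddexp(0.0, x)   # log(1 - sigmoid(x))
    return -(pos_weight * y * log_p + (1.0 - y) * log_1mp)

x = np.array([0.5, 0.5])
y = np.array([1.0, 0.0])   # one positive example, one negative example
base = elementwise_bce(x, y)
boosted = elementwise_bce(x, y, pos_weight=3.0)
# boosted[0] == 3 * base[0]: the positive example's loss is tripled;
# boosted[1] == base[1]: the negative example's loss is unchanged.
```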

Returns

Tensor or Scalar. If reduction is 'none', the output is a tensor with the same shape and type as input. Otherwise, the output is a scalar.

Raises
  • TypeError – If any of input, target, weight, or pos_weight is not a Tensor.

  • TypeError – If the data type of reduction is not str.

  • ValueError – If weight or pos_weight cannot be broadcast to a tensor with the shape of input.

  • ValueError – If reduction is not one of 'none', 'mean' or 'sum'.

Supported Platforms:

Ascend

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, mint
>>> input = Tensor(np.array([[-0.8, 1.2, 0.7], [-0.1, -0.4, 0.7]]), mindspore.float32)
>>> target = Tensor(np.array([[0.3, 0.8, 1.2], [-0.6, 0.1, 2.2]]), mindspore.float32)
>>> weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> pos_weight = Tensor(np.array([1.0, 1.0, 1.0]), mindspore.float32)
>>> output = mint.nn.functional.binary_cross_entropy_with_logits(input, target, weight, 'mean', pos_weight)
>>> print(output)
0.3463612