Document feedback

Question document fragment

When a question document fragment contains a formula, the formula is rendered as a blank space.

Submission type
issue

Problem type

- Specifications and Common Mistakes:

- Misspellings or punctuation mistakes, incorrect formulas, abnormal display.

- Incorrect links, empty cells, or wrong formats.

- Chinese characters in English context.

- Minor inconsistencies between the UI and descriptions.

- Low writing fluency that does not affect understanding.

- Incorrect version numbers, including software package names and version numbers on the UI.


- Usability:

- Incorrect or missing key steps.

- Missing main function descriptions, keyword explanations, necessary prerequisites, or precautions.

- Ambiguous descriptions, unclear references, or contradictory context.

- Unclear logic, such as missing classifications, items, and steps.


- Correctness:

- Technical principles, function descriptions, supported platforms, parameter types, or exceptions that are inconsistent with the software implementation.

- Incorrect schematic or architecture diagrams.

- Incorrect commands or command parameters.

- Incorrect code.

- Commands inconsistent with the functions.

- Wrong screenshots.

- Sample code fails to run, or running results are inconsistent with expectations.


- Risk Warnings:

- Lack of risk warnings for operations that may damage the system or important data.


- Content Compliance:

- Content that may violate applicable laws and regulations, or words and expressions sensitive to geographical or cultural contexts.

- Copyright infringement.


Problem description


mindspore.nn.MultiClassDiceLoss

class mindspore.nn.MultiClassDiceLoss(weights=None, ignore_indiex=None, activation='softmax')

When there are multiple classes, the label is converted into multiple binary labels via one-hot encoding. Each slice along the channel dimension can then be treated as a binary classification problem, so the loss is obtained by computing the binary mindspore.nn.DiceLoss for each category and averaging these binary losses.
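The averaging described above can be written out explicitly. The following is a sketch, assuming the binary Dice loss takes its standard squared-denominator form with a small smooth term ε (the symbols p, y, ε, and C are notation introduced here, not taken from the API):

```latex
\mathrm{dice}_c \;=\; 1 - \frac{2\sum_{i} p_{i,c}\, y_{i,c}}{\sum_{i} p_{i,c}^{2} + \sum_{i} y_{i,c}^{2} + \varepsilon},
\qquad
\mathcal{L} \;=\; \frac{1}{C}\sum_{c=1}^{C} \mathrm{dice}_c
```

where p denotes the activated logits (softmax by default), y the one-hot labels, and C the number of classes.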

Parameters
  • weights (Union[Tensor, None]) – Tensor of shape (num_classes, dim). weights.shape[0] should be equal to labels.shape[1]. Default: None.

  • ignore_indiex (Union[int, None]) – Class index to ignore. Default: None.

  • activation (Union[str, Cell]) – Activation function applied to the output of the fully connected layer, e.g. 'relu'. Default: 'softmax'. Choose from: ['softmax', 'logsoftmax', 'relu', 'relu6', 'tanh', 'sigmoid'].

Inputs:
  • logits (Tensor) - Tensor of shape (N, C, *) where * means any number of additional dimensions. The logits dimension should be greater than 1. The data type must be float16 or float32.

  • labels (Tensor) - Tensor of shape (N, C, *), the same shape as logits. The labels dimension should be greater than 1. The data type must be float16 or float32.

Outputs:

Tensor, the multi-class Dice loss obtained by averaging the per-class binary Dice losses.

Raises
  • ValueError – If the shape of logits is different from that of labels.

  • TypeError – If the type of logits or labels is not a tensor.

  • ValueError – If the dimension of logits or labels is less than 2.

  • ValueError – If weights.shape[0] is not equal to labels.shape[1].

  • ValueError – If weights is a tensor, but its dimension is not 2.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> loss = nn.MultiClassDiceLoss(weights=None, ignore_indiex=None, activation="softmax")
>>> logits = Tensor(np.array([[0.2, 0.5, 0.7], [0.3, 0.1, 0.5], [0.9, 0.6, 0.3]]), mindspore.float32)
>>> labels = Tensor(np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]]), mindspore.float32)
>>> output = loss(logits, labels)
>>> print(output)
0.54958105
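The printed value can be sanity-checked outside MindSpore. The sketch below re-implements the computation in NumPy, assuming each per-class binary Dice loss is 1 − 2·Σ(p·y) / (Σp² + Σy² + smooth) with smooth = 1e-5; this is an illustration of the averaging scheme, not the MindSpore source.

```python
import numpy as np

def multiclass_dice_loss(logits, labels, smooth=1e-5):
    """Mean of per-class binary Dice losses (illustrative re-implementation)."""
    # Softmax over the class axis, matching the default activation='softmax'.
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    losses = []
    for c in range(labels.shape[1]):          # one binary problem per class/channel
        p, y = probs[:, c], labels[:, c]
        intersection = (p * y).sum()
        union = (p * p).sum() + (y * y).sum()
        losses.append(1.0 - 2.0 * intersection / (union + smooth))
    return float(np.mean(losses))             # average of the binary losses

logits = np.array([[0.2, 0.5, 0.7], [0.3, 0.1, 0.5], [0.9, 0.6, 0.3]])
labels = np.array([[0., 1., 0.], [1., 0., 0.], [0., 0., 1.]])
print(multiclass_dice_loss(logits, labels))   # close to the documented 0.54958105
```

Under this assumed formulation the NumPy result agrees with the documented example output to within float32 precision.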