Document feedback

Question document fragment

When the question document fragment contains a formula, the formula is displayed as a space.

Submission type: issue

Problem type: Specifications and Common Mistakes

mindformers.core.EntityScore

class mindformers.core.EntityScore

Evaluates the precision, recall, and F1 score of predicted entities against the ground truth.

Mathematically, these metrics are defined as follows:

Precision: Measures the fraction of correctly predicted entities out of all predicted entities.

$$\text{Precision} = \frac{\text{Number of correct entities}}{\text{Number of predicted entities}}$$

Recall: Measures the fraction of correctly predicted entities out of all actual entities.

$$\text{Recall} = \frac{\text{Number of correct entities}}{\text{Number of actual entities}}$$

F1 Score: The harmonic mean of precision and recall, providing a balance between them.

$$\text{F1} = \frac{2 \times \text{Precision} \times \text{Recall}}{\text{Precision} + \text{Recall}}$$
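
As a quick sanity check of the three definitions above, the scores can be computed directly from raw entity counts. This is a minimal standalone sketch; the function name `entity_scores` is illustrative and is not part of the mindformers API:

```python
def entity_scores(n_correct, n_predicted, n_actual):
    """Compute precision, recall, and F1 from raw entity counts.

    n_correct:   predicted entities that match the ground truth
    n_predicted: all predicted entities
    n_actual:    all ground-truth entities
    """
    precision = n_correct / n_predicted if n_predicted else 0.0
    recall = n_correct / n_actual if n_actual else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# 8 of 10 predicted entities are correct, against 16 ground-truth entities.
scores = entity_scores(8, 10, 16)
```

For these counts the sketch gives a precision of 0.8, a recall of 0.5, and an F1 of about 0.615, matching the harmonic-mean formula above.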

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindformers.core.metric.metric import EntityScore
>>> x = Tensor(np.array([[np.arange(0, 22)]]))
>>> y = Tensor(np.array([[21]]))
>>> metric = EntityScore()
>>> metric.clear()
>>> metric.update(x, y)
>>> result = metric.eval()
>>> print(result)
({'precision': 1.0, 'recall': 1.0, 'f1': 1.0}, {'address': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0}})
clear()

Clear the internal evaluation result.

eval()

Compute the evaluation result.

Returns

A dict of evaluation results with precision, recall, and F1 scores of entities relative to their true labels.

update(*inputs)

Update the internal evaluation result.

Parameters

*inputs (List) – Logits and labels. The logits are tensors of shape [N, C] with data type Float16 or Float32, and the labels are tensors of shape [N] with data type Int32 or Int64, where N is the batch size and C is the total number of entity types.
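
The clear/update/eval lifecycle used by this metric can be illustrated with a simplified NumPy-only accumulator. This is a toy sketch, not the actual EntityScore implementation: it reduces each row of [N, C] logits to an argmax prediction and tracks a single accuracy figure rather than per-entity precision, recall, and F1:

```python
import numpy as np

class SimpleEntityMetric:
    """Toy metric mirroring the clear()/update()/eval() lifecycle."""

    def clear(self):
        # Reset the internal counters before a new evaluation run.
        self.correct = 0
        self.total = 0

    def update(self, logits, labels):
        # logits: [N, C] scores per entity type; labels: [N] ground-truth ids.
        preds = np.argmax(logits, axis=-1)
        self.correct += int(np.sum(preds == labels))
        self.total += int(labels.shape[0])

    def eval(self):
        # Return the result accumulated over all update() calls.
        return {"accuracy": self.correct / self.total if self.total else 0.0}

metric = SimpleEntityMetric()
metric.clear()
metric.update(np.array([[0.1, 0.9], [0.8, 0.2]]), np.array([1, 1]))
result = metric.eval()  # first prediction matches, second does not -> 0.5
```

Separating clear, update, and eval lets the metric accumulate statistics over many batches and report a single result at the end, which is why EntityScore's example calls them in that order.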