mindformers.core.EntityScore
- class mindformers.core.EntityScore[source]
Evaluates the precision, recall, and F1 score of predicted entities against the ground truth.
Mathematically, these metrics are defined as follows:
Precision: the fraction of correctly predicted entities among all predicted entities, precision = TP / (TP + FP).
Recall: the fraction of correctly predicted entities among all ground-truth entities, recall = TP / (TP + FN).
F1 Score: the harmonic mean of precision and recall, F1 = 2 * precision * recall / (precision + recall), which balances the two.
Here TP, FP, and FN denote the numbers of true-positive, false-positive, and false-negative entities, respectively.
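The definitions above can be sketched in plain Python. This is an illustrative helper, not the mindformers implementation; entities are represented here as hypothetical (type, span) tuples.

```python
# Minimal sketch of entity-level precision, recall, and F1
# (not the mindformers implementation). Entities are (type, span) tuples.

def entity_prf(pred_entities, true_entities):
    """Compute precision, recall, and F1 over sets of entities."""
    pred, true = set(pred_entities), set(true_entities)
    tp = len(pred & true)  # correctly predicted entities
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(true) if true else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: 2 of 3 predictions are correct; 2 of 4 true entities are found,
# so precision = 2/3, recall = 1/2, and F1 = 4/7.
pred = [("address", (0, 3)), ("name", (5, 7)), ("name", (9, 11))]
true = [("address", (0, 3)), ("name", (5, 7)),
        ("org", (12, 15)), ("date", (16, 18))]
result = entity_prf(pred, true)
print(result)
```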
Examples
>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindformers.core.metric.metric import EntityScore
>>> x = Tensor(np.array([[np.arange(0, 22)]]))
>>> y = Tensor(np.array([[21]]))
>>> metric = EntityScore()
>>> metric.clear()
>>> metric.update(x, y)
>>> result = metric.eval()
>>> print(result)
({'precision': 1.0, 'recall': 1.0, 'f1': 1.0}, {'address': {'precision': 1.0, 'recall': 1.0, 'f1': 1.0}})
- eval()[source]
Compute the evaluation result from the accumulated statistics.
- Returns
A dict of evaluation results with precision, recall, and F1 scores of entities relative to their true labels.
- update(*inputs)[source]
Update the internal evaluation result with a batch of predictions and labels.
- Parameters
*inputs (List) – Logits and labels. The logits are tensors of shape [N, C]
with data type Float16 or Float32, and the labels are tensors of shape [N] with data type Int32 or Int64, where N is the batch size and C is the total number of entity types.
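The clear()/update()/eval() lifecycle follows the usual streaming-metric pattern: clear resets state, update accumulates counts batch by batch, and eval computes the final scores. A hypothetical pure-Python sketch of that pattern (not the mindformers class, and using toy entity tuples rather than tensors):

```python
# Toy illustration of the clear()/update()/eval() metric lifecycle.
# This mirrors the pattern EntityScore follows, not its actual internals.

class ToyEntityMetric:
    def __init__(self):
        self.clear()

    def clear(self):
        # Reset accumulated counts before a new evaluation pass.
        self.tp = self.n_pred = self.n_true = 0

    def update(self, pred_entities, true_entities):
        # Accumulate entity counts over one batch.
        pred, true = set(pred_entities), set(true_entities)
        self.tp += len(pred & true)
        self.n_pred += len(pred)
        self.n_true += len(true)

    def eval(self):
        p = self.tp / self.n_pred if self.n_pred else 0.0
        r = self.tp / self.n_true if self.n_true else 0.0
        f1 = 2 * p * r / (p + r) if p + r else 0.0
        return {"precision": p, "recall": r, "f1": f1}

metric = ToyEntityMetric()
metric.clear()
metric.update([("name", 0)], [("name", 0), ("org", 1)])  # batch 1
metric.update([("org", 2)], [("org", 2)])                # batch 2
print(metric.eval())
```

Because counts (not per-batch scores) are accumulated, eval() returns the exact corpus-level precision, recall, and F1 rather than an average of batch scores.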