mindformers.core.EmF1Metric
- class mindformers.core.EmF1Metric
Calculate the Em and F1 scores for each example to evaluate the model's performance in prediction tasks.
Em Score: The Em score measures whether a prediction exactly matches its label, ignoring punctuation. For example, if the question is "河南的省会是哪里?" ("What is the capital of Henan?") and the label is "郑州市" ("Zhengzhou"):
When the prediction is "郑州市", the Em score is 100.
When the prediction is "郑州市。", the Em score is still 100, because the trailing punctuation is ignored.
When the prediction is "郑州", the Em score is 0.
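For intuition, below is a minimal sketch of such a punctuation-insensitive exact-match check. The normalization helper and the punctuation set are illustrative assumptions for this example and do not reflect the metric's actual internal implementation.

>>> import string
>>>
>>> # Hypothetical punctuation set for illustration: ASCII plus common Chinese marks.
>>> CN_PUNCT = "。，、？！；：“”‘’（）"
>>>
>>> def _normalize(text):
...     # Strip punctuation and surrounding whitespace before comparing.
...     return text.translate(str.maketrans("", "", string.punctuation + CN_PUNCT)).strip()
>>>
>>> def em_score(prediction, label):
...     # Em is 100 only when the normalized strings match exactly.
...     return 100.0 if _normalize(prediction) == _normalize(label) else 0.0
>>>
>>> em_score("郑州市。", "郑州市")
100.0
>>> em_score("郑州", "郑州市")
0.0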
F1 Score: The F1 score is the harmonic mean of precision and recall, calculated as:
\[F1 = \frac{2 \times \text{precision} \times \text{recall}}{\text{precision} + \text{recall}}\]
where precision and recall are defined as:
\[\text{precision} = \frac{\text{lcs_length}}{\text{len(prediction_segment)}}, \quad \text{recall} = \frac{\text{lcs_length}}{\text{len(label_segment)}}\]
Here, \(\text{lcs_length}\) is the length of the longest common subsequence (LCS) of the prediction segment and the label segment.
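As a worked illustration (assuming character-level segmentation, which is a simplification for this example): if the prediction is "郑州" and the label is "郑州市", the longest common subsequence is "郑州", with length 2, so:
\[\text{precision} = \frac{2}{2} = 1, \quad \text{recall} = \frac{2}{3}, \quad F1 = \frac{2 \times 1 \times \frac{2}{3}}{1 + \frac{2}{3}} = 0.8\]
which corresponds to an F1 score of 80 on the 0 to 100 scale the metric reports.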
Calculation Process:
First, compute the longest common subsequence (LCS) between the prediction and the label to measure how well they match. Then, compute precision and recall from the formulas above. Finally, apply the F1 formula to obtain the final F1 value. Taken together, the Em and F1 scores measure both the accuracy and the completeness of the model's predictions, which is useful when optimizing and debugging the model. A sketch of this process is shown below.
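The following sketch walks through that process end to end: it computes an LCS length with standard dynamic programming, derives precision and recall from it, and combines them into an F1 value. The function names and the character-level segmentation are assumptions made for this illustration; they are not the metric's internal code.

>>> def lcs_length(pred_tokens, label_tokens):
...     # Classic dynamic-programming LCS length.
...     m, n = len(pred_tokens), len(label_tokens)
...     dp = [[0] * (n + 1) for _ in range(m + 1)]
...     for i in range(1, m + 1):
...         for j in range(1, n + 1):
...             if pred_tokens[i - 1] == label_tokens[j - 1]:
...                 dp[i][j] = dp[i - 1][j - 1] + 1
...             else:
...                 dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
...     return dp[m][n]
>>>
>>> def f1_score(prediction_segment, label_segment):
...     # Precision and recall are both ratios of the LCS length, per the formulas above.
...     lcs = lcs_length(prediction_segment, label_segment)
...     if lcs == 0:
...         return 0.0
...     precision = lcs / len(prediction_segment)
...     recall = lcs / len(label_segment)
...     return 2 * precision * recall / (precision + recall) * 100
>>>
>>> # Character-level segmentation, used here only for illustration.
>>> round(f1_score(list("郑州"), list("郑州市")), 1)
80.0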
Examples
>>> from mindformers.core.metric.metric import EmF1Metric
>>>
>>> str_pre = ["I love Beijing, because it's beautiful", "Hello world。"]
>>> str_label = ["I love Beijing.", "Hello world"]
>>> metric = EmF1Metric()
>>> metric.clear()
>>> for pre, label in zip(str_pre, str_label):
...     metric.update([pre], [label])
>>> result = metric.eval()
>>> print(result)
The F1/Em of this example is: {'F1': 100.0, 'Em': 100.0}
F1 score: 75.0, Em score: 50.0, total_count: 2
{'F1': 75.0, 'Em': 50.0}