mindspore.nn.BleuScore
- class mindspore.nn.BleuScore(n_gram=4, smooth=False)[source]
Calculates the BLEU score. BLEU (bilingual evaluation understudy) is a metric for evaluating the quality of machine-translated text.
- Parameters
n_gram (int) – The n-gram value, ranging from 1 to 4. Default: 4.
smooth (bool) – Whether or not to apply smoothing. Default: False.
- Raises
ValueError – If the value of n_gram is not in the range [1, 4].
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import mindspore.nn as nn
>>>
>>> candidate_corpus = [['i', 'have', 'a', 'pen', 'on', 'my', 'desk']]
>>> reference_corpus = [[['i', 'have', 'a', 'pen', 'in', 'my', 'desk'],
...                      ['there', 'is', 'a', 'pen', 'on', 'the', 'desk']]]
>>> metric = nn.BleuScore()
>>> metric.clear()
>>> metric.update(candidate_corpus, reference_corpus)
>>> bleu_score = metric.eval()
>>> print(bleu_score)
0.5946035575013605
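For reference, the score printed in the example can be reproduced outside MindSpore with a minimal pure-Python sketch of unsmoothed BLEU: clipped n-gram precisions for orders 1 to n_gram, combined by a geometric mean and multiplied by a brevity penalty. The `bleu` helper below is illustrative only, not part of the MindSpore API.

```python
import math
from collections import Counter

def bleu(candidate, references, n_gram=4):
    """Unsmoothed BLEU for a single candidate sentence (illustrative sketch)."""
    precisions = []
    for n in range(1, n_gram + 1):
        # Candidate n-gram counts.
        cand = Counter(tuple(candidate[i:i + n])
                       for i in range(len(candidate) - n + 1))
        # Per-n-gram maximum count over all references (for clipping).
        max_ref = Counter()
        for ref in references:
            ref_counts = Counter(tuple(ref[i:i + n])
                                 for i in range(len(ref) - n + 1))
            for ng, c in ref_counts.items():
                max_ref[ng] = max(max_ref[ng], c)
        clipped = sum(min(c, max_ref[ng]) for ng, c in cand.items())
        if clipped == 0:
            return 0.0  # without smoothing, any zero precision zeroes BLEU
        precisions.append(clipped / sum(cand.values()))
    # Brevity penalty against the reference closest in length.
    c = len(candidate)
    r = min((abs(len(ref) - c), len(ref)) for ref in references)[1]
    bp = 1.0 if c > r else math.exp(1 - r / c)
    return bp * math.exp(sum(math.log(p) for p in precisions) / n_gram)

cand = ['i', 'have', 'a', 'pen', 'on', 'my', 'desk']
refs = [['i', 'have', 'a', 'pen', 'in', 'my', 'desk'],
        ['there', 'is', 'a', 'pen', 'on', 'the', 'desk']]
print(bleu(cand, refs))  # ~0.5946035575013605
```

For this sentence pair the 1- to 4-gram precisions are 7/7, 5/6, 3/5, and 1/4, and the brevity penalty is 1, giving (1/8)^(1/4) ≈ 0.5946.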
- eval()[source]
Computes the BLEU score.
- Returns
numpy.float64, the computed BLEU score.
- Raises
RuntimeError – If the update method is not called first.
- update(*inputs)[source]
Updates the internal evaluation result with candidate_corpus and reference_corpus.
- Parameters
inputs – Input candidate_corpus and reference_corpus, both lists. candidate_corpus is an iterable of machine-translated corpus, and reference_corpus is an iterable of iterables of reference corpus.
- Raises
ValueError – If the number of inputs is not 2.
ValueError – If the lengths of candidate_corpus and reference_corpus are not equal.
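The error conditions documented above can be illustrated with a small pure-Python sketch of the clear/update/eval lifecycle. `BleuMetricSketch` is a hypothetical stand-in for demonstration, not the MindSpore class; its eval() returns a placeholder count rather than the actual BLEU computation.

```python
class BleuMetricSketch:
    """Hypothetical sketch of BleuScore's clear/update/eval error handling."""

    def __init__(self):
        self.clear()

    def clear(self):
        # Reset internal state; eval() before the next update() must fail.
        self._pairs = []
        self._updated = False

    def update(self, *inputs):
        if len(inputs) != 2:
            raise ValueError("update expects candidate_corpus and reference_corpus")
        candidate_corpus, reference_corpus = inputs
        if len(candidate_corpus) != len(reference_corpus):
            raise ValueError("candidate_corpus and reference_corpus "
                             "must have equal length")
        self._pairs.extend(zip(candidate_corpus, reference_corpus))
        self._updated = True

    def eval(self):
        if not self._updated:
            raise RuntimeError("call update() before eval()")
        # Placeholder for the accumulated BLEU computation.
        return len(self._pairs)
```

Successive update() calls accumulate sentence pairs until clear() is called, which is why eval() raises RuntimeError when no update has occurred since the last clear().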