mindspore.train.Perplexity

class mindspore.train.Perplexity(ignore_label=None)

Computes perplexity. Perplexity is a measure of how well a probability distribution or a model predicts a sample: a low perplexity indicates that the model predicts the sample well. It is defined as follows:

\[PP(W)=P(w_{1}w_{2}...w_{N})^{-\frac{1}{N}}=\sqrt[N]{\frac{1}{P(w_{1}w_{2}...w_{N})}}\]

where \(w_{1}, w_{2}, \ldots, w_{N}\) are the words in the corpus and \(N\) is the number of words.
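In practice the product of probabilities is evaluated in log space to avoid numerical underflow, so perplexity equals the exponential of the average negative log-likelihood of the target words. A minimal NumPy sketch of this equivalent computation, reusing the values from the example below (indexing preds by labels to obtain each \(P(w_{i})\) is an assumption about the metric's internals, not taken from this page):

>>> import numpy as np
>>> preds = np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]])
>>> labels = np.array([1, 0, 1])
>>> p = preds[np.arange(len(labels)), labels]  # probability assigned to each target word
>>> print(round(float(np.exp(-np.mean(np.log(p)))), 6))  # exp of mean negative log-likelihood
2.231443

This matches, to six decimals, the doctest output of Perplexity below.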

Parameters

ignore_label (Union[int, None]) – Index of an invalid label to be ignored when counting. If set to None, all entries are counted. Default: None. (A sketch of ignore_label usage follows the example below.)

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import Perplexity
>>> x = Tensor(np.array([[0.2, 0.5], [0.3, 0.1], [0.9, 0.6]]))
>>> y = Tensor(np.array([1, 0, 1]))
>>> metric = Perplexity(ignore_label=None)
>>> metric.clear()
>>> metric.update(x, y)
>>> perplexity = metric.eval()
>>> print(perplexity)
2.231443166940565
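When ignore_label is set, entries whose label equals that index are excluded from the computation. A minimal sketch reusing x and y from above, under the assumption that ignored entries contribute to neither the log-probability sum nor the sample count; with only the two entries labeled 1 counted, the result would be \(e^{-(\ln 0.5 + \ln 0.6)/2} \approx 1.8257\):

>>> metric = Perplexity(ignore_label=0)
>>> metric.clear()
>>> metric.update(x, y)
>>> perplexity = metric.eval()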
clear()

Clears the internal evaluation result.

eval()

Returns the current evaluation result.

Returns

numpy.float64, the computed perplexity.

Raises

RuntimeError – If the sample size is 0.

update(*inputs)

Updates the internal evaluation result with preds and labels.

Parameters

inputs – Input preds and labels. Each of preds and labels can be a Tensor, list, or numpy.ndarray. preds holds the predicted values and has shape \((N, C)\); labels holds the ground-truth labels and provides one entry per row of preds, e.g. shape \((N,)\) as in the example above.

Raises
  • ValueError – If the number of the inputs is not 2.

  • RuntimeError – If preds and labels have different lengths.

  • RuntimeError – If the shape of labels does not match the shape of preds (one label per prediction row).
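
Since update accumulates statistics until clear resets them, predictions can be fed batch by batch and evaluated once at the end. A minimal sketch, assuming the standard MindSpore Metric contract that update accumulates across calls (the batch values here are illustrative only):

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore.train import Perplexity
>>> metric = Perplexity()
>>> metric.clear()
>>> metric.update(Tensor(np.array([[0.2, 0.8], [0.7, 0.3]])), Tensor(np.array([1, 0])))
>>> metric.update(Tensor(np.array([[0.4, 0.6]])), Tensor(np.array([1])))
>>> perplexity = metric.eval()  # computed over all three accumulated samples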