mindspore.ops.ctc_loss

mindspore.ops.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0, reduction='mean', zero_infinity=False)[source]

Calculates the CTC (Connectionist Temporal Classification) loss and the gradient.

CTC is a loss function for sequence-labeling problems; it is mainly used to handle the alignment between input and output labels. While traditional sequence-labeling algorithms require the input and output symbols to be perfectly aligned at each time step, CTC extends the label set with a blank element. After labeling the sequence with this extended label set, every predicted sequence that the mapping function can convert into the true sequence counts as a correct prediction, so predictions can be scored without any data-alignment preprocessing. The objective function maximizes the sum of the probabilities of all correct prediction sequences.
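
The many-to-one mapping can be sketched as a simple collapse rule: merge consecutive repeated labels, then remove blanks. The helper below is a hypothetical illustration of that rule, not part of the MindSpore API.

>>> def ctc_collapse(path, blank=0):
...     # Hypothetical helper: merge consecutive repeats, then drop blanks.
...     out, prev = [], None
...     for label in path:
...         if label != prev and label != blank:
...             out.append(label)
...         prev = label
...     return out
>>> ctc_collapse([1, 1, 0, 1, 2, 2])  # this alignment ...
[1, 1, 2]
>>> ctc_collapse([0, 1, 0, 1, 0, 2])  # ... and this one collapse to the same target
[1, 1, 2]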

The CTC algorithm was proposed in Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks.

Parameters
  • log_probs (Tensor) – A tensor of shape \((T, N, C)\), where T is the input length, N is the batch size and C is the number of classes (including blank). Typically the log-softmax of raw network outputs; see the sketch after this parameter list.

  • targets (Tensor) – Target sequences. A tensor of shape \((N, S)\), where S is the maximum target length.

  • input_lengths (Union[tuple, Tensor]) – Lengths of the input. A tuple or Tensor of shape \((N)\).

  • target_lengths (Union[tuple, Tensor]) – Lengths of the target. A tuple or Tensor of shape \((N)\).

  • blank (int, optional) – The blank label. Default: 0.

  • reduction (str, optional) –

    Specifies the reduction method to apply to the output: 'none', 'mean' or 'sum'. Default: 'mean'.

    • 'none': no reduction will be applied.

    • 'mean': compute and return the mean of elements in the output.

    • 'sum': the output elements will be summed.

  • zero_infinity (bool, optional) – Whether to set infinite losses and their associated gradients to zero. Default: False.
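
In practice, log_probs is usually obtained by applying a log-softmax over the class axis of raw network outputs. The snippet below is a minimal sketch; the sizes T=50, N=16, C=20 and S=10 are illustrative assumptions.

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> T, N, C, S = 50, 16, 20, 10                    # assumed sizes
>>> logits = Tensor(np.random.randn(T, N, C).astype(np.float32))
>>> log_probs = ops.log_softmax(logits, axis=2)    # normalize over the class axis
>>> targets = Tensor(np.random.randint(1, C, (N, S)), mstype.int32)  # 0 is the blank label
>>> input_lengths = Tensor(np.full(N, T), mstype.int32)
>>> target_lengths = Tensor(np.full(N, S), mstype.int32)
>>> loss, _ = ops.ctc_loss(log_probs, targets, input_lengths, target_lengths)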

Returns

neg_log_likelihood (Tensor), a loss value with shape \((N)\), which is differentiable with respect to each input node.

log_alpha (Tensor), the log probability of possible alignments (traces) from input to target, with shape \((N, T, 2 * S + 1)\). The last dimension corresponds to the S target labels interleaved with blanks before, between and after them, which gives the length 2 * S + 1.

Raises
  • TypeError – If zero_infinity is not a bool or reduction is not a string.

  • TypeError – If the dtype of log_probs is not float or double.

  • TypeError – If the dtype of targets, input_lengths or target_lengths is not int32 or int64.

  • ValueError – If the rank of log_probs is not 3.

  • ValueError – If the rank of targets is not 2.

  • ValueError – If the shape of input_lengths does not match N, where N is the batch size of log_probs.

  • ValueError – If the shape of target_lengths does not match N, where N is the batch size of log_probs.

  • ValueError – If the value of blank is not in range [0, C), where C is the number of classes of log_probs.

  • RuntimeError – If any value of input_lengths is larger than T, where T is the sequence length of log_probs.

  • RuntimeError – If any target_lengths[i] is not in range [0, input_lengths[i]].

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> from mindspore import dtype as mstype
>>> log_probs = Tensor(np.array([[[0.3, 0.6, 0.6]],
...                              [[0.9, 0.4, 0.2]]]).astype(np.float32))
>>> targets = Tensor(np.array([[0, 1]]), mstype.int32)
>>> input_lengths = Tensor(np.array([2]), mstype.int32)
>>> target_lengths = Tensor(np.array([1]), mstype.int32)
>>> loss, log_alpha = ops.ctc_loss(log_probs, targets, input_lengths,
...                                target_lengths, 0, 'mean', True)
>>> print(loss)
-2.2986124
>>> print(log_alpha)
[[[0.3       0.3            -inf      -inf      -inf]
  [1.2       1.8931472 1.2            -inf      -inf]]]
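
With reduction='none', the unreduced per-sample losses of shape \((N)\) are returned instead of a single value. Reusing the tensors from the example above (a sketch showing only the output shape):

>>> loss_none, _ = ops.ctc_loss(log_probs, targets, input_lengths,
...                             target_lengths, 0, 'none', True)
>>> print(loss_none.shape)
(1,)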