mindspore.dataset.audio.Filtfilt

class mindspore.dataset.audio.Filtfilt(a_coeffs, b_coeffs, clamp=True)

Apply an IIR filter forward and backward to a waveform.
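
Filtering forward and then backward cancels the phase delay of a single forward pass (zero-phase filtering), at the cost of applying the filter's magnitude response twice. Below is a minimal conceptual sketch of that idea, assuming SciPy is available; it only illustrates the principle and is not this operator's implementation.

>>> import numpy as np
>>> from scipy.signal import lfilter  # assumption: SciPy is available
>>>
>>> x = np.random.random(16)
>>> b, a = [0.1, 0.2, 0.3], [1.0, -0.5, 0.25]
>>> forward = lfilter(b, a, x)               # forward pass
>>> backward = lfilter(b, a, forward[::-1])  # filter the time-reversed signal
>>> y = backward[::-1]                       # reverse again to restore time order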

Parameters
  • a_coeffs (Sequence[float]) – Denominator coefficients of the difference equation. Lower-delay coefficients come first, e.g. [a0, a1, a2, …]. Must be the same size as b_coeffs (pad with 0's as necessary).

  • b_coeffs (Sequence[float]) – Numerator coefficients of the difference equation. Lower-delay coefficients come first, e.g. [b0, b1, b2, …]. Must be the same size as a_coeffs (pad with 0's as necessary; see the coefficient-preparation sketch after this parameter list).

  • clamp (bool, optional) – If True, clamp the output signal to be in the range [-1, 1]. Default: True.
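
A hypothetical coefficient-preparation sketch, showing only how to make a_coeffs and b_coeffs the same length before constructing the transform. The filter design via scipy.signal.butter and the names b, a and filtfilt_op are illustrative assumptions, not part of this API.

>>> import mindspore.dataset.audio as audio
>>> from scipy.signal import butter  # assumption: SciPy is available for filter design
>>>
>>> b, a = butter(2, 0.25)       # 2nd-order low-pass, normalized cutoff 0.25
>>> b, a = list(b), list(a)      # both already have 3 coefficients here
>>> while len(b) < len(a):       # pad the shorter list with trailing zeros
...     b.append(0.0)
>>> while len(a) < len(b):
...     a.append(0.0)
>>> filtfilt_op = audio.Filtfilt(a_coeffs=a, b_coeffs=b)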

Raises
  • TypeError – If a_coeffs is not of type Sequence[float].

  • TypeError – If b_coeffs is not of type Sequence[float].

  • ValueError – If a_coeffs and b_coeffs are of different sizes.

  • TypeError – If clamp is not of type bool.

  • RuntimeError – If the shape of the input audio is not <…, time>.

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>>
>>> # Use the transform in dataset pipeline mode
>>> waveform = np.random.random([5, 16])  # 5 samples
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
>>> transforms = [audio.Filtfilt(a_coeffs=[0.1, 0.2, 0.3], b_coeffs=[0.1, 0.2, 0.3])]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["audio"].shape, item["audio"].dtype)
...     break
(16,) float64
>>>
>>> # Use the transform in eager mode
>>> waveform = np.random.random([16])  # 1 sample
>>> output = audio.Filtfilt(a_coeffs=[0.1, 0.2, 0.3], b_coeffs=[0.1, 0.2, 0.3])(waveform)
>>> print(output.shape, output.dtype)
(16,) float64
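>>>
>>> # Illustrative variation (not from the original example): with clamp=False
>>> # the output is no longer limited to the range [-1, 1].
>>> output = audio.Filtfilt(a_coeffs=[0.1, 0.2, 0.3], b_coeffs=[0.1, 0.2, 0.3], clamp=False)(waveform)
>>> print(output.shape)
(16,)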