mindspore.dataset.audio.PitchShift

class mindspore.dataset.audio.PitchShift(sample_rate, n_steps, bins_per_octave=12, n_fft=512, win_length=None, hop_length=None, window=WindowType.HANN)

Shift the pitch of a waveform by n_steps steps. A positive n_steps shifts the pitch up and a negative value shifts it down; with the default bins_per_octave of 12, each step corresponds to one semitone.

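In equal-tempered terms (an assumption based on how comparable implementations such as librosa and torchaudio define pitch shifting, not stated in this page), shifting by n_steps scales frequencies by 2 ** (n_steps / bins_per_octave):

>>> n_steps, bins_per_octave = 4, 12
>>> round(2 ** (n_steps / bins_per_octave), 4)  # frequency scaling factor, assuming equal-tempered steps
1.2599
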
Parameters
  • sample_rate (int) – Sampling rate of waveform (in Hz).

  • n_steps (int) – Number of steps to shift the waveform.

  • bins_per_octave (int, optional) – The number of steps per octave. Default: 12.

  • n_fft (int, optional) – Size of FFT, creates n_fft // 2 + 1 bins. Default: 512.

  • win_length (int, optional) – Window size. Default: None, will be set to n_fft.

  • hop_length (int, optional) – Length of hop between STFT windows. Default: None, will be set to win_length // 4 (see the sketch after this list for how the defaults resolve).

  • window (WindowType, optional) – Window tensor that is applied/multiplied to each frame/window. Default: WindowType.HANN.

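The defaults above chain together; a minimal sketch of the arithmetic (plain Python, not part of the API):

>>> n_fft = 512
>>> win_length = n_fft            # win_length defaults to n_fft
>>> hop_length = win_length // 4  # hop_length defaults to win_length // 4
>>> n_fft // 2 + 1                # number of frequency bins the FFT creates
257
>>> (win_length, hop_length)
(512, 128)
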
Raises
  • TypeError – If sample_rate is not of type int.

  • ValueError – If sample_rate is a negative number.

  • TypeError – If n_steps is not of type int.

  • TypeError – If bins_per_octave is not of type int.

  • ValueError – If bins_per_octave is 0.

  • TypeError – If n_fft is not of type int.

  • ValueError – If n_fft is a negative number.

  • TypeError – If win_length is not of type int.

  • ValueError – If win_length is a negative number.

  • TypeError – If hop_length is not of type int.

  • ValueError – If hop_length is a negative number.

  • TypeError – If window is not of type WindowType.

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>>
>>> # Use the transform in dataset pipeline mode
>>> waveform = np.random.random([5, 8, 30])  # 5 samples
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
>>> transforms = [audio.PitchShift(sample_rate=16000, n_steps=4)]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["audio"].shape, item["audio"].dtype)
...     break
(8, 30) float64
>>>
>>> # Use the transform in eager mode
>>> waveform = np.random.random([8, 30])  # 1 sample
>>> output = audio.PitchShift(sample_rate=16000, n_steps=4)(waveform)
>>> print(output.shape, output.dtype)
(8, 30) float64
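A further illustrative sketch (the parameter values are arbitrary choices for demonstration, not recommendations): the STFT parameters can be set explicitly, and a negative n_steps shifts the pitch down; the output keeps the input shape.

>>> # Shift down 2 semitones with explicit STFT parameters (illustrative values)
>>> pitch_shift = audio.PitchShift(sample_rate=16000, n_steps=-2, bins_per_octave=12,
...                                n_fft=512, win_length=512, hop_length=128,
...                                window=audio.WindowType.HANN)
>>> output = pitch_shift(waveform)
>>> print(output.shape)
(8, 30)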