mindspore.dataset.audio.PitchShift
- class mindspore.dataset.audio.PitchShift(sample_rate, n_steps, bins_per_octave=12, n_fft=512, win_length=None, hop_length=None, window=WindowType.HANN)
Shift the pitch of a waveform by n_steps steps.
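As a note on the semantics (standard equal-temperament pitch-shift math, an illustration rather than MindSpore's internal code): shifting by n_steps with bins_per_octave steps per octave scales every frequency by a factor of 2 ** (n_steps / bins_per_octave).

```python
# Frequency ratio implied by a pitch shift of n_steps steps, with
# bins_per_octave steps per octave. Illustrative sketch only, not the
# operator's internal implementation.
n_steps = 4
bins_per_octave = 12

ratio = 2.0 ** (n_steps / bins_per_octave)
print(round(ratio, 4))  # 1.2599 -- each frequency rises by about 26%
```

With the default bins_per_octave=12, one step corresponds to one semitone, so n_steps=12 shifts the waveform up a full octave (ratio 2.0).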
- Parameters
sample_rate (int) – Sampling rate of the waveform (in Hz).
n_steps (int) – The number of steps to shift the waveform.
bins_per_octave (int, optional) – The number of steps per octave. Default: 12.
n_fft (int, optional) – Size of FFT, creates n_fft // 2 + 1 bins. Default: 512.
win_length (int, optional) – Window size. Default: None, will be set to n_fft.
hop_length (int, optional) – Length of hop between STFT windows. Default: None, will be set to win_length // 4.
window (WindowType, optional) – Window tensor that is applied/multiplied to each frame/window. Default: WindowType.HANN.
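A minimal sketch of how the derived defaults described above resolve (an illustration based on the documented behavior, not MindSpore source code):

```python
# Sketch of the documented default resolution for PitchShift's STFT
# parameters (illustrative only, not the operator's internal code).
n_fft = 512          # documented default
win_length = None    # documented default
hop_length = None    # documented default

if win_length is None:           # "will be set to n_fft"
    win_length = n_fft
if hop_length is None:           # "will be set to win_length // 4"
    hop_length = win_length // 4

n_freq_bins = n_fft // 2 + 1     # "creates n_fft // 2 + 1 bins"
print(win_length, hop_length, n_freq_bins)  # 512 128 257
```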
- Raises
TypeError – If sample_rate is not of type int.
TypeError – If n_steps is not of type int.
TypeError – If bins_per_octave is not of type int.
TypeError – If n_fft is not of type int.
TypeError – If win_length is not of type int.
TypeError – If hop_length is not of type int.
TypeError – If window is not of type mindspore.dataset.audio.WindowType.
ValueError – If sample_rate is a negative number.
ValueError – If bins_per_octave is 0.
ValueError – If n_fft is a negative number.
ValueError – If win_length is not positive.
ValueError – If hop_length is not positive.
- Supported Platforms:
CPU
Examples
>>> import numpy as np
>>>
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>> from mindspore.dataset.audio import WindowType
>>>
>>> waveform = np.random.random([1, 1, 300])
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
>>> transforms = [audio.PitchShift(sample_rate=16000, n_steps=4)]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])