mindspore.dataset.audio.MelSpectrogram
- class mindspore.dataset.audio.MelSpectrogram(sample_rate=16000, n_fft=400, win_length=None, hop_length=None, f_min=0.0, f_max=None, pad=0, n_mels=128, window=WindowType.HANN, power=2.0, normalized=False, center=True, pad_mode=BorderType.REFLECT, onesided=True, norm=NormType.NONE, mel_scale=MelType.HTK)
Create a MelSpectrogram for a raw audio signal.
- Parameters
sample_rate (int, optional) – Sampling rate of audio signal (in Hz), which can't be less than 0. Default: 16000.
n_fft (int, optional) – Size of FFT, creates n_fft // 2 + 1 bins, which should be greater than 0 and less than twice the size of the last dimension of the input. Default: 400.
win_length (int, optional) – Window size, which should be greater than 0 and no more than n_fft. Default: None, will be set to n_fft (the sketch after this list illustrates how the None defaults resolve).
hop_length (int, optional) – Length of hop between STFT windows, which should be greater than 0. Default: None, will be set to win_length // 2.
f_min (float, optional) – Minimum frequency, which can't be greater than f_max. Default: 0.0.
f_max (float, optional) – Maximum frequency, which can't be less than 0. Default: None, will be set to sample_rate // 2.
pad (int, optional) – Two-sided padding of the signal, which can't be less than 0. Default: 0.
n_mels (int, optional) – Number of mel filterbanks, which can't be less than 0. Default: 128.
window (WindowType, optional) – A function to create a window tensor that is applied/multiplied to each frame/window. Default: WindowType.HANN.
power (float, optional) – Exponent for the magnitude spectrogram, which must be greater than 0, e.g., 1 for energy, 2 for power, etc. Default: 2.0.
normalized (bool, optional) – Whether to normalize by magnitude after STFT. Default: False.
center (bool, optional) – Whether to pad the waveform on both sides. Default: True.
pad_mode (BorderType, optional) – Controls the padding method used when center is True, can be BorderType.REFLECT, BorderType.CONSTANT, BorderType.EDGE or BorderType.SYMMETRIC. Default: BorderType.REFLECT.
onesided (bool, optional) – Controls whether to return half of the results to avoid redundancy. Default: True.
norm (NormType, optional) – If NormType.SLANEY, divide the triangular mel weights by the width of the mel band (area normalization). Default: NormType.NONE, no normalization.
mel_scale (MelType, optional) – Mel scale to use, can be MelType.SLANEY or MelType.HTK. Default: MelType.HTK.
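A minimal sketch of how the None defaults resolve and of the output layout (n_mels, time). It assumes that with center=True the time axis has 1 + input_length // hop_length frames, which is consistent with the shapes shown in the Examples below.
>>> import numpy as np
>>> import mindspore.dataset.audio as audio
>>>
>>> # Only n_fft and n_mels are given, so per the defaults above win_length resolves
>>> # to n_fft (16), hop_length to win_length // 2 (8) and f_max to sample_rate // 2.
>>> waveform = np.random.random([32])
>>> mel_spectrogram = audio.MelSpectrogram(sample_rate=16000, n_fft=16, n_mels=2)
>>> output = mel_spectrogram(waveform)
>>> # With center=True, the time axis has 1 + 32 // 8 = 5 frames, so output is (n_mels, time).
>>> print(output.shape)
(2, 5)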
- Raises
TypeError – If sample_rate is not of type int.
TypeError – If n_fft is not of type int.
TypeError – If n_mels is not of type int.
TypeError – If f_min is not of type float.
TypeError – If f_max is not of type float.
TypeError – If window is not of type mindspore.dataset.audio.WindowType.
TypeError – If norm is not of type mindspore.dataset.audio.NormType.
TypeError – If mel_scale is not of type mindspore.dataset.audio.MelType.
TypeError – If power is not of type float.
TypeError – If normalized is not of type bool.
TypeError – If center is not of type bool.
TypeError – If pad_mode is not of type mindspore.dataset.audio.BorderType.
TypeError – If onesided is not of type bool.
TypeError – If pad is not of type int.
TypeError – If win_length is not of type int.
TypeError – If hop_length is not of type int.
ValueError – If sample_rate is a negative number.
ValueError – If n_fft is not positive.
ValueError – If n_mels is a negative number.
ValueError – If f_min is greater than f_max .
ValueError – If f_max is a negative number.
ValueError – If f_min is not less than sample_rate // 2 when f_max is set to None.
ValueError – If power is not positive.
ValueError – If pad is a negative number.
ValueError – If win_length is not positive.
ValueError – If hop_length is not positive.
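A hedged sketch of one of the value checks above; it assumes the ValueError is raised when the transform is constructed.
>>> import mindspore.dataset.audio as audio
>>>
>>> # n_fft must be positive (see the ValueError entry above); this sketch assumes
>>> # the check runs at construction time.
>>> try:
...     _ = audio.MelSpectrogram(n_fft=0)
... except ValueError:
...     print("n_fft must be positive")
n_fft must be positive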
- Supported Platforms:
CPU
Examples
>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>>
>>> # Use the transform in dataset pipeline mode
>>> waveform = np.random.random([5, 32])  # 5 samples
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
>>> transforms = [audio.MelSpectrogram(sample_rate=16000, n_fft=16, win_length=16, hop_length=8, f_min=0.0,
...                                    f_max=5000.0, pad=0, n_mels=2, window=audio.WindowType.HANN, power=2.0,
...                                    normalized=False, center=True, pad_mode=audio.BorderType.REFLECT,
...                                    onesided=True, norm=audio.NormType.SLANEY, mel_scale=audio.MelType.HTK)]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["audio"].shape, item["audio"].dtype)
...     break
(2, 5) float64
>>>
>>> # Use the transform in eager mode
>>> waveform = np.random.random([32])  # 1 sample
>>> output = audio.MelSpectrogram(sample_rate=16000, n_fft=16, win_length=16, hop_length=8, f_min=0.0,
...                               f_max=5000.0, pad=0, n_mels=2, window=audio.WindowType.HANN, power=2.0,
...                               normalized=False, center=True, pad_mode=audio.BorderType.REFLECT,
...                               onesided=True, norm=audio.NormType.SLANEY,
...                               mel_scale=audio.MelType.HTK)(waveform)
>>> print(output.shape, output.dtype)
(2, 5) float64
- Tutorial Examples: