mindspore.dataset.audio.MelSpectrogram

class mindspore.dataset.audio.MelSpectrogram(sample_rate=16000, n_fft=400, win_length=None, hop_length=None, f_min=0.0, f_max=None, pad=0, n_mels=128, window=WindowType.HANN, power=2.0, normalized=False, center=True, pad_mode=BorderType.REFLECT, onesided=True, norm=NormType.NONE, mel_scale=MelType.HTK)[source]

Create a mel-scale spectrogram from a raw audio signal.

Parameters
  • sample_rate (int, optional) – Sampling rate of audio signal (in Hz), which can't be less than 0. Default: 16000.

  • n_fft (int, optional) – Size of FFT, creates n_fft // 2 + 1 bins, which should be greater than 0 and less than twice the size of the last dimension of the input. Default: 400.

  • win_length (int, optional) – Window size, which should be greater than 0 and no more than n_fft. Default: None, will be set to n_fft.

  • hop_length (int, optional) – Length of hop between STFT windows, which should be greater than 0. Default: None, will be set to win_length // 2.

  • f_min (float, optional) – Minimum frequency, which can't be greater than f_max. Default: 0.0.

  • f_max (float, optional) – Maximum frequency, which can't be less than 0. Default: None, will be set to sample_rate // 2.

  • pad (int, optional) – Two sided padding of signal, which can't be less than 0. Default: 0.

  • n_mels (int, optional) – Number of mel filterbanks, which can't be less than 0. Default: 128.

  • window (WindowType, optional) – A function to create a window tensor that is applied/multiplied to each frame/window. Default: WindowType.HANN.

  • power (float, optional) – Exponent for the magnitude spectrogram, which must be greater than 0, e.g., 1 for energy, 2 for power, etc. Default: 2.0.

  • normalized (bool, optional) – Whether to normalize by magnitude after STFT. Default: False.

  • center (bool, optional) – Whether to pad waveform on both sides. Default: True.

  • pad_mode (BorderType, optional) – Controls the padding method used when center is True, which can be BorderType.REFLECT, BorderType.CONSTANT, BorderType.EDGE or BorderType.SYMMETRIC. Default: BorderType.REFLECT.

  • onesided (bool, optional) – Controls whether to return half of the results to avoid redundancy. Default: True.

  • norm (NormType, optional) – If NormType.SLANEY, divide the triangular mel weights by the width of the mel band (area normalization). Default: NormType.NONE, no normalization.

  • mel_scale (MelType, optional) – Mel scale to use, can be MelType.SLANEY or MelType.HTK. Default: MelType.HTK.
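
The triangular mel filters are placed on the scale selected by mel_scale and applied to the n_fft // 2 + 1 spectrogram bins. The sketch below illustrates the default HTK conversion between Hz and mel used to space the filter band edges; the helper names hz_to_mel_htk and mel_to_hz_htk are illustrative only and not part of the MindSpore API.

>>> import numpy as np
>>>
>>> # HTK mel scale (mel_scale=MelType.HTK): mel = 2595 * log10(1 + f / 700)
>>> def hz_to_mel_htk(f):
...     return 2595.0 * np.log10(1.0 + f / 700.0)
>>> def mel_to_hz_htk(m):
...     return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
>>>
>>> # Band edges of n_mels=128 triangular filters spaced evenly in mel between
>>> # f_min=0.0 and f_max=8000.0 (sample_rate // 2); edges are denser at low Hz.
>>> edges_hz = mel_to_hz_htk(np.linspace(hz_to_mel_htk(0.0), hz_to_mel_htk(8000.0), 128 + 2))
>>> print(edges_hz.shape)
(130,)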

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>>
>>> # Use the transform in dataset pipeline mode
>>> waveform = np.random.random([5, 32])  # 5 samples
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
>>> transforms = [audio.MelSpectrogram(sample_rate=16000, n_fft=16, win_length=16, hop_length=8, f_min=0.0,
...                                    f_max=5000.0, pad=0, n_mels=2, window=audio.WindowType.HANN, power=2.0,
...                                    normalized=False, center=True, pad_mode=audio.BorderType.REFLECT,
...                                    onesided=True, norm=audio.NormType.SLANEY, mel_scale=audio.MelType.HTK)]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["audio"].shape, item["audio"].dtype)
...     break
(2, 5) float64
>>>
>>> # Use the transform in eager mode
>>> waveform = np.random.random([32])  # 1 sample
>>> output = audio.MelSpectrogram(sample_rate=16000, n_fft=16, win_length=16, hop_length=8, f_min=0.0,
...                               f_max=5000.0, pad=0, n_mels=2, window=audio.WindowType.HANN, power=2.0,
...                               normalized=False, center=True, pad_mode=audio.BorderType.REFLECT,
...                               onesided=True, norm=audio.NormType.SLANEY,
...                               mel_scale=audio.MelType.HTK)(waveform)
>>> print(output.shape, output.dtype)
(2, 5) float64
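
The shapes above follow from the STFT framing: with center=True the number of frames is 1 + num_samples // hop_length, and the leading dimension is n_mels, so a 32-sample input with hop_length=8 and n_mels=2 gives (2, 5). A minimal check of this relation, assuming the same defaults as the examples above:

>>> import numpy as np
>>> import mindspore.dataset.audio as audio
>>>
>>> waveform = np.random.random([32])
>>> mel = audio.MelSpectrogram(n_fft=16, win_length=16, hop_length=8, n_mels=2)(waveform)
>>> print(mel.shape == (2, 1 + 32 // 8))  # (n_mels, 1 + num_samples // hop_length)
True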