mindspore.dataset.audio.MFCC
class mindspore.dataset.audio.MFCC(sample_rate=16000, n_mfcc=40, dct_type=2, norm=NormMode.ORTHO, log_mels=False, melkwargs=None)
Create MFCC for a raw audio signal.

Parameters
- sample_rate (int, optional) – Sampling rate of the audio signal (in Hz), which can't be less than 0. Default: 16000.
- n_mfcc (int, optional) – Number of MFC coefficients to retain, which can't be less than 0. Default: 40.
- dct_type (int, optional) – Type of DCT (discrete cosine transform) to use; can only be 2. Default: 2.
- norm (NormMode, optional) – Norm to use. Default: NormMode.ORTHO.
- log_mels (bool, optional) – Whether to use log-mel spectrograms instead of dB-scaled ones. Default: False.
- melkwargs (dict, optional) – Arguments for mindspore.dataset.audio.MelSpectrogram (see the usage sketch after this parameter list). Default: None, which uses the following settings:
  - 'n_fft': 400
  - 'win_length': n_fft
  - 'hop_length': win_length // 2
  - 'f_min': 0.0
  - 'f_max': sample_rate // 2
  - 'pad': 0
  - 'window': WindowType.HANN
  - 'power': 2.0
  - 'normalized': False
  - 'center': True
  - 'pad_mode': BorderType.REFLECT
  - 'onesided': True
  - 'norm': NormType.NONE
  - 'mel_scale': MelType.HTK
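
The following is a minimal, unofficial sketch of overriding the mel-spectrogram settings through melkwargs. The specific values (n_fft=512, hop_length=256, f_max=8000.0, n_mfcc=13) are arbitrary assumptions, and mirroring every default key in the dict is likewise an assumption about how melkwargs is consumed; see mindspore.dataset.audio.MelSpectrogram for the authoritative parameter set.

>>> import numpy as np
>>> import mindspore.dataset.audio as audio
>>> from mindspore.dataset.audio import WindowType, BorderType, NormType, MelType
>>>
>>> # Assumed example values; every key mirrors the default dict listed above.
>>> mel_args = {'n_fft': 512, 'win_length': 512, 'hop_length': 256,
...             'f_min': 0.0, 'f_max': 8000.0, 'pad': 0,
...             'window': WindowType.HANN, 'power': 2.0, 'normalized': False,
...             'center': True, 'pad_mode': BorderType.REFLECT,
...             'onesided': True, 'norm': NormType.NONE, 'mel_scale': MelType.HTK}
>>> mfcc = audio.MFCC(sample_rate=16000, n_mfcc=13, melkwargs=mel_args)
>>>
>>> # Apply in eager mode to one second of random 16 kHz audio; the output has
>>> # shape (n_mfcc, time), where time depends on hop_length and center padding.
>>> waveform = np.random.random([16000]).astype(np.float32)
>>> coeffs = mfcc(waveform)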
 
 
Raises
- TypeError – If sample_rate is not of type int.
- TypeError – If log_mels is not of type bool.
- TypeError – If norm is not of type mindspore.dataset.audio.NormMode.
- TypeError – If n_mfcc is not of type int.
- TypeError – If melkwargs is not of type dict.
- ValueError – If sample_rate is a negative number.
- ValueError – If n_mfcc is a negative number.
- ValueError – If dct_type is not 2.
 
Supported Platforms:
- CPU
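
A short worked note on the shapes in the examples below, assuming the default melkwargs (n_fft=400, hop_length=200, center=True): the first dimension equals n_mfcc (128 here), and the time dimension for a 500-sample input is

>>> 1 + 500 // 200  # frames produced with center padding at the default hop length
3

which matches the (128, 3) output shown below.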
Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>>
>>> # Use the transform in dataset pipeline mode
>>> waveform = np.random.random([5, 500])  # 5 samples
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
>>> transforms = [audio.MFCC(4000, 128, 2)]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["audio"].shape, item["audio"].dtype)
...     break
(128, 3) float32
>>>
>>> # Use the transform in eager mode
>>> waveform = np.random.random([500])  # 1 sample
>>> output = audio.MFCC(4000, 128, 2)(waveform)
>>> print(output.shape, output.dtype)
(128, 3) float32

Tutorial Examples: