mindspore.dataset.audio.LFCC
- class mindspore.dataset.audio.LFCC(sample_rate=16000, n_filter=128, n_lfcc=40, f_min=0.0, f_max=None, dct_type=2, norm=NormMode.ORTHO, log_lf=False, speckwargs=None)[source]
- Create LFCC for a raw audio signal.

- Note

  The shape of the audio waveform to be processed needs to be <…, time>.

- Parameters
  - sample_rate (int, optional) – Sample rate of audio signal. Default: 16000.
  - n_filter (int, optional) – Number of linear filters to apply. Default: 128.
  - n_lfcc (int, optional) – Number of LFC coefficients to retain. Default: 40.
  - f_min (float, optional) – Minimum frequency. Default: 0.0.
  - f_max (float, optional) – Maximum frequency. Default: None, will be set to sample_rate // 2.
  - dct_type (int, optional) – Type of DCT to use. The value can only be 2. Default: 2.
  - norm (NormMode, optional) – Norm to use. Default: NormMode.ORTHO.
  - log_lf (bool, optional) – Whether to use log-lf spectrograms instead of db-scaled. Default: False.
  - speckwargs (dict, optional) – Arguments for mindspore.dataset.audio.Spectrogram. Default: None, the default setting is a dict including:

    - 'n_fft': 400
    - 'win_length': n_fft
    - 'hop_length': win_length // 2
    - 'pad': 0
    - 'window': WindowType.HANN
    - 'power': 2.0
    - 'normalized': False
    - 'center': True
    - 'pad_mode': BorderType.REFLECT
    - 'onesided': True
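The default Spectrogram arguments above can be written out as a plain Python dict; a minimal sketch, using strings in place of the WindowType.HANN and BorderType.REFLECT enum values for illustration:

```python
# Defaults used by LFCC when speckwargs is None.
# Strings stand in for the WindowType / BorderType enums here.
n_fft = 400
default_speckwargs = {
    "n_fft": n_fft,
    "win_length": n_fft,        # win_length defaults to n_fft
    "hop_length": n_fft // 2,   # hop_length defaults to win_length // 2
    "pad": 0,
    "window": "hann",
    "power": 2.0,
    "normalized": False,
    "center": True,
    "pad_mode": "reflect",
    "onesided": True,
}
print(default_speckwargs["hop_length"])  # 200
```

Passing a dict of this shape as speckwargs overrides the defaults entry by entry.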
 
 
- Raises

  - TypeError – If sample_rate is not of type int.
  - TypeError – If n_filter is not of type int.
  - TypeError – If n_lfcc is not of type int.
  - TypeError – If norm is not of type mindspore.dataset.audio.NormMode.
  - TypeError – If log_lf is not of type bool.
  - TypeError – If speckwargs is not of type dict.
  - ValueError – If sample_rate is 0.
  - ValueError – If n_lfcc is less than 0.
  - ValueError – If f_min is greater than f_max.
  - ValueError – If f_min is greater than sample_rate // 2 when f_max is set to None.
  - ValueError – If dct_type is not 2.
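The checks above can be sketched as a standalone validator; `validate_lfcc_args` is a hypothetical helper written for illustration, not part of the MindSpore API:

```python
def validate_lfcc_args(sample_rate=16000, n_filter=128, n_lfcc=40,
                       f_min=0.0, f_max=None, dct_type=2):
    """Mirror the TypeError/ValueError rules documented for LFCC."""
    # Type checks.
    if not isinstance(sample_rate, int):
        raise TypeError("sample_rate should be of type int.")
    if not isinstance(n_filter, int):
        raise TypeError("n_filter should be of type int.")
    if not isinstance(n_lfcc, int):
        raise TypeError("n_lfcc should be of type int.")
    # Value checks.
    if sample_rate == 0:
        raise ValueError("sample_rate should not be 0.")
    if n_lfcc < 0:
        raise ValueError("n_lfcc should not be less than 0.")
    # When f_max is None it falls back to sample_rate // 2.
    effective_f_max = sample_rate // 2 if f_max is None else f_max
    if f_min > effective_f_max:
        raise ValueError("f_min should not be greater than f_max.")
    if dct_type != 2:
        raise ValueError("dct_type can only be 2.")
```

For example, `validate_lfcc_args(f_min=9000.0)` raises ValueError because the default f_max falls back to 16000 // 2 = 8000.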
 
 - Supported Platforms:
- CPU
- Examples

  >>> import numpy as np
  >>> import mindspore.dataset as ds
  >>> import mindspore.dataset.audio as audio
  >>>
  >>> # Use the transform in dataset pipeline mode
  >>> waveform = np.random.random([5, 10, 300])
  >>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
  >>> transforms = [audio.LFCC()]
  >>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])
  >>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
  ...     print(item["audio"].shape, item["audio"].dtype)
  ...     break
  (10, 40, 2) float32
  >>>
  >>> # Use the transform in eager mode
  >>> waveform = np.random.random([10, 300])  # 1 sample
  >>> output = audio.LFCC()(waveform)
  >>> print(output.shape, output.dtype)
  (10, 40, 2) float32

- Tutorial Examples:
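The trailing dimension 2 in the example output is the number of STFT frames. With the default speckwargs (n_fft=400, hop_length=200) and center=True, the frame count for a 300-sample waveform works out as sketched below; this assumes the usual centered-STFT framing formula:

```python
# With center=True, the signal is padded so that
# num_frames = 1 + time // hop_length.
time = 300        # samples per waveform in the example above
hop_length = 200  # default: win_length // 2 = 400 // 2
num_frames = 1 + time // hop_length
print(num_frames)  # 2
```

The middle dimension 40 is simply n_lfcc at its default value.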