mindspore.dataset.audio.SpectralCentroid

class mindspore.dataset.audio.SpectralCentroid(sample_rate, n_fft=400, win_length=None, hop_length=None, pad=0, window=WindowType.HANN)[source]

Compute the spectral centroid for each channel along the time axis.

Parameters
  • sample_rate (int) – Sampling rate of audio signal, e.g. 44100 (Hz).

  • n_fft (int, optional) – Size of FFT, creates n_fft // 2 + 1 bins. Default: 400.

  • win_length (int, optional) – Window size. Default: None, will use n_fft.

  • hop_length (int, optional) – Length of hop between STFT windows. Default: None, will use win_length // 2.

  • pad (int, optional) – Two-sided padding of the signal. Default: 0.

  • window (WindowType, optional) – Window function applied to each frame, can be WindowType.BARTLETT, WindowType.BLACKMAN, WindowType.HAMMING, WindowType.HANN or WindowType.KAISER. Default: WindowType.HANN.
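The spectral centroid of a frame is the magnitude-weighted mean of the STFT bin frequencies. The sketch below re-derives that quantity in plain NumPy under the defaults documented above (win_length = n_fft, hop_length = win_length // 2, Hann window, centered frames); it illustrates the math only, not MindSpore's exact kernel, and the helper name spectral_centroid is hypothetical:

```python
import numpy as np

def spectral_centroid(waveform, sample_rate, n_fft=400, hop_length=None, pad=0):
    # Illustrative sketch, not MindSpore's implementation. Assumes a Hann
    # window with win_length = n_fft and reflect-padded, centered frames.
    win_length = n_fft
    if hop_length is None:
        hop_length = win_length // 2
    x = np.pad(waveform, pad)                 # two-sided `pad` samples
    x = np.pad(x, n_fft // 2, mode="reflect")  # center the frames
    window = np.hanning(win_length)
    n_frames = 1 + (len(x) - n_fft) // hop_length
    # rfft produces n_fft // 2 + 1 frequency bins, matching the doc above
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sample_rate)
    centroids = np.empty(n_frames)
    for t in range(n_frames):
        frame = x[t * hop_length: t * hop_length + n_fft] * window
        mag = np.abs(np.fft.rfft(frame))
        # Magnitude-weighted mean frequency of this frame
        centroids[t] = np.sum(freqs * mag) / np.sum(mag)
    return centroids

# A pure 1 kHz tone should yield centroids close to 1000 Hz
sine = np.sin(2 * np.pi * 1000 * np.arange(4410) / 44100)
print(spectral_centroid(sine, 44100)[:3])
```

Spectral leakage into the window's sidelobes pulls the centroid slightly above the tone frequency, which is why the values are near, rather than exactly, 1000 Hz.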

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>>
>>> # Use the transform in dataset pipeline mode
>>> waveform = np.random.random([5, 10, 20])  # 5 samples
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
>>> transforms = [audio.SpectralCentroid(44100)]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["audio"].shape, item["audio"].dtype)
...     break
(10, 1, 1) float64
>>>
>>> # Use the transform in eager mode
>>> waveform = np.random.random([10, 20])  # 1 sample
>>> output = audio.SpectralCentroid(44100)(waveform)
>>> print(output.shape, output.dtype)
(10, 1, 1) float64