Differences with torchaudio.transforms.SpectralCentroid

torchaudio.transforms.SpectralCentroid

class torchaudio.transforms.SpectralCentroid(sample_rate: int, n_fft: int = 400, win_length: Optional[int] = None,
                                             hop_length: Optional[int] = None, pad: int = 0,
                                             window_fn: Callable[..., torch.Tensor] = torch.hann_window,
                                             wkwargs: Optional[dict] = None)

For more information, see torchaudio.transforms.SpectralCentroid.

mindspore.dataset.audio.SpectralCentroid

class mindspore.dataset.audio.SpectralCentroid(sample_rate, n_fft=400, win_length=None, hop_length=None,
                                               pad=0, window=WindowType.HANN)

For more information, see mindspore.dataset.audio.SpectralCentroid.

Differences

PyTorch: Compute the spectral centroid for each channel along the time axis. A custom window function and extra keyword arguments for it are supported through window_fn and wkwargs.

MindSpore: Compute the spectral centroid for each channel along the time axis. Only a fixed set of built-in window types is supported.

| Categories | Subcategories | PyTorch     | MindSpore   | Difference |
| ---------- | ------------- | ----------- | ----------- | ---------- |
| Parameter  | Parameter1    | sample_rate | sample_rate | -          |
| Parameter  | Parameter2    | n_fft       | n_fft       | -          |
| Parameter  | Parameter3    | win_length  | win_length  | -          |
| Parameter  | Parameter4    | hop_length  | hop_length  | -          |
| Parameter  | Parameter5    | pad         | pad         | -          |
| Parameter  | Parameter6    | window_fn   | window      | MindSpore only supports 5 built-in window functions |
| Parameter  | Parameter7    | wkwargs     | -           | Arguments for the window function; not supported by MindSpore |
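
The last two rows are the main behavioral difference. As a minimal, hedged sketch (the parameter values below are illustrative and not part of the original example), torchaudio accepts any window callable and forwards extra keyword arguments to it through wkwargs, whereas MindSpore selects one of its five built-in WindowType values:

import torch
import torchaudio.transforms as T
import mindspore.dataset.audio as audio

# torchaudio: custom window callable, configured through wkwargs
torch_transform = T.SpectralCentroid(
    sample_rate=44100,
    n_fft=8,
    window_fn=torch.kaiser_window,   # any callable returning a window tensor
    wkwargs={"beta": 12.0},          # forwarded to window_fn
)

# MindSpore: choose one of the built-in window types
# (BARTLETT, BLACKMAN, HAMMING, HANN, KAISER); no extra window arguments
ms_transform = audio.SpectralCentroid(
    sample_rate=44100,
    n_fft=8,
    window=audio.WindowType.KAISER,
)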

Code Example

import numpy as np

fake_input = np.array([[[1, 1, 2, 2, 3, 3, 4]]]).astype(np.float32)  # waveform of shape (1, 1, 7)

# PyTorch
import torch
import torchaudio.transforms as T

transformer = T.SpectralCentroid(sample_rate=44100, n_fft=8, window_fn=torch.hann_window)
torch_result = transformer(torch.from_numpy(fake_input))
print(torch_result)
# Out: tensor([[[4436.1182, 3768.7986]]])

# MindSpore
import mindspore.dataset.audio as audio

transformer = audio.SpectralCentroid(sample_rate=44100, n_fft=8, window=audio.WindowType.HANN)
ms_result = transformer(fake_input)
print(ms_result)
# Out: [[[[4436.117  3768.7979]]]]
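
Both outputs follow the same underlying definition: the magnitude-weighted mean of the STFT bin frequencies. The following is a minimal NumPy sketch of that per-frame computation, shown only to clarify the quantity being computed; framing, centering and padding details differ between the two libraries, so it is not expected to reproduce the values above exactly.

import numpy as np

def frame_spectral_centroid(frame, sample_rate):
    # Magnitude-weighted mean of the bin frequencies for one windowed frame
    n_fft = len(frame)
    window = 0.5 - 0.5 * np.cos(2 * np.pi * np.arange(n_fft) / n_fft)  # periodic Hann window
    magnitude = np.abs(np.fft.rfft(frame * window))
    freqs = np.arange(len(magnitude)) * sample_rate / n_fft
    return np.sum(freqs * magnitude) / np.sum(magnitude)

# Example with a single 8-sample frame
print(frame_spectral_centroid(np.array([1, 1, 2, 2, 3, 3, 4, 4], dtype=np.float32), 44100))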