mindspore.dataset.audio.Fade
- class mindspore.dataset.audio.Fade(fade_in_len=0, fade_out_len=0, fade_shape=FadeShape.LINEAR)[source]
Add a fade in and/or fade out to a waveform.
- Parameters
fade_in_len (int, optional) – Length of fade-in (time frames), which must be non-negative. Default: 0.
fade_out_len (int, optional) – Length of fade-out (time frames), which must be non-negative. Default: 0.
fade_shape (FadeShape, optional) – Shape of fade, five different types can be chosen as defined in FadeShape. Default: FadeShape.LINEAR. A comparison of the shapes is sketched after this parameter list.
- FadeShape.QUARTER_SINE, means it tends to 0 following a quarter sine function.
- FadeShape.HALF_SINE, means it tends to 0 following a half sine function.
- FadeShape.LINEAR, means it tends to 0 linearly.
- FadeShape.LOGARITHMIC, means it tends to 0 following a logarithmic function.
- FadeShape.EXPONENTIAL, means it tends to 0 following an exponential function.
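The fade shapes above can be compared directly in eager mode. The following is a minimal sketch that assumes only the Fade and FadeShape names documented on this page; the constant waveform and the loop are illustrative, and the printed values are omitted because they depend on the chosen input.

>>> import numpy as np
>>> import mindspore.dataset.audio as audio
>>>
>>> # Apply each fade shape to the same constant signal so the fade envelope is visible.
>>> waveform = np.ones([16])
>>> shapes = [audio.FadeShape.LINEAR, audio.FadeShape.QUARTER_SINE, audio.FadeShape.HALF_SINE,
...           audio.FadeShape.LOGARITHMIC, audio.FadeShape.EXPONENTIAL]
>>> for shape in shapes:
...     faded = audio.Fade(fade_in_len=8, fade_out_len=8, fade_shape=shape)(waveform)
...     print(shape, faded.shape)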
- Raises
RuntimeError – If fade_in_len exceeds waveform length.
RuntimeError – If fade_out_len exceeds waveform length.
- Supported Platforms:
CPU
Examples
>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>>
>>> # Use the transform in dataset pipeline mode
>>> waveform = np.random.random([5, 16])  # 5 samples
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
>>> transforms = [audio.Fade(fade_in_len=3, fade_out_len=2, fade_shape=audio.FadeShape.LINEAR)]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["audio"].shape, item["audio"].dtype)
...     break
(16,) float64
>>>
>>> # Use the transform in eager mode
>>> waveform = np.random.random([16])  # 1 sample
>>> output = audio.Fade(fade_in_len=3, fade_out_len=2, fade_shape=audio.FadeShape.LINEAR)(waveform)
>>> print(output.shape, output.dtype)
(16,) float64
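As the Raises section above notes, fade_in_len and fade_out_len must not exceed the waveform length, otherwise a RuntimeError is raised. The following is a minimal sketch of clamping a requested fade length to the waveform length before constructing the transform; the requested_fade variable and the clamping step are illustrative, not part of the API.

>>> import numpy as np
>>> import mindspore.dataset.audio as audio
>>>
>>> waveform = np.random.random([16])
>>> requested_fade = 32  # longer than the 16-frame waveform; using it directly would raise RuntimeError
>>> fade_len = min(requested_fade, waveform.shape[-1])  # clamp to the waveform length
>>> output = audio.Fade(fade_in_len=fade_len, fade_out_len=fade_len)(waveform)
>>> print(output.shape, output.dtype)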
- Tutorial Examples: