mindspore.dataset.audio.AmplitudeToDB
- class mindspore.dataset.audio.AmplitudeToDB(stype=ScaleType.POWER, ref_value=1.0, amin=1e-10, top_db=80.0)[source]
Convert the input audio waveform from the amplitude/power scale to the decibel scale.
Note
The dimension of the audio waveform to be processed needs to be (…, freq, time).
- Parameters
stype (ScaleType, optional) – Scale of the input waveform, which can be ScaleType.POWER or ScaleType.MAGNITUDE. Default: ScaleType.POWER.
ref_value (float, optional) – Multiplier reference value used to generate db_multiplier, computed as \(\text{db\_multiplier} = \log_{10}(\max(\text{ref\_value}, \text{amin}))\). Default: 1.0.
amin (float, optional) – Lower bound to clamp the input waveform, which must be greater than zero. Default: 1e-10.
top_db (float, optional) – Minimum cut-off for the output in decibels, which must be non-negative; output values more than top_db below the maximum are clamped. Default: 80.0.
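As a rough sketch of how these parameters interact, the conversion can be re-implemented in plain NumPy. The function below is a hypothetical illustration (not the MindSpore implementation): the input is clamped from below by amin, converted to decibels with a multiplier of 10 (power) or 20 (magnitude), shifted by the db_multiplier derived from ref_value, and finally cut off at top_db below the maximum.

```python
import numpy as np

def amplitude_to_db(waveform, stype="power", ref_value=1.0, amin=1e-10, top_db=80.0):
    # Hypothetical NumPy sketch of the amplitude/power-to-decibel conversion.
    # Power spectrograms use a multiplier of 10, magnitude spectrograms 20.
    multiplier = 10.0 if stype == "power" else 20.0
    db_multiplier = np.log10(max(ref_value, amin))
    # Clamp the input from below by amin, convert to dB, subtract the reference.
    specgram_db = multiplier * np.log10(np.clip(waveform, amin, None))
    specgram_db -= multiplier * db_multiplier
    # Cut off values more than top_db below the maximum.
    return np.maximum(specgram_db, specgram_db.max() - top_db)
```

For example, a power input of [1.0, 0.1, 1e-12] with the defaults yields [0.0, -10.0, -80.0]: the last value is first clamped to amin (1e-10, i.e. -100 dB) and then raised to the top_db cut-off of max - 80.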
- Raises
TypeError – If stype is not of type mindspore.dataset.audio.ScaleType.
TypeError – If ref_value is not of type float.
ValueError – If ref_value is not a positive number.
TypeError – If amin is not of type float.
ValueError – If amin is not a positive number.
TypeError – If top_db is not of type float.
ValueError – If top_db is not a positive number.
RuntimeError – If input tensor is not in shape of (…, freq, time).
- Supported Platforms:
CPU
Examples
>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>> from mindspore.dataset.audio import ScaleType
>>>
>>> waveform = np.random.random([1, 400 // 2 + 1, 30])
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
>>> transforms = [audio.AmplitudeToDB(stype=ScaleType.POWER)]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])