mindspore.dataset.audio.transforms.AmplitudeToDB

class mindspore.dataset.audio.transforms.AmplitudeToDB(stype=ScaleType.POWER, ref_value=1.0, amin=1e-10, top_db=80.0)[source]

Convert the input audio waveform from the amplitude/power scale to the decibel scale.

Note

The dimension of the audio waveform to be processed needs to be (…, freq, time).

Parameters
  • stype (ScaleType, optional) – Scale of the input waveform, which can be ScaleType.POWER or ScaleType.MAGNITUDE. Default: ScaleType.POWER.

  • ref_value (float, optional) –

    Multiplier reference value for generating db_multiplier. Default: 1.0. The formula is

    \(\text{db\_multiplier} = \log_{10}(\max(\text{ref\_value}, \text{amin}))\).

  • amin (float, optional) – Lower bound to clamp the input waveform, which must be greater than zero. Default: 1e-10.

  • top_db (float, optional) – Minimum cut-off decibels, which must be non-negative. Default: 80.0.
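The role of these parameters can be sketched in plain NumPy. This is a hedged sketch of the standard amplitude-to-decibel conversion (as also used by torchaudio's operator of the same name), not MindSpore's internal implementation; the helper name `amplitude_to_db` is hypothetical:

```python
import numpy as np

def amplitude_to_db(waveform, ref_value=1.0, amin=1e-10, top_db=80.0, power=True):
    # Multiplier is 10 for power-scale input, 20 for magnitude-scale input.
    multiplier = 10.0 if power else 20.0
    # db_multiplier = log10(max(ref_value, amin)), as in the formula above.
    db_multiplier = np.log10(max(ref_value, amin))
    # Clamp the input from below by amin, then convert to decibels.
    out = multiplier * np.log10(np.clip(waveform, amin, None))
    out -= multiplier * db_multiplier
    # top_db caps the dynamic range relative to the per-array maximum.
    out = np.maximum(out, out.max() - top_db)
    return out

spec = np.random.random([1, 400 // 2 + 1, 30])
spec_db = amplitude_to_db(spec)
```

With the defaults, a value of 1.0 maps to 0 dB and the output range is limited to at most `top_db` decibels below the maximum.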

Supported Platforms:

CPU

Examples

>>> import numpy as np
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.audio as audio
>>> from mindspore.dataset.audio import ScaleType
>>>
>>> waveform = np.random.random([1, 400 // 2 + 1, 30])
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=waveform, column_names=["audio"])
>>> transforms = [audio.AmplitudeToDB(stype=ScaleType.POWER)]
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=transforms, input_columns=["audio"])