bfloat16 Datatype Support Status


Overview

bfloat16 (BF16) is a floating-point format designed to accelerate machine learning algorithms, deep learning training in particular.

The FP16 format has 5 exponent bits and 10 mantissa bits, while BF16 has 8 exponent bits and 7 mantissa bits. Compared with FP32, BF16 reduces precision (only 7 mantissa bits) but retains roughly the same dynamic range, which makes it well suited to deep learning training.
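As a quick illustration of the relationship between the two formats, a BF16 value is essentially an FP32 value with the low 16 mantissa bits dropped. The following plain-Python sketch (not a MindSpore API; the helper names are made up for this example) truncates an FP32 bit pattern to show the precision loss:

```python
import struct

def fp32_to_bf16_bits(value: float) -> int:
    """Keep only the upper 16 bits of the FP32 encoding (sign, 8 exponent bits, 7 mantissa bits).
    Real hardware usually rounds to nearest; truncation keeps the sketch simple."""
    fp32_bits = struct.unpack("<I", struct.pack("<f", value))[0]
    return fp32_bits >> 16

def bf16_bits_to_fp32(bits: int) -> float:
    """Expand a BF16 bit pattern back to FP32 by zero-filling the low 16 mantissa bits."""
    return struct.unpack("<f", struct.pack("<I", bits << 16))[0]

x = 3.1415926
print(x, bf16_bits_to_fp32(fp32_to_bf16_bits(x)))  # 3.1415926 -> 3.140625: same range, less precision
```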

Support List

  • When computing with tensors of the bfloat16 data type, the operators involved must also support the bfloat16 data type. Currently, only the Ascend backend has adapted operators.

  • The bfloat16 data type does not support implicit type conversion: when the data types of operands are inconsistent, bfloat16 is not automatically promoted to a higher-precision type, so any conversion must be written explicitly (see the sketch after this list).
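A minimal sketch of the explicit-cast workflow, assuming a MindSpore build that provides the `mindspore.bfloat16` dtype and an Ascend backend (the tensor values are arbitrary illustrations):

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, ops

cast = ops.Cast()

# numpy has no bfloat16, so build float32 tensors first and cast them explicitly.
x = cast(Tensor(np.array([1.0, 2.0, 3.0], np.float32)), ms.bfloat16)
y = Tensor(np.array([0.5, 0.5, 0.5], np.float32))

# bfloat16 operands are not implicitly promoted when dtypes differ, so cast y to
# bfloat16 (or x to float32) before combining the two tensors.
z = ops.Mul()(x, cast(y, ms.bfloat16))

# Tensor.asnumpy() cannot handle bfloat16 either (see the table below), so cast back first.
print(cast(z, ms.float32).asnumpy())
```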

| API Name | Ascend | Descriptions | Version |
|----------|--------|--------------|---------|
| mindspore.Tensor.asnumpy | ❌ | Since numpy does not support the bfloat16 data type, it is not possible to convert a tensor of bfloat16 type to numpy type. | |
| mindspore.amp.auto_mixed_precision | ✔️ | When using the auto-mixed-precision interface, you can specify bfloat16 as the low-precision data type. | |
| mindspore.amp.custom_mixed_precision | ✔️ | When using the custom-mixed-precision interface, you can specify bfloat16 as the low-precision data type. | |
| mindspore.ops.AllGather | ✔️ | | 2.2.10 |
| mindspore.ops.AllReduce | ✔️ | | 2.2.10 |
| mindspore.ops.BatchMatMul | ✔️ | | 2.2.10 |
| mindspore.ops.Broadcast | ✔️ | | 2.2.10 |
| mindspore.ops.Cast | ✔️ | | 2.2.0 |
| mindspore.ops.LayerNorm | ✔️ | | 2.2.0 |
| mindspore.ops.Mul | ✔️ | | 2.2.0 |
| mindspore.ops.ReduceScatter | ✔️ | | 2.2.10 |
| mindspore.ops.ReduceSum | ✔️ | | 2.2.0 |
| mindspore.ops.Sub | ✔️ | | 2.2.0 |
| mindspore.ops.Softmax | ✔️ | | 2.2.0 |
| mindspore.ops.Transpose | ✔️ | | 2.2.0 |

Overall bfloat16 data type support is in place, and more operators will add support for the bfloat16 data type in future versions.
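As a usage sketch for the mixed-precision entries in the table above, the snippet below wraps a small network with mindspore.amp.auto_mixed_precision and selects bfloat16 as the low-precision type. The `dtype` keyword and the "O2" level are assumptions for illustration; consult the auto_mixed_precision documentation of your MindSpore version.

```python
import mindspore as ms
from mindspore import nn, amp

# Any nn.Cell can be wrapped; a single Dense layer keeps the sketch short.
net = nn.Dense(16, 8)

# Request bfloat16 (instead of the default float16) as the low-precision data type.
# NOTE: the `dtype` keyword and the "O2" level are assumptions for this sketch.
net = amp.auto_mixed_precision(net, amp_level="O2", dtype=ms.bfloat16)
```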