mindspore.device_context.ascend.op_precision.precision_mode

mindspore.device_context.ascend.op_precision.precision_mode(mode)

Configure the mixed precision mode. The framework sets the configuration of Atlas training series products to "force_fp16" by default, and sets the configuration of other products, such as the Atlas A2 training series products, to "must_keep_origin_dtype" by default. For detailed information, please refer to the Ascend community.
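
For instance, a minimal usage sketch (assuming an Ascend backend is available) that explicitly keeps every operator at its original dtype, overriding the "force_fp16" default of Atlas training series products:

>>> import mindspore as ms
>>> # Keep all operators at their original dtype; this is already the default on
>>> # Atlas A2 training series products, so the call only changes behavior on
>>> # products whose default is "force_fp16".
>>> ms.device_context.ascend.op_precision.precision_mode("must_keep_origin_dtype")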

Note

  • The default value of precision_mode is an experimental parameter and may change in the future.

Parameters

mode (str) – The operator precision mode setting. The value range is as follows:

  • force_fp16: When the operator supports both float16 and float32, directly choose float16.

  • allow_fp32_to_fp16: For matrix-type operators, use float16. For vector-type operators, prioritize the original precision: if the operator in the network model supports float32, retain the original float32 precision; otherwise, directly reduce the precision to float16.

  • allow_mix_precision: Automatic mixed precision. For all operators in the network, automatically reduce the precision of some operators to float16 or bfloat16 according to the built-in optimization strategy (see the usage sketch after this list).

  • must_keep_origin_dtype: Maintain the original precision.

  • force_fp32: When the input of a matrix calculation operator is float16 and the output supports both float16 and float32, force the output to be converted to float32.

  • allow_fp32_to_bf16: For matrix-type operators, use bfloat16. For vector-type operators, prioritize the original precision: if the operator in the network model supports float32, retain the original float32 precision; otherwise, directly reduce the precision to bfloat16.

  • allow_mix_precision_fp16: Automatic mixed precision. For all operators in the network, automatically reduce the precision of some operators to float16 according to the built-in optimization strategy.

  • allow_mix_precision_bf16: Automatic mixed precision. For all operators in the network, automatically reduce the precision of some operators to bfloat16 according to the built-in optimization strategy.
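
As referenced in the allow_mix_precision entry above, a minimal usage sketch (assuming an Ascend backend is available) that enables automatic mixed precision and leaves the choice of which operators are reduced to float16 or bfloat16 to the built-in optimization strategy:

>>> import mindspore as ms
>>> # Let the built-in optimization strategy decide which operators are reduced
>>> # to float16 or bfloat16.
>>> ms.device_context.ascend.op_precision.precision_mode("allow_mix_precision")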

Examples

>>> import mindspore as ms
>>> ms.device_context.ascend.op_precision.precision_mode("force_fp16")