mindspore.JitConfig
- class mindspore.JitConfig(jit_level='', exc_mode='auto', jit_syntax_level='', debug_level='RELEASE', infer_boost='off', **kwargs)[source]
JIT configuration for graph compilation.
- Parameters
jit_level (str, optional) –
Used to control the compilation optimization level. Supports ["O0", "O1", "O2"]. Default: "", the framework automatically selects the execution method. Setting this is not recommended; it is recommended to use the jit decorator instead.
"O0": Except for optimizations that may affect functionality, all other optimizations are turned off; adopts KernelByKernel execution mode.
"O1": Uses commonly used optimizations and automatic operator fusion optimizations; adopts KernelByKernel execution mode.
"O2": Ultimate performance optimization; adopts Sink execution mode.
exc_mode (str, optional) –
Controls the execution mode of the model. Supports ["auto", "sink", "no_sink"]. Default: "auto".
"auto": The framework automatically selects the execution method.
"sink": Loads the entire network onto the device at once and then executes it driven by input, without iterating through each operator, to achieve better execution performance. This mode is only supported on the Ascend backend.
"no_sink": Executes the network model asynchronously, one single operator at a time.
jit_syntax_level (str, optional) –
JIT syntax level for graph compiling. The value must be "STRICT", "LAX" or "". Default: "", which means this JitConfig configuration is ignored and the jit_syntax_level of ms.context is used instead. For more details about ms.context, refer to set_context.
"STRICT": Only basic syntax is supported, and execution performance is optimal. Can be used for MindIR load and export.
"LAX": Compatible with all Python syntax as much as possible. However, execution performance may be affected and not optimal. Cannot be used for MindIR load and export, because some syntax may not be exportable.
debug_level (str, optional) –
Sets the debugging level for graph compiling. The value must be "RELEASE" or "DEBUG". Default: "RELEASE".
"RELEASE": Used for normal running; some debug information is discarded to get better compiling performance.
"DEBUG": Used for debugging when errors occur; more information is recorded during compiling.
infer_boost (str, optional) –
Enables inference boost mode. The value must be "on" or "off". Default: "off", which means inference boost is disabled. When inference boost mode is enabled, MindSpore uses high-performance kernel libraries and a faster runtime to maximize inference speed. Note: infer boost currently only supports jit_level == "O0", and only Atlas A2 series products are supported.
**kwargs (dict) – A dictionary of keyword arguments that the class needs.
Examples
>>> from mindspore import JitConfig
>>>
>>> jitconfig = JitConfig(jit_level="O1")
>>>
>>> # Define the network structure of LeNet5. Refer to
>>> # https://gitee.com/mindspore/docs/blob/master/docs/mindspore/code/lenet.py
>>> net = LeNet5()
>>>
>>> net.set_jit_config(jitconfig)