mindspore.jit

mindspore.jit(function: Optional[Callable] = None, *, capture_mode: str = 'ast', jit_level: str = 'O0', dynamic: bool = False, fullgraph: bool = False, backend: str = '', **options)[source]

Create a callable MindSpore graph from a Python function.

This allows the MindSpore runtime to apply graph-based optimizations.

Note

  • Running a function decorated with @jit(capture_mode="bytecode") in static graph mode is not supported; in that case the @jit(capture_mode="bytecode") decoration is ignored.

  • Calling a function decorated with @jit(capture_mode="bytecode") from inside a function decorated with @jit(capture_mode="ast") is not supported; the @jit(capture_mode="bytecode") decoration is ignored.

Parameters

function (Function, optional) – The Python function that will be run as a graph. Default: None.

Keyword Arguments
  • capture_mode (str, optional) –

    The method used to create a callable MindSpore graph. The value of capture_mode must be ast, bytecode or trace. Default: ast.

    • ast : Parse Python ast to build graph.

    • bytecode : Parse Python bytecode to build graph at runtime. This is an experimental prototype that is subject to change and/or deletion.

    • trace : Trace the execution of Python code to build graph. This is an experimental prototype that is subject to change and/or deletion.

  • jit_level (str, optional) –

    Controls the compilation optimization level. Currently effective only with the default backend. The value of jit_level must be O0 or O1. Default: O0.

    • O0: Disables all optimizations except those required for functionality.

    • O1: Enables common optimizations and automatic operator fusion. This optimization level is experimental and under continuous improvement.

  • dynamic (bool, optional) – Whether to perform dynamic shape compilation. Currently a reserved parameter that has no effect. Default: False.

  • fullgraph (bool, optional) – Whether to capture the entire function into the graph. If False, jit attempts to stay compatible with as much Python syntax in the function as possible. If True, the entire function must be capturable into the graph; if it is not (i.e., it contains unsupported Python syntax), an exception is raised. This currently applies only when capture_mode is ast. Default: False.

  • backend (str, optional) –

    The compilation backend to be used. If this parameter is not set, the framework uses the GE backend by default for Atlas training series products and the ms_backend backend for all others, including Atlas A2 training series products.

    • ms_backend: Adopts the KernelByKernel execution mode.

    • GE: Adopts the Sink execution mode, in which the whole model is sunk to the device for execution. Applicable only to the top cell of the model, and only on the Ascend platform.

  • **options (dict) –

    A dictionary of options to pass to the compilation backend.

    Some options are device-specific; see the table below for details:

    Option Parameters         Hardware Platform Support   Backend Support

    disable_format_transform  GPU                         ms_backend

    exec_order                Ascend                      ms_backend

    ge_options                Ascend                      GE

    infer_boost               Ascend                      ms_backend

    • disable_format_transform (bool, optional): Whether to disable the automatic format transform from NCHW to NHWC. When fp16 network training performance is worse than fp32, disable_format_transform can be set to True to try to improve training performance. Default: False.

    • exec_order (str, optional): Sets the sorting method for operator execution. Currently only two sorting methods are supported: bfs and dfs. Default: bfs.

      • bfs: The default sorting method; breadth-first, with good communication masking and relatively good performance.

      • dfs: An alternative sorting method; depth-first. Its performance is relatively worse than the bfs execution order, but it uses less memory. Trying dfs is recommended in scenarios where other execution orders run out of memory (OOM).

    • ge_options (dict): Sets options for the GE backend. The options are divided into two categories: global and session. This is an experimental prototype that is subject to change and/or deletion. For detailed information, please refer to the Ascend community.

      • global (dict): Set global options.

      • session (dict): Set session options.

    • infer_boost (str, optional): Controls the inference mode. Default: off, which means inference mode is disabled. Valid values:

      • on: Enables inference mode for better inference performance.

      • off: Disables inference mode; the forward pass is used for inference, with poorer performance.
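    As a rough illustration of how the per-backend option matrix above could be sanity-checked before passing options along, the following sketch uses a hypothetical helper (check_options is not part of MindSpore; the lookup table simply mirrors the table above):

    ```python
    # Hypothetical lookup table mirroring the option support matrix above.
    SUPPORTED_OPTIONS = {
        "disable_format_transform": {"platform": "GPU", "backend": "ms_backend"},
        "exec_order": {"platform": "Ascend", "backend": "ms_backend"},
        "ge_options": {"platform": "Ascend", "backend": "GE"},
        "infer_boost": {"platform": "Ascend", "backend": "ms_backend"},
    }

    def check_options(backend, **options):
        """Return option names that the given backend does not support."""
        return [name for name in options
                if SUPPORTED_OPTIONS.get(name, {}).get("backend") != backend]

    # ge_options nests "global" and "session" dictionaries, and is only
    # meaningful with the GE backend, so it is flagged under ms_backend:
    unsupported = check_options("ms_backend",
                                exec_order="dfs",
                                ge_options={"global": {}, "session": {}})
    ```

    Here unsupported would contain only "ge_options", since exec_order is valid for ms_backend.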

Returns

Function. If function is not None, returns a callable that executes the compiled function; if function is None, returns a decorator that, when invoked with a single function argument, produces the same callable as in the non-None case.
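The dual behavior described above (direct call versus decorator factory) follows the standard optional-argument decorator pattern. A minimal pure-Python sketch of that contract (jit_sketch is a toy stand-in, not MindSpore's actual implementation):

```python
from typing import Callable, Optional

def jit_sketch(function: Optional[Callable] = None, **options):
    """Toy stand-in illustrating the return contract of mindspore.jit."""
    def wrap(fn: Callable) -> Callable:
        def compiled(*args, **kwargs):
            # A real implementation would build and run a graph here.
            return fn(*args, **kwargs)
        return compiled

    if function is not None:
        # Called as jit_sketch(function=fn): return the callable directly.
        return wrap(function)
    # Called as @jit_sketch(...): return a decorator expecting the function.
    return wrap

# Both invocation styles yield an equivalent callable:
direct = jit_sketch(function=lambda a, b: a + b)

@jit_sketch(capture_mode="ast")
def add(a, b):
    return a + b
```

With this shape, direct(1, 2) and add(1, 2) both behave like the undecorated function.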

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor
>>> from mindspore import ops
>>> from mindspore import jit
...
>>> x = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
>>> y = Tensor(np.ones([1, 1, 3, 3]).astype(np.float32))
...
>>> # create a callable MindSpore graph by calling jit
>>> def tensor_add(x, y):
...     z = x + y
...     return z
...
>>> tensor_add_graph = jit(function=tensor_add)
>>> out = tensor_add_graph(x, y)
...
>>> # create a callable MindSpore graph through decorator @jit
>>> @jit
... def tensor_add_with_dec(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_with_dec(x, y)
...
>>> # create a callable MindSpore graph and capture the entire function into the graph
>>> @jit(fullgraph=True)
... def tensor_add_fullgraph(x, y):
...     z = x + y
...     return z
...
>>> out = tensor_add_fullgraph(x, y)