mindspore_lite.Context

class mindspore_lite.Context(thread_num=None, inter_op_parallel_num=None, thread_affinity_mode=None, thread_affinity_core_list=None, enable_parallel=False)[source]

Context is used to store the environment configuration required during execution.

The context should be configured before running the program. If it is not configured, it will be automatically set according to the device target by default.

Note

If thread_affinity_core_list and thread_affinity_mode are set at the same time, thread_affinity_core_list takes effect and thread_affinity_mode does not. A parameter left at its default value of None is treated as not set.

Parameters
  • thread_num (int, optional) – Set the number of threads at runtime. Default: None.

  • inter_op_parallel_num (int, optional) – Set the number of operators that can run in parallel at runtime. Default: None.

  • thread_affinity_mode (int, optional) –

    Set the thread affinity mode for binding runtime threads to CPU cores. Default: None.

    • 0: no affinities.

    • 1: big cores first.

    • 2: little cores first.

  • thread_affinity_core_list (list[int], optional) – Set the list of CPU cores to which runtime threads are bound. Default: None.

  • enable_parallel (bool, optional) – Whether to perform model inference or training in parallel. Default: False.

Raises
  • TypeError – thread_num is neither an int nor None.

  • TypeError – inter_op_parallel_num is neither an int nor None.

  • TypeError – thread_affinity_mode is neither an int nor None.

  • TypeError – thread_affinity_core_list is neither a list nor None.

  • TypeError – thread_affinity_core_list is a list, but its elements are not all int.

  • TypeError – enable_parallel is not a bool.

  • ValueError – thread_num is less than 0.

  • ValueError – inter_op_parallel_num is less than 0.

Examples

>>> import mindspore_lite as mslite
>>> context = mslite.Context(thread_num=1, inter_op_parallel_num=1, thread_affinity_mode=1,
...                          enable_parallel=False)
>>> print(context)
thread_num: 1,
inter_op_parallel_num: 1,
thread_affinity_mode: 1,
thread_affinity_core_list: [],
enable_parallel: False,
device_list: .
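
As noted above, when thread_affinity_core_list and thread_affinity_mode are both given, the core list takes effect and the mode is ignored. A minimal sketch of binding runtime threads to explicit cores (the core indices are illustrative and depend on the host CPU):

>>> import mindspore_lite as mslite
>>> # Bind the runtime threads to cores 0 and 1; thread_affinity_mode is
>>> # ignored because thread_affinity_core_list is also provided.
>>> context = mslite.Context(thread_num=2, thread_affinity_mode=1,
...                          thread_affinity_core_list=[0, 1])
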
append_device_info(device_info)[source]

Append one user-defined device info to the context.

Note

After GPU device info is added, CPU device info must also be added before the context is used, because when an operator is not supported on the GPU, the system falls back to the CPU and needs a context that contains CPU device info.

Likewise, after Ascend device info is added, CPU device info must also be added before the context is used, because when an operator is not supported on Ascend, the system falls back to the CPU and needs a context that contains CPU device info.

Parameters

device_info (DeviceInfo) – the device info instance to be appended to the context.

Raises

TypeError – device_info is not a DeviceInfo.

Examples

>>> import mindspore_lite as mslite
>>> context = mslite.Context()
>>> context.append_device_info(mslite.CPUDeviceInfo())
>>> print(context)
thread_num: 0,
inter_op_parallel_num: 0,
thread_affinity_mode: 0,
thread_affinity_core_list: [],
enable_parallel: False,
device_list: 0, .
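
The fallback described in the note above can be set up as follows. This is a minimal sketch assuming a GPU-enabled build that provides mslite.GPUDeviceInfo; CPU device info is appended after the GPU device info so that operators the GPU does not support can fall back to the CPU.

>>> import mindspore_lite as mslite
>>> context = mslite.Context()
>>> # GPU first, then CPU as the fallback target for unsupported operators.
>>> context.append_device_info(mslite.GPUDeviceInfo())
>>> context.append_device_info(mslite.CPUDeviceInfo())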