mindspore_lite.Context
- class mindspore_lite.Context(thread_num=None, inter_op_parallel_num=None, thread_affinity_mode=None, thread_affinity_core_list=None, enable_parallel=False)[source]
Context is used to transfer environment variables during execution.
The context should be configured before running the program. If it is not configured, it is set automatically according to the device target by default.
Note
If thread_affinity_core_list and thread_affinity_mode are both set in one context, thread_affinity_core_list takes effect and thread_affinity_mode is ignored.
- Parameters
thread_num (int, optional) – The number of threads at runtime. thread_num cannot be less than inter_op_parallel_num . Setting thread_num to 0 means the thread number is adjusted automatically based on computer performance and core count. Default: None, which is equivalent to 0.
inter_op_parallel_num (int, optional) – The number of operators that run in parallel at runtime. inter_op_parallel_num cannot be greater than thread_num . Setting inter_op_parallel_num to 0 means the value is adjusted automatically based on computer performance and core count. Default: None, which is equivalent to 0.
thread_affinity_mode (int, optional) –
Set the CPU/GPU/NPU core-binding policy at runtime. The following thread_affinity_mode values are supported. Default: None, which is equivalent to 0.
0: no core binding.
1: bind big cores first.
2: bind middle cores first.
thread_affinity_core_list (list[int], optional) – The list of CPU/GPU/NPU cores to bind at runtime. For example, [0,1] on a CPU device binds CPU core 0 and CPU core 1. Default: None, which is equivalent to [].
enable_parallel (bool, optional) – Whether to perform model inference or training in parallel. Default: False.
- Raises
TypeError – thread_num is neither an int nor None.
TypeError – inter_op_parallel_num is neither an int nor None.
TypeError – thread_affinity_mode is neither an int nor None.
TypeError – thread_affinity_core_list is neither a list nor None.
TypeError – thread_affinity_core_list is a list, but one of its elements is not an int.
TypeError – enable_parallel is not a bool.
ValueError – thread_num is less than 0.
ValueError – inter_op_parallel_num is less than 0.
Examples
>>> import mindspore_lite as mslite
>>> context = mslite.Context(thread_num=1, inter_op_parallel_num=1, thread_affinity_mode=1,
...                          enable_parallel=False)
>>> print(context)
thread_num: 1, inter_op_parallel_num: 1, thread_affinity_mode: 1, thread_affinity_core_list: [], enable_parallel: False, device_list: .
- append_device_info(device_info)[source]
Append one user-defined device info to the context.
Note
After GPU device info is added, CPU device info must also be added before the context is used. When an operator is not supported on the GPU, the system checks whether the CPU supports it and falls back to the CPU, which requires the context to contain CPU device info.
After Ascend device info is added, users can choose to also add CPU device info before using the context when the input format of the original model differs from that of the model generated by Converter. In this case, the model generated by Converter on an Ascend device contains a 'Transpose' node, which currently must be executed on the CPU device, so the context needs CPU device info.
- Parameters
device_info (DeviceInfo) – The instance of device info to append.
- Raises
TypeError – device_info is not a DeviceInfo.
Examples
>>> import mindspore_lite as mslite
>>> context = mslite.Context()
>>> context.append_device_info(mslite.CPUDeviceInfo())
>>> print(context)
thread_num: 0, inter_op_parallel_num: 0, thread_affinity_mode: 0, thread_affinity_core_list: [], enable_parallel: False, device_list: 0, .