mindspore.context
The context of MindSpore is used to configure the current execution environment, including the execution mode, the execution backend, and other feature switches.
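The examples on this page call the module as context; for completeness, they assume an import along the following lines:
>>> from mindspore import context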
- class mindspore.context.ParallelMode[source]
Parallel mode options.
There are five kinds of parallel modes, “STAND_ALONE”, “DATA_PARALLEL”, “HYBRID_PARALLEL”, “SEMI_AUTO_PARALLEL” and “AUTO_PARALLEL”. Default: “STAND_ALONE”.
STAND_ALONE: Only one processor is working.
DATA_PARALLEL: Distributes the data across different processors.
HYBRID_PARALLEL: Achieves data parallelism and model parallelism manually.
SEMI_AUTO_PARALLEL: Achieves data parallelism and model parallelism by setting parallel strategies.
AUTO_PARALLEL: Achieves parallelism automatically.
MODE_LIST: The list of all supported parallel modes.
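For illustration, a minimal sketch of passing one of these constants to set_auto_parallel_context; the choice of DATA_PARALLEL here is only an example value:
>>> from mindspore.context import ParallelMode
>>> context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL)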
- mindspore.context.get_auto_parallel_context(attr_key)[source]
Gets auto parallel context attribute value according to the key.
- Parameters
attr_key (str) – The key of the attribute.
- Returns
Returns the attribute value according to the key.
- Raises
ValueError – If the input key is not an attribute of the auto parallel context.
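A minimal usage sketch; "device_num" is one of the keys documented under set_auto_parallel_context below:
>>> context.get_auto_parallel_context("device_num")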
- mindspore.context.get_context(attr_key)[source]
Gets context attribute value according to the input key.
- Parameters
attr_key (str) – The key of the attribute.
- Returns
Object, the value of the given attribute key.
- Raises
ValueError – If the input key is not an attribute of the context.
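A minimal usage sketch; "device_target" is one of the keys documented under set_context below:
>>> context.get_context("device_target")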
- mindspore.context.get_ps_context(attr_key)[source]
Gets the parameter server training mode context attribute value according to the key.
- Parameters
attr_key (str) – The key of the attribute.
- Returns
Returns the attribute value according to the key.
- Raises
ValueError – If the input key is not an attribute of the parameter server training mode context.
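A minimal usage sketch; "enable_ps" is the key documented under set_ps_context below:
>>> context.get_ps_context("enable_ps")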
- mindspore.context.reset_auto_parallel_context()[source]
Reset auto parallel context attributes to the default values:
device_num: 1.
global_rank: 0.
gradients_mean: False.
gradient_fp32_sync: True.
parallel_mode: ‘stand_alone’.
auto_parallel_search_mode: ‘dynamic_programming’.
parameter_broadcast: False.
strategy_ckpt_load_file: ‘’.
strategy_ckpt_save_file: ‘’.
full_batch: False.
enable_parallel_optimizer: False.
pipeline_stages: 1.
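A minimal sketch of resetting the configuration after it has been changed; the device_num value is only an example:
>>> context.set_auto_parallel_context(device_num=8)
>>> context.reset_auto_parallel_context()
>>> context.get_auto_parallel_context("device_num")  # back to the default value of 1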
- mindspore.context.reset_ps_context()[source]
Reset parameter server training mode context attributes to the default values:
enable_ps: False.
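A minimal sketch of reverting parameter server training mode to its default:
>>> context.set_ps_context(enable_ps=True)
>>> context.reset_ps_context()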
- mindspore.context.set_auto_parallel_context(**kwargs)[source]
Set auto parallel context, which is valid only for Ascend and GPU targets.
Auto parallel context should be configured before the initialization of your network.
Note
Attribute name is required for setting attributes. If a program has tasks with different parallel modes, call mindspore.context.reset_auto_parallel_context() to reset the configuration before setting a new parallel mode for the next task. Setting or changing the parallel mode must be done before creating any Initializer; otherwise, a RuntimeError may be raised when compiling the network. A sketch of this reset pattern follows this note.
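For illustration only, switching between parallel modes for two tasks might look like the following; the concrete modes and device_num are example values:
>>> context.set_auto_parallel_context(parallel_mode="data_parallel")
>>> # ... build and run the first task ...
>>> context.reset_auto_parallel_context()
>>> context.set_auto_parallel_context(parallel_mode="semi_auto_parallel", device_num=8)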
Some configurations are parallel mode specific; see the two groups below for details:
Common: device_num, global_rank, gradients_mean, parallel_mode, all_reduce_fusion_config
AUTO_PARALLEL: gradient_fp32_sync, loss_repeated_mean, auto_parallel_search_mode, strategy_ckpt_load_file, strategy_ckpt_save_file, enable_parallel_optimizer, full_batch, pipeline_stages
- Parameters
device_num (int) – Available device number, the value must be in [1, 4096]. Default: 1.
global_rank (int) – Global rank id, the value must be in [0, 4095]. Default: 0.
gradients_mean (bool) – Whether to perform the mean operator after allreduce of gradients. “stand_alone” does not support gradients_mean. Default: False.
gradient_fp32_sync (bool) – Whether to run allreduce of gradients in fp32. “stand_alone”, “data_parallel” and “hybrid_parallel” do not support gradient_fp32_sync. Default: True.
parallel_mode (str) –
There are five kinds of parallel modes, “stand_alone”, “data_parallel”, “hybrid_parallel”, “semi_auto_parallel” and “auto_parallel”. Default: “stand_alone”.
stand_alone: Only one processor is working.
data_parallel: Distributes the data across different processors.
hybrid_parallel: Achieves data parallelism and model parallelism manually.
semi_auto_parallel: Achieves data parallelism and model parallelism by setting parallel strategies.
auto_parallel: Achieves parallelism automatically.
auto_parallel_search_mode (str) –
There are two kinds of shard strategy search modes, “recursive_programming” and “dynamic_programming”. Default: “dynamic_programming”.
recursive_programming: Recursive programming search mode.
dynamic_programming: Dynamic programming search mode.
parameter_broadcast (bool) – Whether to broadcast parameters before training. Before training, the parameters on device 0 are broadcast to the other devices so that all devices start from the same initial parameter values. Parameter broadcasting differs between parallel modes: in data_parallel mode, all parameters are broadcast except for parameters whose attribute layerwise_parallel is True; in hybrid_parallel, semi_auto_parallel and auto_parallel modes, the segmented parameters do not participate in broadcasting. Default: False.
strategy_ckpt_load_file (str) – The path to load parallel strategy checkpoint. Default: ‘’
strategy_ckpt_save_file (str) – The path to save parallel strategy checkpoint. Default: ‘’
full_batch (bool) – If you load whole batch datasets in auto_parallel mode, this parameter should be set to True. Default: False.
enable_parallel_optimizer (bool) – This is a developing feature, which shards the weight update computation in data parallel training to save time and memory. Currently, auto and semi auto parallel modes support all optimizers on both Ascend and GPU. Data parallel mode only supports Lamb and AdamWeightDecay on Ascend. Default: False.
all_reduce_fusion_config (list) – Set the allreduce fusion strategy by parameter indices. Only ReduceOp.SUM and HCCL_WORLD_GROUP/NCCL_WORLD_GROUP are supported. There is no default value; if it is not set, fusion is disabled.
pipeline_stages (int) – Set the stage information for pipeline parallelism. This indicates how the devices are distributed along the pipeline. The total devices will be divided into 'pipeline_stages' stages. Currently this can only be used when the semi_auto_parallel mode is enabled. Default: 1.
- Raises
ValueError – If the input key is not an attribute of the auto parallel context.
Examples
>>> context.set_auto_parallel_context(device_num=8)
>>> context.set_auto_parallel_context(global_rank=0)
>>> context.set_auto_parallel_context(gradients_mean=True)
>>> context.set_auto_parallel_context(gradient_fp32_sync=False)
>>> context.set_auto_parallel_context(parallel_mode="auto_parallel")
>>> context.set_auto_parallel_context(auto_parallel_search_mode="dynamic_programming")
>>> context.set_auto_parallel_context(parameter_broadcast=False)
>>> context.set_auto_parallel_context(strategy_ckpt_load_file="./strategy_stage1.ckpt")
>>> context.set_auto_parallel_context(strategy_ckpt_save_file="./strategy_stage1.ckpt")
>>> context.set_auto_parallel_context(full_batch=True)
>>> context.set_auto_parallel_context(enable_parallel_optimizer=False)
>>> context.set_auto_parallel_context(all_reduce_fusion_config=[8, 160])
>>> context.set_auto_parallel_context(pipeline_stages=2)
- mindspore.context.set_context(**kwargs)[source]
Sets context for running environment.
Context should be configured before running your program. If there is no configuration, the “Ascend” device target will be used by default. GRAPH_MODE or PYNATIVE_MODE can be set by the mode attribute, and both modes support all backends. The default mode is PYNATIVE_MODE.
When the save_graphs attribute is set to True, the save_graphs_path attribute is used to set the storage path of the intermediate compilation graphs. By default, the graphs are saved in the current directory. For other configurations and arguments, please refer to the corresponding module description; the configuration is optional and can be enabled when needed.
Note
Attribute name is required for setting attributes. Changing the mode after the network has been initialized is not recommended, because the implementations of some operations differ between graph mode and pynative mode. Default: PYNATIVE_MODE.
Some configurations are device specific; see the groups below for details:
Common (CPU/GPU/Ascend): check_bprop, device_id, device_target, enable_sparse, max_call_depth, mode, reserve_class_name_in_scope, save_graphs, save_graphs_path
Ascend: print_file_path, enable_dump, save_dump_path, enable_graph_kernel, enable_reduce_precision, enable_profiling, profiling_options, variable_memory_max_size
GPU: max_device_memory, enable_graph_kernel
- Parameters
mode (int) – Running in GRAPH_MODE(0) or PYNATIVE_MODE(1). Default: PYNATIVE_MODE(1).
device_target (str) – The target device to run, support “Ascend”, “GPU”, and “CPU”. Default: “Ascend”.
device_id (int) – ID of the target device, the value must be in [0, device_num_per_host-1], while device_num_per_host should be no more than 4096. Default: 0.
save_graphs (bool) – Whether to save graphs. Default: False.
save_graphs_path (str) – Path to save graphs. Default: “.”
enable_graph_kernel (bool) – Whether to enable composition of basic primitives. These primitives would be compiled into a fused kernel automatically. Default: False.
reserve_class_name_in_scope (bool) – Whether to save the network class name in the scope. Default: True.
enable_reduce_precision (bool) – Whether to enable precision reduction. Default: True.
enable_dump (bool) – Whether to enable dump. Default: False.
save_dump_path (str) – When the program is executed on Ascend, operators can dump data in this path. The root dump path is configured in /home/HwHiAiUser/ide_daemon/ide_daemon.cfg. So the real dump path is “{configured root dump path}/{save_dump_path}”. Default: “.”.
variable_memory_max_size (str) – Set the maximum size of the variable memory. Default: “0GB”.
enable_profiling (bool) – Whether to enable profiling. Default: False.
profiling_options (str) –
Set the profiling collection options; operators can profile data here. The supported profiling collection options are listed below, and multiple kinds of data can be collected.
training_trace: collect iterative trajectory data, that is, the training task and software information of the AI software stack, to achieve performance analysis of the training task, focusing on data augmentation, forward and backward computation, gradient aggregation update and other related data.
task_trace: collect task trajectory data, that is, the hardware information of the HWTS/AICore of the Ascend 910 processor, and analyze the information of beginning and ending of the task.
op_trace: collect single operator performance data.
Profiling can use training_trace, task_trace, or a combination of the two separated by a colon; op_trace collects single-operator data, can only be used on its own, and cannot be combined with training_trace or task_trace. Default: “training_trace”. A sketch of a combined setting follows the Examples below.
check_bprop (bool) – Whether to check bprop. Default: False.
max_device_memory (str) – Sets the maximum memory available for devices. Currently, it is only supported on GPU. The format is “xxGB”. Default: “1024GB”.
print_file_path (str) – The path for saving print data. If this parameter is set, print data is saved to a file by default, and printing to the screen is turned off. If the file already exists, a timestamp suffix is added to the file. Default: ‘’.
enable_sparse (bool) – Whether to enable sparsity feature. Default: False.
max_call_depth (int) – Specify the maximum depth of function calls. Default: 1000.
- Raises
ValueError – If the input key is not an attribute of the context.
Examples
>>> context.set_context(mode=context.GRAPH_MODE)
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> context.set_context(device_target="Ascend")
>>> context.set_context(device_id=0)
>>> context.set_context(save_graphs=True, save_graphs_path="./model.ms")
>>> context.set_context(enable_reduce_precision=True)
>>> context.set_context(enable_dump=True, save_dump_path=".")
>>> context.set_context(reserve_class_name_in_scope=True)
>>> context.set_context(variable_memory_max_size="6GB")
>>> context.set_context(mode=context.GRAPH_MODE,
...                     device_target="Ascend", device_id=0, save_graphs=True,
...                     save_graphs_path="/mindspore")
>>> context.set_context(enable_profiling=True, profiling_options="training_trace")
>>> context.set_context(max_device_memory="3.5GB")
>>> context.set_context(print_file_path="print.pb")
>>> context.set_context(max_call_depth=80)
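As described for profiling_options above, training_trace and task_trace can be combined with a colon separator. A hedged sketch of such a combined setting (not taken from the official examples):
>>> context.set_context(enable_profiling=True, profiling_options="training_trace:task_trace")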
- mindspore.context.set_ps_context(**kwargs)[source]
Set parameter server training mode context.
Note
Some other environment variables should also be set for parameter server training mode. These environment variables are listed below:
MS_SERVER_NUM  # Server number
MS_WORKER_NUM  # Worker number
MS_SCHED_HOST  # Scheduler IP address
MS_SCHED_PORT  # Scheduler port
MS_ROLE        # The role of this process:
               # MS_SCHED represents the scheduler,
               # MS_WORKER represents the worker,
               # MS_PSERVER represents the Server
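For illustration only, these variables could be exported from Python with os.environ before MindSpore is initialized; the values below are placeholders, not recommended settings:
>>> import os
>>> os.environ["MS_SERVER_NUM"] = "1"
>>> os.environ["MS_WORKER_NUM"] = "1"
>>> os.environ["MS_SCHED_HOST"] = "127.0.0.1"
>>> os.environ["MS_SCHED_PORT"] = "8081"
>>> os.environ["MS_ROLE"] = "MS_WORKER"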
- Parameters
enable_ps (bool) – Whether to enable parameter server training mode. The environment variables take effect only after enable_ps is set to True. Default: False.
- Raises
ValueError – If the input key is not an attribute of the parameter server training mode context.
Examples
>>> context.set_ps_context(enable_ps=True)