mindspore.context

The MindSpore context is used to configure the current execution environment, including the execution mode, the execution backend, and other feature switches.
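
As a minimal illustration (assuming MindSpore is installed and a CPU backend is available), a typical program configures the context once before building the network:

>>> from mindspore import context
>>> context.set_context(mode=context.GRAPH_MODE, device_target="CPU")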

class mindspore.context.ParallelMode[source]

Parallel mode options.

There are five kinds of parallel modes, “STAND_ALONE”, “DATA_PARALLEL”, “HYBRID_PARALLEL”, “SEMI_AUTO_PARALLEL” and “AUTO_PARALLEL”. Default: “STAND_ALONE”.

  • STAND_ALONE: Only one processor is working.

  • DATA_PARALLEL: Distributes the data across different processors.

  • HYBRID_PARALLEL: Achieves data parallelism and model parallelism manually.

  • SEMI_AUTO_PARALLEL: Achieves data parallelism and model parallelism by setting parallel strategies.

  • AUTO_PARALLEL: Achieves parallelism automatically.

MODE_LIST: The list of all supported parallel modes.
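
In practice the members behave as the corresponding lower-case mode strings, so they can be passed wherever a parallel mode string is expected; a brief sketch (assuming the usual import path):

>>> from mindspore.context import ParallelMode
>>> context.set_auto_parallel_context(parallel_mode=ParallelMode.DATA_PARALLEL)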

mindspore.context.get_auto_parallel_context(attr_key)[source]

Get auto parallel context attribute value according to the key.

Parameters

attr_key (str) – The key of the attribute.

Returns

The attribute value corresponding to the key.

Raises

ValueError – If the input key is not an attribute in the auto parallel context.
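
Examples

A minimal sketch, assuming the attribute was previously set through set_auto_parallel_context:

>>> context.set_auto_parallel_context(device_num=8)
>>> context.get_auto_parallel_context("device_num")
8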

mindspore.context.get_context(attr_key)[source]

Get the context attribute value according to the input key. If some attributes are not set, they will be obtained automatically.

Parameters

attr_key (str) – The key of the attribute.

Returns

Object, the value of the given attribute key.

Raises

ValueError – If the input key is not an attribute in the context.
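
Examples

A minimal sketch of reading back a previously set attribute:

>>> context.set_context(device_target="CPU")
>>> context.get_context("device_target")
'CPU'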

mindspore.context.get_ps_context(attr_key)[source]

Get parameter server training mode context attribute value according to the key.

Parameters

attr_key (str) – The key of the attribute:

  • enable_ps (bool): Whether to enable parameter server training mode.

Returns

The attribute value corresponding to the key.

Raises

ValueError – If the input key is not an attribute in the parameter server training mode context.
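
Examples

A minimal sketch; if parameter server mode has not been enabled, the default value is returned:

>>> context.get_ps_context("enable_ps")
False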

mindspore.context.reset_auto_parallel_context()[source]

Reset auto parallel context attributes to the default values:

  • device_num: 1.

  • global_rank: 0.

  • gradients_mean: False.

  • gradient_fp32_sync: True.

  • parallel_mode: ‘stand_alone’.

  • auto_parallel_search_mode: ‘dynamic_programming’.

  • parameter_broadcast: False.

  • strategy_ckpt_load_file: ‘’.

  • strategy_ckpt_save_file: ‘’.

  • full_batch: False.

  • enable_parallel_optimizer: False.

  • pipeline_stages: 1.
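
Examples

A minimal sketch of the reset behavior; the value read back afterwards is the default listed above:

>>> context.set_auto_parallel_context(device_num=8, parallel_mode="data_parallel")
>>> context.reset_auto_parallel_context()
>>> context.get_auto_parallel_context("device_num")
1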

mindspore.context.reset_ps_context()[source]

Reset parameter server training mode context attributes to the default values:

  • enable_ps: False.
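
Examples

A minimal sketch of the reset behavior:

>>> context.set_ps_context(enable_ps=True)
>>> context.reset_ps_context()
>>> context.get_ps_context("enable_ps")
False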

mindspore.context.set_auto_parallel_context(**kwargs)[source]

Set auto parallel context, which is valid only for Ascend and GPU targets.

Auto parallel context should be configured before the initialization of your network.

Note

An attribute name is required when setting an attribute. If a program has tasks with different parallel modes, mindspore.context.reset_auto_parallel_context() needs to be called to reset the configuration before setting the new parallel mode for the next task. Setting or changing the parallel mode must be done before creating any Initializer; otherwise, a RuntimeError may be raised when compiling the network.
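
For illustration, switching parallel modes between tasks might look like the following sketch (the task code itself is omitted):

>>> context.set_auto_parallel_context(parallel_mode="data_parallel")
>>> # ... build and run the first task ...
>>> context.reset_auto_parallel_context()
>>> context.set_auto_parallel_context(parallel_mode="semi_auto_parallel", device_num=8)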

Some configurations are parallel mode specific, see the table below for details:

Common:

  • device_num

  • global_rank

  • gradients_mean

  • parallel_mode

  • all_reduce_fusion_config

  • enable_parallel_optimizer

AUTO_PARALLEL:

  • gradient_fp32_sync

  • loss_repeated_mean

  • auto_parallel_search_mode

  • strategy_ckpt_load_file

  • strategy_ckpt_save_file

  • full_batch

  • pipeline_stages

  • grad_accumulation_step

Parameters
  • device_num (int) – Available device number, the value must be in [1, 4096]. Default: 1.

  • global_rank (int) – Global rank id, the value must be in [0, 4095]. Default: 0.

  • gradients_mean (bool) – Whether to perform the mean operator after allreduce of gradients. “stand_alone” does not support gradients_mean. Default: False.

  • gradient_fp32_sync (bool) – Run allreduce of gradients in fp32. “stand_alone”, “data_parallel” and “hybrid_parallel” do not support gradient_fp32_sync. Default: True.

  • parallel_mode (str) –

    There are five kinds of parallel modes, “stand_alone”, “data_parallel”, “hybrid_parallel”, “semi_auto_parallel” and “auto_parallel”. Default: “stand_alone”.

    • stand_alone: Only one processor is working.

    • data_parallel: Distributes the data across different processors.

    • hybrid_parallel: Achieves data parallelism and model parallelism manually.

    • semi_auto_parallel: Achieves data parallelism and model parallelism by setting parallel strategies.

    • auto_parallel: Achieves parallelism automatically.

  • auto_parallel_search_mode (str) –

    There are two kinds of shard strategy search modes, “recursive_programming” and “dynamic_programming”. Default: “dynamic_programming”.

    • recursive_programming: Recursive programming search mode.

    • dynamic_programming: Dynamic programming search mode.

  • parameter_broadcast (bool) – Whether to broadcast parameters before training. Before training, in order to have the same network initialization parameter values on all devices, the parameters on device 0 are broadcast to the other devices. Parameter broadcasting differs between parallel modes: in data_parallel mode, all parameters are broadcast except those whose attribute layerwise_parallel is True; in hybrid_parallel, semi_auto_parallel and auto_parallel modes, the segmented parameters do not participate in broadcasting. Default: False.

  • strategy_ckpt_load_file (str) – The path to load parallel strategy checkpoint. Default: ‘’

  • strategy_ckpt_save_file (str) – The path to save parallel strategy checkpoint. Default: ‘’

  • full_batch (bool) – If you load whole batch datasets in auto_parallel mode, this parameter should be set to True. Default: False.

  • enable_parallel_optimizer (bool) – This is a developing feature, which shards the weight update computation for data parallel training to save time and memory. Currently, the auto and semi auto parallel modes support all optimizers on both Ascend and GPU. The data parallel mode only supports Lamb and AdamWeightDecay on Ascend. Default: False.

  • all_reduce_fusion_config (list) – Set the allreduce fusion strategy by parameter indices. Only ReduceOp.SUM and HCCL_WORLD_GROUP/NCCL_WORLD_GROUP are supported. There is no default value; if it is not set, fusion is disabled.

  • pipeline_stages (int) – Set the stage information for pipeline parallelism. This indicates how the devices are distributed along the pipeline. The total number of devices will be divided into pipeline_stages stages. This can currently only be used when the semi_auto_parallel mode is enabled. Default: 1.

  • grad_accumulation_step (int) – Set the accumulation steps of gradients in auto and semi auto parallel mode. This should be a positive int. Default: 1.

Raises

ValueError – If the input key is not an attribute in the auto parallel context.

Examples

>>> context.set_auto_parallel_context(device_num=8)
>>> context.set_auto_parallel_context(global_rank=0)
>>> context.set_auto_parallel_context(gradients_mean=True)
>>> context.set_auto_parallel_context(gradient_fp32_sync=False)
>>> context.set_auto_parallel_context(parallel_mode="auto_parallel")
>>> context.set_auto_parallel_context(auto_parallel_search_mode="dynamic_programming")
>>> context.set_auto_parallel_context(parameter_broadcast=False)
>>> context.set_auto_parallel_context(strategy_ckpt_load_file="./strategy_stage1.ckpt")
>>> context.set_auto_parallel_context(strategy_ckpt_save_file="./strategy_stage1.ckpt")
>>> context.set_auto_parallel_context(full_batch=True)
>>> context.set_auto_parallel_context(enable_parallel_optimizer=False)
>>> context.set_auto_parallel_context(all_reduce_fusion_config=[8, 160])
>>> context.set_auto_parallel_context(pipeline_stages=2)

mindspore.context.set_context(**kwargs)[source]

Set context for running environment.

Context should be configured before running your program. If there is no configuration, it will be acquired automatically according to the device target by default. GRAPH_MODE or PYNATIVE_MODE can be set via the mode attribute, and both modes support all backends. The default mode is GRAPH_MODE.

When the save_graphs attribute is set to True, the save_graphs_path attribute is used to set the storage path for intermediate compilation graphs. By default, the graphs are saved in the current directory. For other configurations and arguments, please refer to the corresponding module description; the configuration is optional and can be enabled when needed.

Note

An attribute name is required when setting an attribute. Changing the mode after the network has been initialized is not recommended, because the implementations of some operations differ between graph mode and pynative mode. Default: GRAPH_MODE.

Some configurations are device specific, see the table below for details:

Common (CPU/GPU/Ascend):

  • check_bprop

  • device_id

  • device_target

  • enable_sparse

  • max_call_depth

  • mode

  • reserve_class_name_in_scope

  • save_graphs

  • save_graphs_path

  • env_config_path

  • grad_for_scalar

  • save_compile_cache

  • load_compile_cache

Ascend:

  • print_file_path

  • enable_dump

  • save_dump_path

  • enable_graph_kernel

  • enable_reduce_precision

  • enable_profiling

  • profiling_options

  • variable_memory_max_size

  • auto_tune_mode

  • graph_kernel_flags

GPU:

  • max_device_memory

  • enable_graph_kernel

  • graph_kernel_flags

Parameters
  • mode (int) – Running in GRAPH_MODE(0) or PYNATIVE_MODE(1). Default: GRAPH_MODE(0).

  • precompile_only (bool) – Whether to only precompile the network. If set, the network will only be compiled and not executed. Default: False.

  • device_target (str) – The target device to run, support “Ascend”, “GPU”, and “CPU”.

  • device_id (int) – ID of the target device, the value must be in [0, device_num_per_host-1], while device_num_per_host should be no more than 4096. Default: 0.

  • save_graphs (bool) – Whether to save graphs. Default: False.

  • save_graphs_path (str) –

    Path to save graphs. Default: “.”.

    If the program is executed in parallel mode, save_graphs_path should consist of the path and the current device id, so that no file-writing conflicts occur when different processes try to create files in the same directory. For example, the device id can be obtained by device_id = os.getenv("DEVICE_ID") and save_graphs_path can be set by context.set_context(save_graphs_path="path/to/ir/files" + device_id).

  • enable_graph_kernel (bool) – Whether to enable graph kernel fusion to optimize network execution performance. Default: False.

  • graph_kernel_flags (str) –

    Optimization options for graph kernel fusion, intended for experienced users only. For example, context.set_context(graph_kernel_flags="--opt_level=2 --dump_as_text"). Some general options:

    • opt_level: optimization level between 0 and 3. Default: 2. Graph kernel fusion can be enabled equivalently by setting opt_level greater than 0.

    • dump_as_text: dump detailed info as text files. Default: false.

    More options can be found in the implementation code. These options can also be set by the environment variable MS_GRAPH_KERNEL_FLAGS, without modifying the network source code. For example, export MS_GRAPH_KERNEL_FLAGS="--opt_level=2 --dump_as_text".

  • reserve_class_name_in_scope (bool) – Whether to save the network class name in the scope. Default: True. Each node has a scope; the scope of a subnode is the name of its parent node. If reserve_class_name_in_scope is set to True, the class name will be saved after the keyword ‘net-’ in the scope, for example: Default/net-Net1/net-Net2 (reserve_class_name_in_scope=True) versus Default/net/net (reserve_class_name_in_scope=False).

  • enable_reduce_precision (bool) – Whether to enable precision reduction. Default: True.

  • enable_dump (bool) – Whether to enable dump. Default: False.

  • save_dump_path (str) – When the program is executed on Ascend, operators can dump data in this path. The root dump path is configured in /home/HwHiAiUser/ide_daemon/ide_daemon.cfg. So the real dump path is “{configured root dump path}/{save_dump_path}”. Default: “.”.

  • variable_memory_max_size (str) – Set the maximum size of variable memory. Default: “0GB”.

  • enable_profiling (bool) – Whether to enable profiling. Default: False.

  • profiling_options (str) –

    Set the profiling collection options; operators can dump profiling data with these. The supported profiling collection options are as follows, and multiple kinds of data can be collected.

    • output: the path for saving the profiling collection result file. The directory specified by this parameter needs to be created in advance on the training environment (container or host side), and the running user configured during installation must have read and write permissions on it. Both absolute and relative paths are supported (relative to the current path when executing the command line). An absolute path starts with ‘/’, for example: /home/data/output. A relative path starts directly with the directory name, for example: output.

    • training_trace: collect iteration trajectory data, that is, the training task and software information of the AI software stack, to achieve performance analysis of the training task, focusing on data augmentation, forward and backward computation, gradient aggregation update and other related data. The value is on/off.

    • task_trace: collect task trajectory data, that is, the hardware information of the HWTS/AICore of the Ascend 910 processor, and analyze the information on the beginning and end of tasks. The value is on/off.

    • aicpu: collect profiling data enhanced by aicpu data. The value is on/off.

    • fp_point: specify the start position of the forward operator of the training network iteration trajectory, which is used to record the start timestamp of the forward computation. The configuration value is the name of the first specified forward operator. When the value is empty, the system will automatically obtain the forward operator name.

    • bp_point: specify the end position of the backward operator of the training network iteration trajectory, which is used to record the end timestamp of the backward computation. The configuration value is the name of the specified last backward operator. When the value is empty, the system will automatically obtain the backward operator name.

    • aic_metrics: the values are as follows:

      • ArithmeticUtilization: percentage statistics of various calculation indicators.

      • PipeUtilization: the time-consuming ratio of the calculation unit and handling unit; this is the default value.

      • Memory: percentage of external memory read and write instructions.

      • MemoryL0: percentage of internal memory read and write instructions.

      • ResourceConflictRatio: proportion of pipeline queue instructions.

    The profiling_options value looks like '{"output":"/home/data/output","training_trace":"on"}'.

  • check_bprop (bool) – Whether to check back propagation nodes. The check ensures that the shape and dtype of the back propagation node outputs are the same as the input parameters. Default: False.

  • max_device_memory (str) – Sets the maximum memory available for devices. Currently, it is only supported on GPU. The format is “xxGB”. Default: “1024GB”.

  • print_file_path (str) – The path for saving print data. If this parameter is set, print data is saved to a file by default, and printing to the screen is turned off. If the file already exists, a timestamp suffix will be added to the file. Default: ‘’.

  • enable_sparse (bool) – Whether to enable the sparsity feature. Default: False. For details of sparsity and sparse tensor, please check https://www.mindspore.cn/docs/programming_guide/zh-CN/r1.3/tensor.html.

  • max_call_depth (int) – Specify the maximum depth of a function call. This must be a positive integer. Default: 1000.

  • env_config_path (str) – Config path for DFX.

  • auto_tune_mode (str) – The mode of auto tune when building ops, used to get the best tiling performance. The value must be in [‘RL’, ‘GA’, ‘RL,GA’]. Default: NO_TUNE.

    • RL: rl_tune (Reinforcement Learning tune).

    • GA: ga_tune (Genetic Algorithm tune).

    • RL,GA: rl_tune/ga_tune (automatic selection).

  • grad_for_scalar (bool) – Whether to get gradient for scalar. If set, the gradient of scalar input parameter can be calculated. Now, only part of the scalar operators support this calculation. Default: False.

  • save_compile_cache (bool) – Whether to cache the graph compiled by the frontend. Default: False. This is an experimental prototype that is subject to change and/or deletion.

  • load_compile_cache (bool) – Whether to use the cache of the graph compiled by the frontend. When it is true, graph compilation will skip the frontend compilation process. This means you should make sure the network has not been changed since the last execution; currently, automatic checking of network changes is not supported. Default: False. This is an experimental prototype that is subject to change and/or deletion.

Raises

ValueError – If the input key is not an attribute in the context.

Examples

>>> context.set_context(mode=context.GRAPH_MODE)
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> context.set_context(device_target="Ascend")
>>> context.set_context(device_id=0)
>>> context.set_context(save_graphs=True, save_graphs_path="./model.ms")
>>> context.set_context(enable_reduce_precision=True)
>>> context.set_context(enable_dump=True, save_dump_path=".")
>>> context.set_context(reserve_class_name_in_scope=True)
>>> context.set_context(variable_memory_max_size="6GB")
>>> context.set_context(mode=context.GRAPH_MODE,
...                     device_target="Ascend",device_id=0, save_graphs=True,
...                     save_graphs_path="/mindspore")
>>> context.set_context(enable_profiling=True,
...                     profiling_options='{"output":"/home/data/output","training_trace":"on"}')
>>> context.set_context(max_device_memory="3.5GB")
>>> context.set_context(print_file_path="print.pb")
>>> context.set_context(max_call_depth=80)
>>> context.set_context(env_config_path="./env_config.json")

mindspore.context.set_ps_context(**kwargs)[source]

Set parameter server training mode context.

Note

Some other environment variables should also be set for parameter server training mode. These environment variables are listed below:

  • MS_SERVER_NUM: Server number

  • MS_WORKER_NUM: Worker number

  • MS_SCHED_HOST: Scheduler IP address

  • MS_SCHED_PORT: Scheduler port

  • MS_ROLE: The role of this process:

    • MS_SCHED: represents the scheduler,

    • MS_WORKER: represents the worker,

    • MS_PSERVER: represents the Server
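
For illustration only: these variables are normally exported in the shell before launching each process, but a minimal Python sketch (with placeholder addresses and counts) could set them before MindSpore initialization:

>>> import os
>>> os.environ['MS_SERVER_NUM'] = '1'          # placeholder server count
>>> os.environ['MS_WORKER_NUM'] = '1'          # placeholder worker count
>>> os.environ['MS_SCHED_HOST'] = '127.0.0.1'  # placeholder scheduler IP
>>> os.environ['MS_SCHED_PORT'] = '8081'       # placeholder scheduler port
>>> os.environ['MS_ROLE'] = 'MS_WORKER'        # this process acts as a worker
>>> context.set_ps_context(enable_ps=True)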

Parameters

enable_ps (bool) – Whether to enable parameter server training mode. The environment variables listed above take effect only after enable_ps is set to True. Default: False.

Raises

ValueError – If the input key is not an attribute in the parameter server training mode context.

Examples

>>> context.set_ps_context(enable_ps=True)