Environment Variables
MindSpore environment variables are as follows:
Environment Variable | Module | Function | Type | Value Range | Configuration Relationship | Mandatory or Not | Default Value
---|---|---|---|---|---|---|---
MS_BUILD_PROCESS_NUM | MindSpore | Specifies the number of parallel operator build processes during Ascend backend compilation. | Integer | The number of parallel operator build processes ranges from 1 to 24. | None | Optional (only Ascend backend) | None
MS_COMPILER_CACHE_PATH | MindSpore | Specifies the MindSpore compile cache directory, where graph and operator cache files are saved. | String | File path, which can be a relative or an absolute path. | None | Optional | None
MS_COMPILER_CACHE_ENABLE | MindSpore | Specifies whether to save or load the cache of the graph compiled by the front-end. The function is the same as `enable_compile_cache` in the MindSpore context. | Integer | 0: disable the compile cache; 1: enable the compile cache | This variable is used together with MS_COMPILER_CACHE_PATH. | Optional | None
MS_COMPILER_OP_LEVEL | MindSpore | Generates the TBE instruction mapping file during Ascend backend compilation. | Integer | The compiler op level should be one of [0, 1, 2, 3, 4]. | None | Optional (only Ascend backend) | None
MS_DEV_DISABLE_PREBUILD | MindSpore | Turns off operator prebuild during Ascend backend compilation. Prebuild may fix certain operator attributes in advance, which affects operator fusion. | Boolean | true: turn off prebuild; false: enable prebuild | None | Optional (only Ascend backend) | None
MS_GRAPH_KERNEL_FLAGS | MindSpore | Controls graph kernel fusion: enables or disables the fusion, fine-tunes individual optimizations within it, and dumps the fusion process, which helps with problem location and performance tuning. | String | Refer to the value setting of `graph_kernel_flags` in `mindspore/context.py`. | None | Optional | None
RANK_TABLE_FILE | MindSpore | Specifies the path of the rank table file, which describes the Ascend devices available for distributed training. | String | File path, which can be a relative or an absolute path. | This variable is used together with RANK_SIZE. | Optional (when the Ascend AI Processor is used, specified by the user when a distributed case is executed) | None
RANK_SIZE | MindSpore | Specifies the number of Ascend AI Processors to be called during deep learning. | Integer | The number of Ascend AI Processors to be called ranges from 1 to 8. | This variable is used together with RANK_TABLE_FILE. | Optional (when the Ascend AI Processor is used, specified by the user when a distributed case is executed) | None
RANK_ID | MindSpore | Specifies the logical ID of the Ascend AI Processor called during deep learning. | Integer | The value ranges from 0 to 7; when multiple servers run concurrently, RANK_ID distinguishes the processors across servers. | None | Optional | None
GLOG_v | MindSpore | For details about the function and usage, see GLOG_v. | Integer | 0: DEBUG; 1: INFO; 2: WARNING; 3: ERROR | None | Optional | 2
GLOG_logtostderr | MindSpore | For details about the function and usage, see GLOG_logtostderr. | Integer | 1: logs are output to the screen; 0: logs are output to a file | This variable is used together with GLOG_log_dir. | Optional | 1
GLOG_log_dir | MindSpore | For details about the function and usage, see GLOG_log_dir. | String | File path, which can be a relative or an absolute path. | This variable is used together with GLOG_logtostderr. | Optional | None
GLOG_stderrthreshold | MindSpore | For details about the function and usage, see GLOG_stderrthreshold. | Integer | 0: DEBUG; 1: INFO; 2: WARNING; 3: ERROR | None | Optional | 2
MS_SUBMODULE_LOG_v | MindSpore | For details about the function and usage, see MS_SUBMODULE_LOG_v. | Dict {String: Integer...} | LogLevel: 0-DEBUG, 1-INFO, 2-WARNING, 3-ERROR | None | Optional | None
GLOG_log_max | MindSpore | For details about the function and usage, see GLOG_log_max. | Integer | > 0 | None | Optional | 50
logger_maxBytes | MindSpore | For details about the function and usage, see logger_maxBytes. | Integer | None | None | Optional | 52428800
logger_backupCount | MindSpore | For details about the function and usage, see logger_backupCount. | Integer | None | None | Optional | 30
OPTION_PROTO_LIB_PATH | MindSpore | Specifies the PROTO dependent library path. | String | File path, which can be a relative or an absolute path. | None | Optional | None
MS_RDR_ENABLE | MindSpore | Determines whether to enable the running data recorder (RDR). If a running exception occurs in MindSpore, the pre-recorded data is automatically exported to assist in locating the cause of the exception. | Integer | 1: enables RDR; 0: disables RDR | This variable is used together with MS_RDR_MODE and MS_RDR_PATH. | Optional | None
MS_RDR_MODE | MindSpore | Determines the export mode of the running data recorder (RDR). | Integer | 1: export data only when the training process terminates in an exceptional scenario; 2: export data when the training process terminates, whether normally or exceptionally | This variable is used together with MS_RDR_ENABLE=1. | Optional | 1
MS_RDR_PATH | MindSpore | Specifies the system path for storing the data recorded by the running data recorder (RDR). | String | Directory path, which must be an absolute path. | This variable is used together with MS_RDR_ENABLE=1. | Optional | None
MS_OM_PATH | MindSpore | Specifies the save path for the files (analyze_fail.dat/*.npy) dumped when a task exception or a graph compiling error occurs. The files are saved under the configured directory. | String | File path, which can be a relative or an absolute path. | None | Optional | None
MINDSPORE_DUMP_CONFIG | MindSpore | Specifies the path of the configuration file that the cloud-side Dump or the device-side Dump depends on. | String | File path, which can be a relative or an absolute path. | None | Optional | None
MS_DIAGNOSTIC_DATA_PATH | MindSpore | When the cloud-side Dump is enabled and no dump path is set in the Dump configuration file, the dump data is saved under this directory. | String | File path; only an absolute path is supported. | This variable is used together with MINDSPORE_DUMP_CONFIG. | Optional | None
MS_ENABLE_CACHE | MindData | Determines whether to enable the dataset cache during data processing, to accelerate dataset reading and augmentation. | String | TRUE: enables the cache function during data processing; FALSE: disables it | This variable is used together with MS_CACHE_HOST and MS_CACHE_PORT. | Optional | None
MS_CACHE_HOST | MindData | Specifies the IP address of the host where the cache server is located when the cache function is enabled. | String | IP address of the host where the cache server is located. | This variable is used together with MS_ENABLE_CACHE=TRUE and MS_CACHE_PORT. | Optional | None
MS_CACHE_PORT | MindData | Specifies the port number of the host where the cache server is located when the cache function is enabled. | String | Port number of the host where the cache server is located. | This variable is used together with MS_ENABLE_CACHE=TRUE and MS_CACHE_HOST. | Optional | None
DATASET_ENABLE_NUMA | MindData | Determines whether to enable the NUMA bind feature, which usually improves performance in distributed scenarios. | String | True: enables the NUMA bind feature | This variable is used together with libnuma.so. | Optional | None
OPTIMIZE | MindData | Determines whether to optimize the dataset pipeline tree during data processing, which can improve processing efficiency when data processing operators are fused. | String | true: enables pipeline tree optimization; false: disables it | None | Optional | None
ENABLE_MS_DEBUGGER | Debugger | Determines whether to enable the Debugger during training. | Boolean | 1: enables the Debugger; 0: disables it | This variable is used together with MS_DEBUGGER_HOST and MS_DEBUGGER_PORT. | Optional | None
MS_DEBUGGER_HOST | Debugger | Specifies the IP address of the MindInsight Debugger server. | String | IP address of the host where the MindInsight Debugger server is located. | This variable is used together with ENABLE_MS_DEBUGGER=1 and MS_DEBUGGER_PORT. | Optional | None
MS_DEBUGGER_PORT | Debugger | Specifies the port for connecting to the MindInsight Debugger server. | Integer | Port number ranges from 1 to 65535. | This variable is used together with ENABLE_MS_DEBUGGER=1 and MS_DEBUGGER_HOST. | Optional | None
MS_DEBUGGER_PARTIAL_MEM | Debugger | Determines whether to enable partial memory overcommitment. (Memory overcommitment is disabled only for nodes selected on the Debugger.) | Boolean | 1: enables memory overcommitment for nodes selected on the Debugger | None | Optional | None
GRAPH_OP_RUN | MindSpore | When a large pipeline network runs in graph mode with task sinking, it may fail to start because of stream-resource limits. This variable selects the execution mode of graph mode: set it to 1 to run in non-task sink mode (at some performance cost); otherwise the model runs in task sink mode. | Integer | 0: task sink mode; 1: non-task sink mode | None | Optional | None
GROUP_INFO_FILE | MindSpore | Specifies the storage path of communication group information. | String | Communication group information file path, which can be a relative or an absolute path. | None | Optional | None
MS_DEV_ENABLE_FALLBACK | MindSpore | Enables the fallback function when the variable is set to a value other than 0. | Integer | 1: enables the fallback function; 0: disables it | None | Optional | 1
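Since these are ordinary process environment variables, they are typically exported in the shell before launching a training script. A minimal sketch combining the compile-cache and logging variables from the table; the cache and log directories below are placeholders you choose yourself:

```shell
# Persist the front-end compile cache between runs and redirect logs to files.
export MS_COMPILER_CACHE_ENABLE=1                    # 1: enable the compile cache
export MS_COMPILER_CACHE_PATH=/tmp/ms_compile_cache  # placeholder cache directory
export GLOG_v=1                                      # 1: INFO level
export GLOG_logtostderr=0                            # 0: write logs to a file
export GLOG_log_dir=/tmp/ms_logs                     # placeholder log directory
mkdir -p "$MS_COMPILER_CACHE_PATH" "$GLOG_log_dir"
echo "compile cache: $MS_COMPILER_CACHE_PATH, logs: $GLOG_log_dir"
```

Note that `GLOG_logtostderr=0` only takes effect together with `GLOG_log_dir`, as the table's Configuration Relationship column indicates.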
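The distributed-training variables are usually set per process by a launcher so that each process receives its own RANK_ID. A hypothetical single-server sketch for eight Ascend devices; the rank table path is a placeholder, and the `echo` stands in for the real training command:

```shell
# Launch one process per device, each with a unique logical rank ID.
export RANK_TABLE_FILE=/path/to/rank_table_8pcs.json  # placeholder rank table path
export RANK_SIZE=8                                    # call 8 Ascend AI Processors
for i in 0 1 2 3 4 5 6 7; do
  # Replace echo with the actual training command, e.g. python train.py
  RANK_ID=$i echo "would launch training process for rank $i"
done
```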