mindspore.profiler
Introduction to the Profiler module.
This module provides Python APIs to enable profiling of MindSpore neural networks. Users can import mindspore.profiler.Profiler, initialize a Profiler object to start profiling, and call Profiler.analyse() to stop profiling and analyse the results. To visualize the profiling results, users can open MindSpore Web, find the corresponding 'run' button/option, and click the profile link. Currently, Profiler supports AICore operator analysis.
- class mindspore.profiler.ProfileOption[source]
This class is deprecated. An enum of profile options used by Profiler.profile.
- class mindspore.profiler.Profiler(**kwargs)[source]
Performance profiling API.
This API enables MindSpore users to profile the performance of neural networks. Profiler supports Ascend and GPU; both are used in the same way, but only the output_path argument takes effect on GPU. The Profiler can only be initialized once.
- Parameters
output_path (str) – Output data path.
optypes_not_deal (str) – This parameter is deprecated. (Ascend only) Op type names that determine which op types' data should be collected and analysed; all ops are handled if empty. Separate different op types with commas.
ascend_job_id (str) – This parameter is deprecated. (Ascend only) The directory where the profiling files to be parsed are located. This parameter is used to support offline parsing.
profile_communication (bool) – Whether to collect communication performance data in multi-device training; data is collected when True. Default: False. This parameter has no effect in single-device training.
profile_memory (bool) – Whether to collect tensor memory data; data is collected when True. Default: False.
start_profile (bool) – Whether to start collecting performance data when the Profiler is initialized; set it to False to control collection conditionally via start() and stop(). Default: True.
- Raises
RuntimeError – If ascend_job_id points to profiling data that does not match the current MindSpore version.
Examples
>>> import numpy as np
>>> from mindspore import nn, context
>>> from mindspore import Model
>>> import mindspore.dataset as ds
>>> from mindspore.profiler import Profiler
>>>
>>>
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.fc = nn.Dense(2, 2)
...     def construct(self, x):
...         return self.fc(x)
>>>
>>> def generator():
...     for i in range(2):
...         yield (np.ones([2, 2]).astype(np.float32), np.ones([2]).astype(np.int32))
>>>
>>> def train(net):
...     optimizer = nn.Momentum(net.trainable_params(), 1, 0.9)
...     loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
...     data = ds.GeneratorDataset(generator, ["data", "label"])
...     model = Model(net, loss, optimizer)
...     model.train(1, data)
>>>
>>> if __name__ == '__main__':
...     # If the device_target is GPU, set the device_target to "GPU"
...     context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
...
...     # Init Profiler
...     # Note that the Profiler should be initialized after context.set_context and before model.train
...     # If you are running in parallel mode on Ascend, the Profiler should be initialized before HCCL
...     # is initialized.
...     profiler = Profiler()
...
...     # Train Model
...     net = Net()
...     train(net)
...
...     # Profiler end
...     profiler.analyse()
- analyse()[source]
Collect and analyse performance data; called after training or during training. See the example above.
- static profile(network, profile_option)[source]
Get the number of trainable parameters in the training network.
- Parameters
network (Cell) – The training network.
profile_option (ProfileOption) – The profile option.
- Returns
dict, where the key is the option name and the value is the result of that option.
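The returned mapping pairs each option name with its computed value. As a rough, MindSpore-free sketch (the option name and helper below are illustrative assumptions, not the library's API), a trainable-parameter count could be derived from parameter shapes like this:

```python
from functools import reduce
from operator import mul

def count_trainable(param_shapes):
    """Sum of element counts over a list of parameter shapes.
    Hypothetical helper; not part of the MindSpore API."""
    return sum(reduce(mul, shape, 1) for shape in param_shapes)

# A Dense(2, 2) layer as in the example above: weight (2, 2) and bias (2,)
shapes = [(2, 2), (2,)]

# Option name "trainable_parameters" is assumed for illustration
result = {"trainable_parameters": count_trainable(shapes)}
print(result)  # {'trainable_parameters': 6}
```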
- start()[source]
Used on Ascend and GPU to start profiling. Profiling can be turned on at a given step or epoch.
- Raises
RuntimeError – If the profiler has already started.
RuntimeError – If MD profiling has stopped, repeated start action is not supported.
RuntimeError – If the start_profile value is set to False.
Examples
>>> class StopAtStep(Callback):
...     def __init__(self, start_step, stop_step):
...         super(StopAtStep, self).__init__()
...         self.start_step = start_step
...         self.stop_step = stop_step
...         self.profiler = Profiler(start_profile=False)
...
...     def step_begin(self, run_context):
...         cb_params = run_context.original_args()
...         step_num = cb_params.cur_step_num
...         if step_num == self.start_step:
...             self.profiler.start()
...
...     def step_end(self, run_context):
...         cb_params = run_context.original_args()
...         step_num = cb_params.cur_step_num
...         if step_num == self.stop_step:
...             self.profiler.stop()
...
...     def end(self, run_context):
...         self.profiler.analyse()
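The gating logic of the callback above can be sketched without MindSpore. The classes below are illustrative stand-ins (FakeProfiler records calls so the window logic can be checked without real hardware); they are not MindSpore API:

```python
class FakeProfiler:
    """Stand-in that records start/stop calls instead of profiling."""
    def __init__(self):
        self.events = []

    def start(self):
        self.events.append("start")

    def stop(self):
        self.events.append("stop")

class StepWindow:
    """Calls start() when entering start_step and stop() after stop_step,
    mirroring the StopAtStep callback above."""
    def __init__(self, profiler, start_step, stop_step):
        self.profiler = profiler
        self.start_step = start_step
        self.stop_step = stop_step

    def step_begin(self, step_num):
        if step_num == self.start_step:
            self.profiler.start()

    def step_end(self, step_num):
        if step_num == self.stop_step:
            self.profiler.stop()

prof = FakeProfiler()
window = StepWindow(prof, start_step=2, stop_step=4)
for step in range(1, 6):  # steps 1..5
    window.step_begin(step)
    window.step_end(step)

print(prof.events)  # ['start', 'stop']
```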
- stop()[source]
Used on Ascend and GPU to stop profiling. Profiling can be turned off at a given step or epoch.
- Raises
RuntimeError – If the profiler has not been started; this function cannot be called before start().
Examples
>>> class StopAtEpoch(Callback):
...     def __init__(self, start_epoch, stop_epoch):
...         super(StopAtEpoch, self).__init__()
...         self.start_epoch = start_epoch
...         self.stop_epoch = stop_epoch
...         self.profiler = Profiler(start_profile=False)
...
...     def epoch_begin(self, run_context):
...         cb_params = run_context.original_args()
...         epoch_num = cb_params.cur_epoch_num
...         if epoch_num == self.start_epoch:
...             self.profiler.start()
...
...     def epoch_end(self, run_context):
...         cb_params = run_context.original_args()
...         epoch_num = cb_params.cur_epoch_num
...         if epoch_num == self.stop_epoch:
...             self.profiler.stop()
...
...     def end(self, run_context):
...         self.profiler.analyse()