Profiler Module Introduction.

This module provides Python APIs to enable profiling of MindSpore neural networks. Users can import mindspore.profiler.Profiler, initialize a Profiler object to start profiling, and call Profiler.analyse() to stop profiling and analyse the results. To visualize the profiling results, users can open MindSpore Web, find the corresponding "run" option, and click the profile link. Currently, the Profiler supports AICore operator analysis.

class mindspore.profiler.Profiler(**kwargs)[source]

Performance profiling API.

This API enables MindSpore users to profile the performance of neural networks. The Profiler supports both Ascend and GPU targets and is used in the same way on each, but on GPU only the output_path argument takes effect.

  • output_path (str) – Output data path.

  • optypes_not_deal (str) – (Ascend only) Op type names that determine which op types' data should be collected and analysed; all ops are handled if the string is empty. Different op types should be separated by commas.

  • ascend_job_id (str) – (Ascend only) The directory where the profiling files to be parsed are located. This parameter is used to support offline parsing.

  • profile_communication (bool) – Whether to collect communication performance data during multi-device training; data is collected when True. Default: False. This parameter has no effect during single-device training.

  • profile_memory (bool) – Whether to collect tensor memory data; data is collected when True. Default: False.
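For reference, the keyword arguments above might be combined as follows. This is a minimal sketch: the values shown are illustrative examples only, and the Profiler(**kwargs) call itself requires a real Ascend or GPU device, so it appears here only as a comment.

```python
# Illustrative keyword arguments for mindspore.profiler.Profiler.
# The paths and op type names are example values, not defaults.
profiler_kwargs = {
    "output_path": "./profiler_data",        # where profiling files are written
    "optypes_not_deal": "MatMul,Conv2D",     # comma-separated op type names (Ascend only)
    "profile_communication": False,          # multi-device communication data
    "profile_memory": False,                 # tensor memory data
}

# On real Ascend/GPU hardware this would be passed as:
# profiler = Profiler(**profiler_kwargs)
```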


>>> import numpy as np
>>> from mindspore import nn, context
>>> from mindspore import Model
>>> import mindspore.dataset as ds
>>> from mindspore.profiler import Profiler
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self.fc = nn.Dense(2,2)
...     def construct(self, x):
...         return self.fc(x)
>>> def generator():
...     for i in range(2):
...         yield (np.ones([2, 2]).astype(np.float32), np.ones([2]).astype(np.int32))
>>> def train(net):
...     optimizer = nn.Momentum(net.trainable_params(), 1, 0.9)
...     loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True)
...     data = ds.GeneratorDataset(generator, ["data", "label"])
...     model = Model(net, loss, optimizer)
...     model.train(1, data)
>>> if __name__ == '__main__':
...     # If the device_target is GPU, set the device_target to "GPU"
...     context.set_context(mode=context.GRAPH_MODE, device_target="Ascend")
...     # Init Profiler
...     # Note that the Profiler should be initialized after context.set_context and before model.train
...     # If you are running in parallel mode on Ascend, the Profiler should be initialized before HCCL
...     # initialized.
...     profiler = Profiler()
...     # Train Model
...     net = Net()
...     train(net)
...     # Profiler end
...     profiler.analyse()

analyse()[source]

Collect and analyse performance data. Called after training, or during training; see the example above.

static profile(network, profile_option)[source]

Get the number of trainable parameters in the training network.

  • network (Cell) – The training network.

  • profile_option (ProfileOption) – The profile option.


Returns a dict where each key is an option name and each value is the result for that option.
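As a rough illustration of the trainable-parameter count that profile reports for the Net defined above: an nn.Dense(2, 2) layer has a 2×2 weight matrix plus a bias vector of length 2. The arithmetic can be sketched in plain Python. This is not MindSpore's implementation, only the expected count.

```python
# Hypothetical sketch: count trainable parameters of a Dense layer.
# A Dense(in_features, out_features) layer has in*out weights plus,
# optionally, out_features bias terms.
def dense_param_count(in_features: int, out_features: int, has_bias: bool = True) -> int:
    count = in_features * out_features
    if has_bias:
        count += out_features
    return count

total = dense_param_count(2, 2)  # weights (2*2 = 4) + bias (2) = 6
```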

class mindspore.profiler.ProfileOption[source]

An enum of profile options used in Profiler.profile.
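The option-enum pattern can be sketched with Python's standard enum module. The member name "trainable" and the helper below are hypothetical illustrations, not MindSpore's actual definitions; the surrounding doc only implies an option for counting trainable parameters.

```python
from enum import Enum

# Hypothetical illustration of an option enum in the style of ProfileOption;
# the member name "trainable" is an assumption, not taken from MindSpore source.
class DemoProfileOption(Enum):
    trainable = "trainable"

# An option-dispatch helper mirroring the documented return shape of
# Profiler.profile: a dict keyed by option name.
def demo_profile(param_count: int, option: DemoProfileOption) -> dict:
    if option is DemoProfileOption.trainable:
        return {option.name: param_count}
    raise ValueError(f"unsupported option: {option}")
```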