mindspore.set_dump

mindspore.set_dump(target, enabled=True)[source]

Enable or disable dump for the target and its contents.

target should be an instance of mindspore.nn.Cell or mindspore.ops.Primitive. Please note that this API takes effect only when Synchronous Dump is enabled and the dump_mode field in the dump config file is "2". See the dump document for details. The default enabled status for a mindspore.nn.Cell or mindspore.ops.Primitive is False.
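
For orientation, the sketch below writes one plausible Synchronous Dump config file and points MINDSPORE_DUMP_CONFIG at it. The exact schema (the common_dump_settings and e2e_dump_settings fields, and the values shown) varies by MindSpore version and is an assumption here; the dump document is authoritative.

>>> # Hedged sketch: field names and values are illustrative, not a verified schema.
>>> import json, os
>>> dump_config = {
...     "common_dump_settings": {
...         "dump_mode": 2,               # 2: dump only targets marked via set_dump
...         "path": "/tmp/dump_output",   # absolute output path (placeholder)
...         "net_name": "MyNet",
...         "iteration": "all",
...         "input_output": 0,
...         "kernels": [],
...         "support_device": [0, 1, 2, 3, 4, 5, 6, 7]
...     },
...     "e2e_dump_settings": {"enable": True, "trans_flag": True}
... }
>>> with open("/tmp/dump_config.json", "w") as f:
...     json.dump(dump_config, f)
>>> os.environ["MINDSPORE_DUMP_CONFIG"] = "/tmp/dump_config.json"  # set before running the network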

Note

  1. This API is effective only in GRAPH_MODE with a graph compilation level of O0/O1 on the Ascend backend.

  2. This API must be called before training starts; calling it during training may not take effect.

  3. After calling set_dump(Cell, True), operators in the forward and backward computation of the cell (the backward computation is generated by grad operations) will be dumped; see the sketch after these notes.

  4. For the mindspore.nn.SoftmaxCrossEntropyWithLogits layer, the forward computation and the backward computation use the same set of operators, so you can only see dump data from the backward computation. Note that the mindspore.nn.SoftmaxCrossEntropyWithLogits layer also uses these operators internally when initialized with sparse=True and reduction="mean".
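
To make note 3 concrete, here is a minimal sketch: marking a cell with set_dump and then running a gradient computation would dump both the forward operators and the grad-generated backward operators of that cell, assuming the environment and config conditions above are met. The single-layer Dense network here is hypothetical.

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, set_dump
>>>
>>> net = nn.Dense(3, 2)                 # hypothetical single-layer cell
>>> set_dump(net)                        # mark the cell before running
>>> x = Tensor(np.ones([1, 3], dtype=np.float32))
>>> grads = ms.grad(net)(x)              # backward operators generated here are dumped too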

Parameters
  • target (Union[Cell, Primitive]) – The Cell instance or Primitive instance to which the dump flag is set.

  • enabled (bool, optional) – True enables dump for the target; False disables it. Default: True.
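
One plausible pattern for enabled=False is scoping: enable dump on a parent cell, then disable it for a sub-cell you want to exclude. A hedged sketch, assuming a net with a relu1 sub-cell as in the example below; whether a later call on a sub-cell overrides the parent's flag in your version is best confirmed against the dump document.

>>> set_dump(net, True)            # mark the whole cell and its contents
>>> set_dump(net.relu1, False)     # then exclude the relu1 sub-cell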

Supported Platforms:

Ascend

Examples

Note

Please set the environment variable MINDSPORE_DUMP_CONFIG to the path of the dump config file and set the dump_mode field in the dump config file to "2" before running this example. See the dump document for details.

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> from mindspore import Tensor, set_dump
>>>
>>> ms.set_context(device_target="Ascend", mode=ms.GRAPH_MODE)
>>>
>>> class MyNet(nn.Cell):
...     def __init__(self):
...         super().__init__()
...         self.conv1 = nn.Conv2d(5, 6, 5, pad_mode='valid')
...         self.relu1 = nn.ReLU()
...
...     def construct(self, x):
...         x = self.conv1(x)
...         x = self.relu1(x)
...         return x
>>>
>>> if __name__ == "__main__":
...     net = MyNet()
...     set_dump(net.conv1)  # enable dump for the conv1 sub-cell only
...     input_tensor = Tensor(np.ones([1, 5, 10, 10], dtype=np.float32))
...     output = net(input_tensor)