mindspore.ops.tensordump

mindspore.ops.tensordump(file_name, tensor, mode='out')[source]

Save the Tensor in NumPy's npy format.

In a parallel situation, tensordump will dump a slice of the data on each rank.

On the Ascend platform in graph mode, your code OpA -> OpB may be compiled as OpA -> RedistributionOps -> OpB.

Note: The redistribution operators are introduced because of inter-device communication and shard strategies in the static graph parallel scenario.

In the case of OpA -> OpB, the dump data of OpA's output is equal to OpB's input.

But in the case of OpA -> RedistributionOps -> OpB, the dump data of OpA's output is not equal to OpB's input (due to the redistribution operators), so the parameter mode is provided to handle this situation.

Assuming OpA's output is used both as tensordump's input and as OpB's input, the desired dump data can be selected by configuring the parameter mode (see the sketch after the list below):

  • If the mode is 'out', the dump data contains only OpA's output slice.

  • If the mode is 'all', the dump data contains both OpA's output slice and OpB's input slice.

  • If the mode is 'in', the dump data contains only OpB's input slice.
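For illustration, here is a minimal, hypothetical fragment (the operator names and dump paths are placeholders, not part of this API) showing the same tensor between two operators being dumped under each mode value:

    def construct(self, x, y, b):
        out1 = self.matmul1(x, y)                        # OpA
        ops.tensordump('dumps/a_out.npy', out1, 'out')   # only OpA's output slice
        ops.tensordump('dumps/b_in.npy', out1, 'in')     # only OpB's input slice
        ops.tensordump('dumps/both.npy', out1, 'all')    # both slices
        out2 = self.matmul2(out1, b)                     # OpB
        return out2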

For mode 'all' or 'in', the input slice npy file name format is: id_fileName_cNodeID_dumpMode_rankID_dtype.npy.

For mode 'out' or 'all', the output slice npy file name format is: id_fileName_dtype.npy. The fields are as follows (a loading sketch follows this list):

  • id: An auto increment ID.

  • fileName: Value of the parameter file_name (if file_name is a user-specified path, fileName is the last component of that path).

  • cNodeID: The CNode ID in the step_parallel_end.ir graph.

  • dumpMode: Value of the parameter mode.

  • rankID: Logical device id.

  • dtype: The original data type. Data of type bfloat16 is converted to float32 when stored in the .npy file.
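As a rough sketch (the dump directory below is illustrative and assumes a prior run has produced the files), the dumped slices can be listed and inspected with NumPy; note that bfloat16 data reads back as float32:

    import glob
    import numpy as np

    # Print the shape and stored dtype of every slice dumped for rank 0.
    for path in sorted(glob.glob('dumps/rank_0/*.npy')):
        arr = np.load(path)
        print(path, arr.shape, arr.dtype)   # a bfloat16 tensor prints as float32 here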

Note

  • The tensordump operator is not supported in control flow.

  • If the current parallel mode is STAND_ALONE, mode should only be 'out'.

  • The parameter mode defaults to 'out' if the user does not configure it.

  • This function is used for debugging.

Parameters
  • file_name (str) – The path where the npy file will be saved.

  • tensor (Tensor) – The tensor that the user wants to dump.

  • mode (str, optional) – Controls tensordump behavior; the value must be one of ['in', 'out', 'all']. Default: 'out'.

Raises
Supported Platforms:

Ascend

Examples

Note

Use the msrun command to run the example below: msrun --worker_num=2 --local_worker_num=2 --master_port=11450 --log_dir=msrun_log --join=True --cluster_time_out=300 tensordump_example.py

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import nn, Tensor, ops, context
>>> from mindspore.ops import operations as P
>>> from mindspore.communication import init, get_rank
>>> init()
>>> rank_id = get_rank()
>>> dump_path = f'dumps/rank_{rank_id}/mul1_mul2.npy'
>>> class Net(nn.Cell):
...     def __init__(self, strategy1, strategy2):
...         super(Net, self).__init__()
...         self.matmul1 = P.MatMul().shard(strategy1)
...         self.matmul2 = P.MatMul().shard(strategy2)
...
...     def construct(self, x, y, b):
...         out1 = self.matmul1(x, y)
...         ops.tensordump(dump_path, out1, 'all')
...         out2 = self.matmul2(out1, b)
...         return out2
...
>>> ms.set_context(mode=ms.GRAPH_MODE, save_graphs=2)
>>> context.set_auto_parallel_context(parallel_mode='semi_auto_parallel', full_batch=True)
>>> strategy1 = ((1, 2), (2, 1))
>>> strategy2 = ((1, 2), (2, 1))
>>> net = Net(strategy1, strategy2)
>>> x = Tensor(0.1 * np.random.randn(64, 64).astype(np.float32))
>>> y = Tensor(0.1 * np.random.randn(64, 64).astype(np.float32))
>>> b = Tensor(0.1 * np.random.randn(64, 64).astype(np.float32))
>>> out = net(x, y, b)
>>> print(f"out shape is: {out.shape}")
>>> matmul1_output_slice = np.load('0_mul1_mul2_Float32.npy')                       # load matmul1's output slice
>>> matmul2_input_slice = np.load('1_mul1_mul2_CNode_64_all_rank_0_Float32.npy')    # load matmul2's input slice