mindspore.shard

mindspore.shard(fn, in_strategy, out_strategy=None, parameter_plan=None, device='Ascend', level=0)[source]

Defines the input and output layouts of this cell; the parallel strategies of the remaining operators will be generated by sharding propagation. In PyNative mode, use this method to specify a Cell for distributed execution in graph mode. in_strategy and out_strategy define the input and output layouts respectively. Each of them should be a tuple whose elements correspond to the desired layout of the corresponding input/output, and None represents data parallel; for details, refer to the description of mindspore.ops.Primitive.shard. The parallel strategies of the remaining operators are derived from the strategies specified for the inputs and outputs.

Note

You need to set the execution mode to PyNative mode, set the parallel mode in set_auto_parallel_context to “auto_parallel”, and set the search mode to “sharding_propagation”. If the input contains Parameter, its strategy should be set in in_strategy.

Parameters
  • fn (Union[Cell, Function]) – Function to be executed in parallel. Its arguments and return value must be Tensor or Parameter. If fn is a Cell with parameters, fn needs to be an instantiated object; otherwise its arguments cannot be accessed.

  • in_strategy (tuple) – Define the layout of the inputs. Each element of the tuple should be a tuple or None: a tuple defines the layout of the corresponding input, and None represents a data parallel strategy.

  • out_strategy (Union[tuple, None]) – Define the layout of the outputs, similarly to in_strategy. It is not in use right now. Default: None.

  • parameter_plan (Union[dict, None]) – Define the layout for the specified parameters. Each element in the dict defines the layout of a parameter in the form “param_name: layout”. The key is a parameter name of type ‘str’. The value is a 1-D integer tuple indicating the corresponding layout. If the parameter name is incorrect or the corresponding parameter has already been set, the setting of that parameter will be ignored. Default: None. A usage sketch is shown after this parameter list.

  • device (str) – Select a certain device target. It is not in use right now. Support [“CPU”, “GPU”, “Ascend”]. Default: “Ascend”.

  • level (int) – Option for the parallel strategy inference algorithm, i.e. the objective function: maximize the computation-to-communication ratio, maximize speed performance, minimize memory usage, etc. It is not in use right now. Support [0, 1, 2]. Default: 0.
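The following is a minimal sketch of how parameter_plan might be supplied for a Cell that owns parameters. The MatMulNet cell, its parameter name “weight”, and the shapes are assumptions for illustration only; the context setup described in the Note (PyNative mode, “auto_parallel” with “sharding_propagation”, and communication init, as in the Examples below) is assumed to have been done already.

>>> import numpy as np
>>> import mindspore as ms
>>> import mindspore.nn as nn
>>> import mindspore.ops as ops
>>> from mindspore import Tensor, Parameter, shard
>>> class MatMulNet(nn.Cell):  # hypothetical cell used only for illustration
...     def __init__(self):
...         super().__init__()
...         self.matmul = ops.MatMul()
...         self.weight = Parameter(Tensor(np.ones((10, 10)), ms.float32), name="weight")
...     def construct(self, x):
...         return self.matmul(x, self.weight)
>>> net = MatMulNet()
>>> # The key must match the parameter's actual name ("weight" here);
>>> # (1, 2) asks for the weight's second dimension to be split into 2 slices.
>>> shard_net = shard(net, in_strategy=((2, 1),), parameter_plan={"weight": (1, 2)})
>>> x = Tensor(np.ones((32, 10)), ms.float32)
>>> output = shard_net(x)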

Returns

Function, the function that will be executed under the auto parallel process.

Raises
  • AssertionError

    • If the execution mode is not PYNATIVE_MODE.

    • If the parallel mode is not “auto_parallel”.

    • If search_mode is not “sharding_propagation”.

    • If device_target is not “Ascend” or “GPU”.

  • TypeError

    • If in_strategy is not a tuple.

    • If out_strategy is not a tuple or None.

    • If parameter_plan is not a dict or None.

    • If any key in parameter_plan is not a str.

    • If any value in parameter_plan is not a tuple.

    • If device is not a str.

    • If level is not an integer.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> from mindspore import Tensor, set_context, set_auto_parallel_context, shard, PYNATIVE_MODE
>>> from mindspore.communication import init
>>> set_context(mode=PYNATIVE_MODE)
>>> init()
>>> set_auto_parallel_context(parallel_mode="auto_parallel", search_mode="sharding_propagation",
...                           device_num=2)
>>> def test_shard(x, y):
...     return x + y
>>> x = Tensor(np.ones(shape=(32, 10)))
>>> y = Tensor(np.ones(shape=(32, 10)))
>>> output = shard(test_shard, in_strategy=((2, 1), (2, 1)))(x, y)
>>> print(output.shape)
(32, 10)
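
As described above, None in in_strategy selects a data parallel layout for the corresponding input. Continuing from the example above, a minimal sketch (the add_fn name is illustrative):

>>> def add_fn(x, y):
...     return x + y
>>> # None for the second input requests a data parallel layout for y
>>> output = shard(add_fn, in_strategy=((2, 1), None))(x, y)
>>> print(output.shape)
(32, 10)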