mindspore.parallel.Layout
- class mindspore.parallel.Layout(device_matrix, alias_name, rank_list=None)[source]
Topological abstraction describing the arrangement of cluster devices for tensor slice placement.
Note
It is valid only in semi auto parallel or auto parallel mode.
The product of the elements of device_matrix must be equal to the device count in a pipeline stage.
When the layout function is invoked to construct a sharding strategy, each alias name is only allowed to be used once to shard a tensor.
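The first constraint in the Note can be sketched in plain Python (this is an illustrative check, not part of the MindSpore API; `check_device_matrix` is a hypothetical helper):

```python
# Illustrative sketch: the Note requires that the product of the
# device_matrix elements equal the device count in a pipeline stage.
from math import prod

def check_device_matrix(device_matrix, devices_per_stage):
    """Return True if device_matrix covers exactly the stage's devices."""
    return prod(device_matrix) == devices_per_stage

print(check_device_matrix((2, 2, 2), 8))  # 2*2*2 == 8 -> True
print(check_device_matrix((2, 2, 2), 4))  # 2*2*2 != 4 -> False
```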
- Parameters
device_matrix (tuple) – Describe the shape of devices arrangement, its element type is int.
alias_name (tuple) – The alias name for each axis of device_matrix; its length should be equal to that of device_matrix, and its element type is string. When using "interleaved_parallel" as an alias name, the tensor will be split into multiple copies on the corresponding partition dimension on a single card.
rank_list (list, optional) – Data is allocated to the devices according to rank_list. Default: None.
- Raises
TypeError – device_matrix is not a tuple type.
TypeError – alias_name is not a tuple type.
TypeError – rank_list is not a list type.
ValueError – device_matrix length is not equal to alias_name length.
TypeError – The element of device_matrix is not int type.
TypeError – The element of alias_name is not a str type.
TypeError – The element of rank_list is not int type.
ValueError – The element of alias_name is an empty str.
ValueError – The element of alias_name is "None".
ValueError – alias_name contains repeated element.
- Supported Platforms:
Ascend
Examples
>>> from mindspore.parallel import Layout
>>> layout = Layout((2, 2, 2), ("dp", "sp", "mp"))
>>> layout0 = layout("dp", "mp")
>>> print(layout0.to_dict())
{"device_matrix": (2, 2, 2), "tensor_map": (2, 0), "interleaved_parallel": False, "alias_name": ("dp", "sp", "mp"), "rank_list": [0, 1, 2, 3, 4, 5, 6, 7]}
>>> # Total device num is 4, but split the tensor in local device into two copies.
>>> layout = Layout((2, 2, 2), ("dp", "sp", "interleaved_parallel"))
>>> layout1 = layout(("dp", "interleaved_parallel"), "sp")
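The tensor_map value in the example output can be understood with a short plain-Python sketch (a hypothetical helper, not MindSpore code): axes of device_matrix are indexed from the rightmost dimension as 0, so for alias names ("dp", "sp", "mp") the indices are dp=2, sp=1, mp=0, and selecting ("dp", "mp") yields (2, 0):

```python
# Hypothetical sketch of how alias names map to tensor_map indices.
# device_matrix axes are numbered from the rightmost dimension (index 0)
# to the leftmost, so ("dp", "sp", "mp") gives dp=2, sp=1, mp=0.
def tensor_map(alias_name, selected):
    index = {name: len(alias_name) - 1 - i for i, name in enumerate(alias_name)}
    return tuple(index[s] for s in selected)

print(tensor_map(("dp", "sp", "mp"), ("dp", "mp")))  # (2, 0), matching the example
```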