mindspore.Layout
- class mindspore.Layout(device_matrix, alias_name)
Parallel layout describes the detailed sharding information.
Note
It is valid only in semi auto parallel or auto parallel mode.
The product of the elements of device_matrix must equal the device count in a pipeline stage.
When the layout is invoked to construct a sharding strategy, each alias name may be used at most once to shard a given tensor (see the sketch below).
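A minimal sketch of these two constraints; the 8-device pipeline stage is an assumption for illustration, not part of this API:
>>> # Assumed: a pipeline stage with 8 devices, since 2 * 2 * 2 == 8.
>>> from mindspore import Layout
>>> layout = Layout((2, 2, 2), ("dp", "sp", "mp"))
>>> # Each alias name may be used at most once per tensor:
>>> # layout("dp", "mp") is valid; layout("dp", "dp") is not.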
- Parameters
device_matrix (tuple) – Describes the shape of the device arrangement. Each element must be an int, and the product of all elements must equal the device count in a pipeline stage.
alias_name (tuple) – The alias name for each axis of device_matrix. Each element must be a unique, non-empty str, and the tuple's length must equal that of device_matrix.
- Raises
TypeError – device_matrix is not a tuple.
TypeError – alias_name is not a tuple.
ValueError – The length of device_matrix is not equal to the length of alias_name.
TypeError – An element of device_matrix is not an int.
TypeError – An element of alias_name is not a str.
ValueError – An element of alias_name is an empty str.
ValueError – An element of alias_name is "None".
ValueError – alias_name contains repeated elements.
Examples
>>> from mindspore import Layout
>>> layout = Layout((2, 2, 2), ("dp", "sp", "mp"))
>>> layout0 = layout("dp", "mp")
>>> print(layout0.to_dict())
{"device_matrix": (2, 2, 2), "tensor_map": (2, 0)}
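In the printed tensor_map, each tensor dimension records the device_matrix axis it is sharded over, with axes indexed from the right, so "dp" maps to axis 2 and "mp" to axis 0. A layout built this way is typically passed to an operator's shard interface in semi auto parallel mode; the sketch below assumes an 8-device setup, and the particular in_strategy is an illustrative choice rather than the only valid one:
>>> # Assumed: semi auto parallel mode on 8 devices; in_strategy is illustrative.
>>> from mindspore import Layout, ops
>>> layout = Layout((2, 2, 2), ("dp", "sp", "mp"))
>>> # Shard MatMul inputs: the shared "sp" alias keeps the contracted
>>> # dimension of both inputs partitioned consistently.
>>> matmul = ops.MatMul().shard(in_strategy=(layout("dp", "sp"), layout("sp", "mp")))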