mindspore.ops.broadcast_to
- mindspore.ops.broadcast_to(x, shape)[source]
Broadcasts input tensor to a given shape. The number of dimensions of the input shape must be smaller than or equal to that of the target shape. Suppose the input shape is \((x_1, x_2, ..., x_m)\) and the target shape is \((*, y_1, y_2, ..., y_m)\), where \(*\) means any number of additional dimensions. The broadcast rules are as follows:
Compare the values of \(x_m\) and \(y_m\), \(x_{m-1}\) and \(y_{m-1}\), …, \(x_1\) and \(y_1\) consecutively to decide whether these shapes are broadcastable and what the broadcast result is.
If the value pair at a specific dim is equal, that value is used directly in that dim of the output shape. With an input shape \((2, 3)\) and target shape \((2, 3)\), the inferred output shape is \((2, 3)\).
If the value pairs are unequal, there are three cases:
Case 1: If the value of the target shape in that dimension is -1, the value of the output shape in that dimension is the value of the input shape in that dimension.
Case 2: If the value of the target shape in that dimension is not -1 but the corresponding value of the input shape is 1, then the output shape takes the value of the target shape. With an input shape \((1, 3)\) and target shape \((8, 3)\), the output shape is \((8, 3)\).
Case 3: If the corresponding values of the two shapes do not satisfy the above cases, it means that broadcasting from the input shape to the target shape is not supported.
The last \(m\) dims of the output shape are now determined. For the leading \(*\) dims, there are two cases:
If the leading \(*\) dims of the output shape do not contain -1, the input shape is padded with leading ones until the two shapes have the same length, and then Case 2 above applies to compute the output shape. With target shape \((3, 1, 4, 1, 5, 9)\) and input shape \((1, 5, 9)\), the padded input shape is \((1, 1, 1, 1, 5, 9)\) and thus the output shape is \((3, 1, 4, 1, 5, 9)\); see the sketch after this list.
If the leading \(*\) dims of the output shape contain -1, that -1 would correspond to a non-existing input dim, so the shapes are not broadcastable. With target shape \((3, -1, 4, 1, 5, 9)\) and input shape \((1, 5, 9)\), an error is raised directly instead of performing the dim-filling process first.
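The dim-filling case above can be checked directly. Below is a minimal runnable sketch (assuming numpy and the standard mindspore Tensor/ops imports), reusing the shapes from the example:

>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.ones((1, 5, 9)).astype(np.float32))
>>> # (1, 5, 9) is implicitly padded to (1, 1, 1, 1, 5, 9), then broadcast.
>>> output = ops.broadcast_to(x, (3, 1, 4, 1, 5, 9))
>>> print(output.shape)
(3, 1, 4, 1, 5, 9)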
- Parameters
x (Tensor) – The input tensor.
shape (tuple) – The target shape to broadcast to. A -1 is allowed only in a dim for which the input has a corresponding dim.
- Returns
Tensor, with the given shape and the same data type as x.
- Raises
TypeError – If shape is not a tuple.
ValueError – If the target and input shapes are incompatible, or if a -1 in the target shape is in an invalid location.
- Supported Platforms:
Ascend GPU CPU
Examples
>>> shape = (2, 3)
>>> x = Tensor(np.array([1, 2, 3]).astype(np.float32))
>>> output = ops.broadcast_to(x, shape)
>>> print(output)
[[1. 2. 3.]
 [1. 2. 3.]]
>>> shape = (-1, 2)
>>> x = Tensor(np.array([[1], [2]]).astype(np.float32))
>>> output = ops.broadcast_to(x, shape)
>>> print(output)
[[1. 1.]
 [2. 2.]]
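As an extra illustration of Case 3 above (a sketch, not part of the original examples; it assumes the same imports as in the examples), an input dim that is neither 1 nor equal to the target dim raises ValueError:

>>> x = Tensor(np.array([[1, 2], [3, 4]]).astype(np.float32))  # input shape (2, 2)
>>> try:
...     ops.broadcast_to(x, (2, 3))  # last dims 2 vs 3: neither 1 nor equal
... except ValueError:
...     print("not broadcastable")
not broadcastable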