mindspore.ops.ScatterNdUpdate
- class mindspore.ops.ScatterNdUpdate(*args, **kwargs)
Updates tensor values by using input indices and values.
Scatters the given values into the target tensor at the positions specified by the input indices.
input_x has rank P and indices has rank Q where Q >= 2.
indices has shape \((i_0, i_1, ..., i_{Q-2}, N)\) where N <= P.
The last dimension of indices (with length N) indicates slices along the N-th dimension of input_x.
updates is a tensor of rank Q-1+P-N. Its shape is: \((i_0, i_1, ..., i_{Q-2}, x\_shape_N, ..., x\_shape_{P-1})\).
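For example (a small plain-Python sketch of the shape rule above; the concrete shapes are illustrative only, not taken from the API):

>>> x_shape = (4, 5, 6)                               # input_x has rank P = 3
>>> indices_shape = (2, 2)                            # indices has rank Q = 2
>>> N = indices_shape[-1]                             # N = 2 <= P
>>> updates_shape = indices_shape[:-1] + x_shape[N:]  # expected shape of updates
>>> print(updates_shape)                              # rank Q - 1 + P - N = 2
(2, 6)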
Inputs of input_x and updates comply with the implicit type conversion rules to make the data types consistent. If they have different data types, the data type of lower priority will be converted to the data type of relatively higher priority. A RuntimeError exception will be thrown when data type conversion of Parameter is required.
- Parameters
use_locking (bool) – Whether to protect the assignment with a lock. Default: True.
- Inputs:
input_x (Parameter) - The target tensor, with data type of Parameter.
indices (Tensor) - The index of the input tensor, with int32 data type. The rank of indices must be at least 2 and indices_shape[-1] <= len(input_x.shape).
updates (Tensor) - The tensor to be updated to the input tensor, with the same data type as input_x. The shape is indices_shape[:-1] + input_x.shape[indices_shape[-1]:].
- Outputs:
Tensor, has the same shape and type as input_x.
- Raises
TypeError – If use_locking is not a bool.
- Supported Platforms:
Ascend
CPU
Examples
>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> np_x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]])
>>> input_x = mindspore.Parameter(Tensor(np_x, mindspore.float32), name="x")
>>> indices = Tensor(np.array([[0, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([1.0, 2.2]), mindspore.float32)
>>> op = ops.ScatterNdUpdate()
>>> output = op(input_x, indices, updates)
>>> print(output)
[[ 1.   0.3  3.6]
 [ 0.4  2.2 -3.2]]
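For reference, the result of the example above can be reproduced with plain NumPy fancy indexing. This is only a rough equivalent for the fully indexed case, where indices_shape[-1] equals the rank of input_x; it is not the operator's implementation:

>>> x = np.array([[-0.1, 0.3, 3.6], [0.4, 0.5, -3.2]], dtype=np.float32)
>>> idx = np.array([[0, 0], [1, 1]])
>>> upd = np.array([1.0, 2.2], dtype=np.float32)
>>> x[tuple(idx.T)] = upd   # write upd[k] at position idx[k]
>>> print(x)
[[ 1.   0.3  3.6]
 [ 0.4  2.2 -3.2]]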