mindspore.ops.scatter_div

mindspore.ops.scatter_div(input_x, indices, updates)[source]

Updates the value of the input tensor through the divide operation.

Updates the value of input_x by dividing it by the given update values at the given indices. This operation returns input_x after the update is done, which makes it convenient to use the updated value.

For each \(i, ..., j\) in indices.shape:

\[\text{input_x}[\text{indices}[i, ..., j], :] \mathrel{/}= \text{updates}[i, ..., j, :]\]

The inputs input_x and updates follow the implicit type conversion rules to make their data types consistent. If they have different data types, the lower-priority data type is converted to the higher-priority data type.
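The update rule can be sketched in plain NumPy (illustration only, not the MindSpore kernel): each position \(i, ..., j\) of indices selects one row of input_x and divides it in place, so updates that address the same index compound.

>>> import numpy as np
>>> x = np.array([[6.0, 6.0, 6.0], [2.0, 2.0, 2.0]])
>>> idx = np.array([0, 1])
>>> upd = np.full((2, 3), 2.0)
>>> for pos in np.ndindex(idx.shape):
...     x[idx[pos]] /= upd[pos]
>>> print(x)
[[3. 3. 3.]
 [1. 1. 1.]]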

Parameters
  • input_x (Parameter) – The target tensor to be updated, of type Parameter. The shape is \((N, *)\), where \(*\) means any number of additional dimensions.

  • indices (Tensor) – The indices at which to perform the divide operation. The data type must be mindspore.int32 or mindspore.int64.

  • updates (Tensor) – The tensor to divide input_x by. Its data type is the same as that of input_x, and its shape is indices.shape + input_x.shape[1:] (see the shape check right after this list).
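A quick shape check (illustration only; the shapes match the second example further below):

>>> indices_shape, input_x_shape = (2, 2), (2, 3)
>>> print(indices_shape + input_x_shape[1:])  # the required updates.shape
(2, 2, 3)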

Returns

Tensor, the updated input_x, with the same shape and data type as input_x.

Raises
  • TypeError – If the data type of indices is neither int32 nor int64.

  • ValueError – If the shape of updates is not equal to indices.shape + input_x.shape[1:].

  • RuntimeError – If making the data types of input_x and updates consistent requires converting the Parameter, while data type conversion of Parameter is not supported.

  • RuntimeError – If, on the Ascend platform, input_x, indices or updates has more than 8 dimensions.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, Parameter, ops
>>> input_x = Parameter(Tensor(np.array([[6.0, 6.0, 6.0], [2.0, 2.0, 2.0]]), mindspore.float32), name="x")
>>> indices = Tensor(np.array([0, 1]), mindspore.int32)
>>> updates = Tensor(np.array([[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]), mindspore.float32)
>>> output = ops.scatter_div(input_x, indices, updates)
>>> print(output)
[[3. 3. 3.]
 [1. 1. 1.]]
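>>> # Quick check (illustration, assuming PyNative execution): the Parameter input_x itself
>>> # now holds the updated values as well, since the update is applied in place.
>>> print(input_x.asnumpy())
[[3. 3. 3.]
 [1. 1. 1.]]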
>>> # input_x is updated in place once the operation completes, so it needs to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mindspore.float32), name="x")
>>> # for indices = [[0, 1], [1, 1]]
>>> # step 1: [0, 1]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [1.0, 1.0, 1.0] = [105.0, 105.0, 105.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [3.0, 3.0, 3.0] = [105.0, 105.0, 105.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [105.0, 105.0, 105.0] / [5.0, 5.0, 5.0] = [21.0, 21.0, 21.0]
>>> # input_x[1] = [21.0, 21.0, 21.0] / [7.0, 7.0, 7.0] = [3.0, 3.0, 3.0]
>>> indices = Tensor(np.array([[0, 1], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mindspore.float32)
>>> output = ops.scatter_div(input_x, indices, updates)
>>> print(output)
[[105. 105. 105.]
 [  3.   3.   3.]]
>>> # input_x is updated in place once the operation completes, so it needs to be re-initialized.
>>> input_x = Parameter(Tensor(np.array([[105.0, 105.0, 105.0],
...                                      [315.0, 315.0, 315.0]]), mindspore.float32), name="x")
>>> # for indices = [[1, 0], [1, 1]]
>>> # step 1: [1, 0]
>>> # input_x[0] = [105.0, 105.0, 105.0] / [3.0, 3.0, 3.0] = [35.0, 35.0, 35.0]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [1.0, 1.0, 1.0] = [315.0, 315.0, 315.0]
>>> # step 2: [1, 1]
>>> # input_x[1] = [315.0, 315.0, 315.0] / [5.0, 5.0, 5.0] = [63.0, 63.0, 63.0]
>>> # input_x[1] = [63.0, 63.0, 63.0] / [7.0, 7.0, 7.0] = [9.0, 9.0, 9.0]
>>> indices = Tensor(np.array([[1, 0], [1, 1]]), mindspore.int32)
>>> updates = Tensor(np.array([[[1.0, 1.0, 1.0], [3.0, 3.0, 3.0]],
...                            [[5.0, 5.0, 5.0], [7.0, 7.0, 7.0]]]), mindspore.float32)
>>> output = ops.scatter_div(input_x, indices, updates)
>>> print(output)
[[35. 35. 35.]
 [ 9.  9.  9.]]