mindspore.ops.clip_by_global_norm

mindspore.ops.clip_by_global_norm(x, clip_norm=1.0, use_norm=None)[source]

Clips the values of multiple tensors by the ratio of clip_norm to their global norm, where the global norm is the square root of the sum of squares of all elements across all input tensors. If the global norm exceeds clip_norm, every tensor is scaled by clip_norm / global_norm; otherwise the inputs are returned unchanged.

Note

  • In SEMI_AUTO_PARALLEL or AUTO_PARALLEL mode, if the input x is a gradient, the gradient norm values on all devices are automatically aggregated by an allreduce inserted after the local square sum of the gradients is computed.

Parameters
  • x (Union(tuple[Tensor], list[Tensor])) – Input data to clip.

  • clip_norm (Union(float, int)) – The maximum allowed global norm; must be greater than 0. Default: 1.0.

  • use_norm (None) – The global norm. Currently only None is supported. Default: None.

Returns

Tuple of tensors, each with the same shape and dtype as the corresponding input, rescaled so that their global norm does not exceed clip_norm.
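The clipping rule described above can be sketched in plain NumPy. This is an illustrative re-implementation under the standard global-norm formula (scale = clip_norm / max(global_norm, clip_norm)), not the library's actual internals:

```python
import numpy as np

def clip_by_global_norm_sketch(tensors, clip_norm=1.0):
    # Global norm: sqrt of the sum of squares of all elements
    # across every tensor in the input tuple.
    global_norm = np.sqrt(sum(np.sum(t ** 2) for t in tensors))
    # Scale down only when the global norm exceeds clip_norm;
    # otherwise the scale factor is 1 and inputs pass through.
    scale = clip_norm / max(global_norm, clip_norm)
    return tuple(t * scale for t in tensors)

x1 = np.array([[2., 3.], [1., 2.]], dtype=np.float32)
x2 = np.array([[1., 4.], [3., 1.]], dtype=np.float32)
out = clip_by_global_norm_sketch((x1, x2), clip_norm=1.0)
```

Here the global norm is sqrt(4+9+1+4+1+16+9+1) = sqrt(45) ≈ 6.708, so each element is multiplied by roughly 0.1491.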

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> x1 = mindspore.tensor([[2., 3.], [1., 2.]], dtype=mindspore.float32)
>>> x2 = mindspore.tensor([[1., 4.], [3., 1.]], dtype=mindspore.float32)
>>> input_x = (x1, x2)
>>> out = mindspore.ops.clip_by_global_norm(input_x, 1.0)
>>> print(out)
(Tensor(shape=[2, 2], dtype=Float32, value=
[[ 2.98142403e-01,  4.47213590e-01],
 [ 1.49071202e-01,  2.98142403e-01]]), Tensor(shape=[2, 2], dtype=Float32, value=
[[ 1.49071202e-01,  5.96284807e-01],
 [ 4.47213590e-01,  1.49071202e-01]]))