mindspore.ops.NPUGetFloatStatus
- class mindspore.ops.NPUGetFloatStatus[source]
mindspore.ops.NPUGetFloatStatus updates the flag, which is the output tensor of mindspore.ops.NPUAllocFloatStatus, with the latest overflow status.

Note

The flag is a tensor whose shape is (8,) and whose data type is mindspore.dtype.float32. If the sum of the flag equals 0, no overflow has happened; if the sum of the flag is greater than 0, an overflow has happened. In addition, there are strict sequencing requirements for use: before using the NPUGetFloatStatus operator, make sure that NPUClearFloatStatus and your computation have already been executed. mindspore.ops.Depend is used to ensure the execution order.

- Inputs:
x (Tensor) - The output tensor of NPUAllocFloatStatus. The data type must be float16 or float32. The shape is \((N, *)\), where \(*\) means any number of additional dimensions; its rank should be less than 8.
- Outputs:
Tensor, has the same shape as x. All the elements in the tensor will be zero.
- Supported Platforms:
Ascend
Examples
>>> import numpy as np
>>> import mindspore.nn as nn
>>> from mindspore import ops
>>> from mindspore.common import dtype as mstype
>>> from mindspore.common.tensor import Tensor
>>> class Net(nn.Cell):
...     def __init__(self):
...         super().__init__()
...         self.alloc_status = ops.NPUAllocFloatStatus()
...         self.get_status = ops.NPUGetFloatStatus()
...         self.clear_status = ops.NPUClearFloatStatus()
...         self.sub = ops.Sub()
...         self.neg = ops.Neg()
...
...     def construct(self, x):
...         init = self.alloc_status()
...         clear_status = self.clear_status(init)
...         x = ops.depend(x, clear_status)
...         res = self.sub(x, self.neg(x))
...         init = ops.depend(init, res)
...         get_status = self.get_status(init)
...         res = ops.depend(res, get_status)
...         return res
>>>
>>> value = 5
>>> data = np.full((2, 3), value, dtype=np.float16)
>>> x = Tensor(data, dtype=mstype.float16)
>>> net = Net()
>>> res = net(x)
>>> print(res)
[[10. 10. 10.]
 [10. 10. 10.]]
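The example above only returns the computation result. To actually act on the overflow status, a caller typically sums the status tensor that NPUGetFloatStatus has updated and compares the sum with zero, as described in the Note. The following is a minimal sketch of such a check; the helper name has_overflow and the use of ops.ReduceSum and ops.Greater here are assumptions about consumer code, not part of this operator's interface.

>>> # Minimal sketch (assumed consumer code, not part of this API):
>>> # sum the (8,) status flag and compare with zero to detect overflow.
>>> reduce_sum = ops.ReduceSum(keep_dims=False)
>>> greater = ops.Greater()
>>> def has_overflow(status):
...     # status is the float32 tensor of shape (8,) updated by NPUGetFloatStatus
...     flag_sum = reduce_sum(status, (0,))
...     return greater(flag_sum, Tensor(0.0, mstype.float32))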