mindspore.numpy.amin

mindspore.numpy.amin(a, axis=None, keepdims=False, initial=None, where=True)

Returns the minimum of an array or minimum along an axis.

Note

The NumPy argument out is not supported. On GPU, the supported dtypes are np.float16 and np.float32.

Parameters
  • a (Tensor) – Input data.

  • axis (None or int or tuple of ints, optional) – defaults to None. Axis or axes along which to operate. By default, the flattened input is used. If this is a tuple of ints, the minimum is selected over multiple axes, instead of a single axis or all the axes as before.

  • keepdims (boolean, optional) – defaults to False. If this is set to True, the axes which are reduced are left in the result as dimensions with size one. With this option, the result will broadcast correctly against the input array.

  • initial (scalar, optional) – defaults to None. The maximum value of an output element, i.e. an upper bound on the returned minima. Must be present to allow computation on an empty slice.

  • where (boolean Tensor, optional) – defaults to True. A boolean Tensor which is broadcast to match the dimensions of a, and selects elements to include in the reduction. If a non-default value is passed, initial must also be provided.

Returns

Tensor or scalar, minimum of a. If axis is None, the result is a scalar value. If axis is given, the result is a Tensor of dimension a.ndim - 1.

Raises

TypeError – If the input is not a tensor.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore.numpy as np
>>> a = np.arange(4).reshape((2,2)).astype('float32')
>>> output = np.amin(a)
>>> print(output)
0.0
>>> output = np.amin(a, axis=0)
>>> print(output)
[0. 1.]
>>> output = np.amin(a, axis=1)
>>> print(output)
[0. 2.]
>>> output = np.amin(a, where=np.array([False, True]), initial=10, axis=0)
>>> print(output)
[10.  1.]