mindspore.nn.AdaptiveMaxPool2d
- class mindspore.nn.AdaptiveMaxPool2d(output_size, return_indices=False)
This operator applies a 2D adaptive max pooling over an input signal composed of several input planes. That is, for any input size, the output size is the specified \(H \times W\). The number of output features is equal to the number of input planes.
The input and output data format can be “NCHW” or “CHW”, where N is the batch size, C is the number of channels, H is the feature height, and W is the feature width.
For max adaptive pool2d:
\[\begin{split}\begin{align}
h_{start} &= floor(i * H_{in} / H_{out})\\
h_{end} &= ceil((i + 1) * H_{in} / H_{out})\\
w_{start} &= floor(j * W_{in} / W_{out})\\
w_{end} &= ceil((j + 1) * W_{in} / W_{out})\\
Output(i,j) &= {\max Input[h_{start}:h_{end}, w_{start}:w_{end}]}
\end{align}\end{split}\]
Note
On the Ascend platform, only float16 input is supported.
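The window boundaries above can be checked with a plain NumPy reference. This is a minimal sketch of the formula for illustration, not MindSpore's implementation; the function name adaptive_max_pool2d_ref is hypothetical.

import math
import numpy as np

def adaptive_max_pool2d_ref(x, h_out, w_out):
    # Apply the h_start/h_end and w_start/w_end formula above to a CHW or NCHW array.
    h_in, w_in = x.shape[-2], x.shape[-1]
    out = np.empty(x.shape[:-2] + (h_out, w_out), dtype=x.dtype)
    for i in range(h_out):
        h_start = math.floor(i * h_in / h_out)
        h_end = math.ceil((i + 1) * h_in / h_out)
        for j in range(w_out):
            w_start = math.floor(j * w_in / w_out)
            w_end = math.ceil((j + 1) * w_in / w_out)
            # Each output element is the max over its adaptive window.
            out[..., i, j] = x[..., h_start:h_end, w_start:w_end].max(axis=(-2, -1))
    return out

x = np.arange(1.0, 10.0).reshape(1, 3, 3)   # one 3x3 plane in CHW layout
print(adaptive_max_pool2d_ref(x, 2, 2))     # maxima are [[5., 6.], [8., 9.]], matching case 2 below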
- Parameters
output_size (Union[int, tuple]) – The target output size. output_size can be a tuple \((H, W)\), or an int H for \((H, H)\). \(H\) and \(W\) can be int or None. If either is None, the output size in that dimension is the same as the input size.
return_indices (bool) – If True, the indices of the max values are output together with the values. Default: False.
- Inputs:
input (Tensor) - The input of AdaptiveMaxPool2d, which is a 3D or 4D tensor, with float16, float32 or float64 data type.
- Outputs:
Tensor, with the same type as the input. The shape of the output is input_shape[:len(input_shape) - len(out_shape)] + out_shape, where out_shape is the target spatial size \((H_{out}, W_{out})\).
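A hedged illustration of this shape rule, using the 4D input from the examples below (shape (1, 3, 3, 3)) and output_size=(1, 2):

input_shape = (1, 3, 3, 3)   # N, C, H_in, W_in
out_shape = (1, 2)           # (H_out, W_out) taken from output_size=(1, 2)
print(input_shape[:len(input_shape) - len(out_shape)] + out_shape)   # (1, 3, 1, 2)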
- Raises
TypeError – If output_size is not int or tuple.
TypeError – If input is not a tensor.
TypeError – If return_indices is not a bool.
TypeError – If dtype of input is not float16, float32 or float64.
ValueError – If output_size is a tuple and the length of output_size is not 2.
ValueError – If input is not a 3D (CHW) or 4D (NCHW) tensor.
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import mindspore as ms
>>> import numpy as np
>>> # case 1: output_size=(None, 2)
>>> input = ms.Tensor(np.array([[[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                              [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]],
...                              [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0], [7.0, 8.0, 9.0]]]]), ms.float32)
>>> adaptive_max_pool_2d = ms.nn.AdaptiveMaxPool2d((None, 2))
>>> output = adaptive_max_pool_2d(input)
>>> print(output)
[[[[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]
  [[2. 3.]
   [5. 6.]
   [8. 9.]]]]
>>> # case 2: output_size=2
>>> adaptive_max_pool_2d = ms.nn.AdaptiveMaxPool2d(2)
>>> output = adaptive_max_pool_2d(input)
>>> print(output)
[[[[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]
  [[5. 6.]
   [8. 9.]]]]
>>> # case 3: output_size=(1, 2)
>>> adaptive_max_pool_2d = ms.nn.AdaptiveMaxPool2d((1, 2))
>>> output = adaptive_max_pool_2d(input)
>>> print(output)
[[[[8. 9.]]
  [[8. 9.]]
  [[8. 9.]]]]
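A hedged sketch of return_indices usage follows; it assumes that with return_indices=True the layer returns a (values, indices) pair, both with the output shape shown above.

>>> # case 4 (sketch): return_indices=True, assumed to return (values, indices)
>>> adaptive_max_pool_2d = ms.nn.AdaptiveMaxPool2d((1, 2), return_indices=True)
>>> output, indices = adaptive_max_pool_2d(input)
>>> print(output.shape, indices.shape)
(1, 3, 1, 2) (1, 3, 1, 2)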