Function Differences with torch.nn.MaxPool2d
torch.nn.MaxPool2d
torch.nn.MaxPool2d(
kernel_size,
stride=None,
padding=0,
dilation=1,
return_indices=False,
ceil_mode=False
)
For more information, see torch.nn.MaxPool2d.
mindspore.nn.MaxPool2d
class mindspore.nn.MaxPool2d(
kernel_size=1,
stride=1,
pad_mode="valid",
data_format="NCHW"
)
For more information, see mindspore.nn.MaxPool2d.
Usage
PyTorch: The output shape can be adjusted through the padding parameter. Given an input of shape \( (N, C, H_{in}, W_{in}) \), the output shape is \( (N, C, H_{out}, W_{out}) \), where
\[
H_{out} = \left\lfloor\frac{H_{in} + 2 \times \text{padding}[0] - \text{dilation}[0]
\times (\text{kernel\_size}[0] - 1) - 1}{\text{stride}[0]} + 1\right\rfloor
\]
\[
W_{out} = \left\lfloor\frac{W_{in} + 2 \times \text{padding}[1] - \text{dilation}[1]
\times (\text{kernel\_size}[1] - 1) - 1}{\text{stride}[1]} + 1\right\rfloor
\]
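As a quick cross-check, this formula can be evaluated directly in Python. The helper name pt_maxpool2d_out below is only illustrative (it is not part of either API); it simply computes the floor expression above for one spatial dimension.
import math
import torch

# Hypothetical helper: evaluates the PyTorch output-size formula for one dimension
def pt_maxpool2d_out(size, kernel_size, stride, padding=0, dilation=1):
    return math.floor((size + 2 * padding - dilation * (kernel_size - 1) - 1) / stride + 1)

# kernel_size=3, stride=2, padding=1 on a 50 x 32 input
print(pt_maxpool2d_out(50, 3, 2, padding=1), pt_maxpool2d_out(32, 3, 2, padding=1))
# Out:
# 25 16
# Agrees with torch.nn.MaxPool2d:
m = torch.nn.MaxPool2d(3, stride=2, padding=1)
print(m(torch.randn(1, 1, 50, 32)).shape)
# Out:
# torch.Size([1, 1, 25, 16])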
MindSpore: There is no padding parameter; the padding behavior is controlled only through the pad_mode parameter. Given an input of shape \( (N, C, H_{in}, W_{in}) \), the output shape is \( (N, C, H_{out}, W_{out}) \), where
When pad_mode is "valid":
\[ H_{out} = \left\lceil\frac{H_{in} - (\text{kernel\_size}[0] - 1)}{\text{stride}[0]}\right\rceil \]
\[ W_{out} = \left\lceil\frac{W_{in} - (\text{kernel\_size}[1] - 1)}{\text{stride}[1]}\right\rceil \]
When pad_mode is "same":
\[ H_{out} = \left\lceil\frac{H_{in}}{\text{stride}[0]}\right\rceil \]
\[ W_{out} = \left\lceil\frac{W_{in}}{\text{stride}[1]}\right\rceil \]
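The same kind of cross-check works for the MindSpore rules. The helper name ms_maxpool2d_out is again only illustrative; it evaluates the two ceiling expressions above for one spatial dimension.
import math

# Hypothetical helper: evaluates the MindSpore output-size formulas for one dimension
def ms_maxpool2d_out(size, kernel_size, stride, pad_mode="valid"):
    if pad_mode == "valid":
        return math.ceil((size - (kernel_size - 1)) / stride)
    return math.ceil(size / stride)  # pad_mode="same"

# kernel_size=3, stride=2 on a 50 x 32 input
print(ms_maxpool2d_out(50, 3, 2, "valid"), ms_maxpool2d_out(32, 3, 2, "valid"))
# Out:
# 24 15
print(ms_maxpool2d_out(50, 3, 2, "same"), ms_maxpool2d_out(32, 3, 2, "same"))
# Out:
# 25 16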
Code Example
import mindspore
from mindspore import Tensor
import mindspore.nn as nn
import torch
import numpy as np
# In MindSpore, pad_mode="valid"
pool = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode="valid")
input_x = Tensor(np.random.randn(20, 16, 50, 32).astype(np.float32))
output = pool(input_x)
print(output.shape)
# Out:
# (20, 16, 24, 15)
# In MindSpore, pad_mode="same"
pool = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode="same")
input_x = Tensor(np.random.randn(20, 16, 50, 32).astype(np.float32))
output = pool(input_x)
print(output.shape)
# Out:
# (20, 16, 25, 16)
# In torch, padding=1
m = torch.nn.MaxPool2d(3, stride=2, padding=1)
input_x = torch.randn(20, 16, 50, 32)
output = m(input_x)
print(output.shape)
# Out:
# torch.Size([20, 16, 25, 16])
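The comparison above only concerns output shapes. If a migration also needs to reproduce PyTorch's explicit padding, one possible workaround (an assumption, not something the MindSpore API provides directly) is to pad the input manually and then pool with pad_mode="valid". PyTorch documents its max-pooling padding as implicit negative-infinity padding, so a very negative constant is used here instead of zero.
# Hypothetical migration sketch: mimic torch.nn.MaxPool2d(3, stride=2, padding=1)
# by padding the input explicitly, then pooling with pad_mode="valid".
# PyTorch pads max pooling with -inf, so pad with a very negative value, not 0.
x = np.random.randn(20, 16, 50, 32).astype(np.float32)
x_padded = np.pad(x, ((0, 0), (0, 0), (1, 1), (1, 1)), mode="constant", constant_values=-1e30)
pool = nn.MaxPool2d(kernel_size=3, stride=2, pad_mode="valid")
output = pool(Tensor(x_padded))
print(output.shape)
# Out:
# (20, 16, 25, 16)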