mindspore.mint.nn.Conv2d
- class mindspore.mint.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', dtype=None)[source]
2D convolution layer.
Applies a 2D convolution over an input tensor which is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C\) is channel number, \(H\) is feature height, \(W\) is feature width.
The output is calculated based on formula:
\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{ccor}({\text{weight}(C_{\text{out}_j}, k), \text{X}(N_i, k)})\]where \(bias\) is the output channel bias, \(ccor\) is the cross-correlation, \(weight\) is the convolution kernel value and \(X\) represents the input feature map.
\(i\) corresponds to the batch number, the range is \([0, N-1]\), where \(N\) is the batch size of the input.
\(j\) corresponds to the output channel, the range is \([0, C_{out}-1]\), where \(C_{out}\) is the number of output channels, which is also equal to the number of kernels.
\(k\) corresponds to the input channel, the range is \([0, C_{in}-1]\), where \(C_{in}\) is the number of input channels, which is also equal to the number of channels in the convolutional kernels.
Therefore, in the above formula, \({bias}(C_{\text{out}_j})\) represents the bias of the \(j\)-th output channel, \({weight}(C_{\text{out}_j}, k)\) represents the slice of the \(j\)-th convolutional kernel in the \(k\)-th channel, and \({X}(N_i, k)\) represents the slice of the \(k\)-th input channel in the \(i\)-th batch of the input feature map.
The shape of the convolutional kernel is given by \((\text{kernel_size[0]},\text{kernel_size[1]})\), where \(\text{kernel_size[0]}\) and \(\text{kernel_size[1]}\) are the height and width of the kernel, respectively. If we consider the input and output channels as well as the groups parameter, the complete kernel shape will be \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]})\), where groups is the number of groups into which the input channels of x are divided when applying group convolution.
For more details about convolution layer, please refer to Gradient Based Learning Applied to Document Recognition.
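The per-output-channel sum in the formula above can be sketched as a naive reference in plain Python (nested lists, stride 1, no padding, no dilation, groups=1). This is purely illustrative of the math; the actual layer runs an optimized kernel on device, and the helper names below are hypothetical:

```python
def ccor2d(x_chan, w_chan):
    """2D cross-correlation of one input channel with one kernel slice."""
    kh, kw = len(w_chan), len(w_chan[0])
    h_out = len(x_chan) - kh + 1
    w_out = len(x_chan[0]) - kw + 1
    return [[sum(x_chan[i + di][j + dj] * w_chan[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(w_out)] for i in range(h_out)]

def conv2d_ref(x, weight, bias):
    """x: (C_in, H, W), weight: (C_out, C_in, kH, kW), bias: (C_out,).

    Implements out(j) = bias[j] + sum_k ccor(weight[j][k], x[k]).
    """
    out = []
    for j, w_j in enumerate(weight):        # j: output channel
        acc = None
        for k, w_jk in enumerate(w_j):      # k: sum over input channels
            part = ccor2d(x[k], w_jk)
            if acc is None:
                acc = part
            else:
                acc = [[a + b for a, b in zip(ra, rb)]
                       for ra, rb in zip(acc, part)]
        out.append([[v + bias[j] for v in row] for row in acc])
    return out
```

For example, a one-channel 3x3 input convolved with a single 2x2 all-ones kernel and zero bias yields a 2x2 output where each entry is the sum of the corresponding 2x2 window.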
- Parameters
in_channels (int) – The channel number of the input tensor of the Conv2d layer.
out_channels (int) – The channel number of the output tensor of the Conv2d layer.
kernel_size (Union[int, tuple[int], list[int]]) – Specifies the height and width of the 2D convolution kernel. The data type is an integer or a tuple of two integers. An integer represents the height and width of the convolution kernel. A tuple of two integers represents the height and width of the convolution kernel respectively.
stride (Union[int, tuple[int], list[int]], optional) – The movement stride of the 2D convolution kernel. The data type is an integer or a tuple of two integers. An integer represents the movement step size in both the height and width directions. A tuple of two integers represents the movement step sizes in the height and width directions respectively. Default: 1.
padding (Union[int, tuple[int], list[int], str], optional) – The amount of padding applied to the height and width directions of the input. The data type is an integer, a tuple of two integers, or one of the strings "valid" and "same". If padding is an integer, then \(padding_{H}\) and \(padding_{W}\) are both equal to padding. If padding is a tuple of two integers, then \(padding_{H}\) and \(padding_{W}\) are equal to padding[0] and padding[1] respectively. The value should be greater than or equal to 0. Default: 0.
"same": Pads the input around its edges so that the input and output have the same shape when stride is set to 1. The amount of padding is calculated internally by the operator: if the total amount is even, it is distributed uniformly around the input; if it is odd, the excess goes to the right/bottom side. If this mode is set, stride must be 1.
"valid": No padding is applied to the input, and the output has the maximum possible height and width. Extra pixels that cannot complete a full stride are discarded.
padding_mode (str, optional) – Specifies the padding mode with a padding value of 0. It can be set to "zeros", "reflect", "replicate" or "circular". Default: "zeros".
dilation (Union[int, tuple[int], list[int]], optional) – Specifies the dilation rate to use for dilated convolution. It can be a single int or a tuple/list of 2 or 4 integers. A single int means the dilation size is the same in both the height and width directions. A tuple of two ints represents the dilation sizes in the height and width directions, respectively. For a tuple of four ints, the two ints corresponding to the (N, C) dimensions are treated as 1, and the two corresponding to the (H, W) dimensions are the dilation sizes in the height and width directions respectively. Assuming \(dilation=(d0, d1)\), the convolutional kernel samples the input with a spacing of \(d0-1\) elements in the height direction and \(d1-1\) elements in the width direction. The values in the height and width dimensions are in the ranges [1, H] and [1, W], respectively. Default: 1.
groups (int, optional) – Splits the filter into groups; in_channels and out_channels must both be divisible by groups. If groups is equal to both in_channels and out_channels, this 2D convolution layer is also called a 2D depthwise convolution layer. Default: 1. The following constraints must be met:
\((C_{in} \text{ % } \text{groups} == 0)\)
\((C_{out} \text{ % } \text{groups} == 0)\)
\((C_{out} >= \text{groups})\)
\((\text{weight[1]} = C_{in} / \text{groups})\), where weight[1] is the second dimension of the convolution kernel.
bias (bool, optional) – Whether the Conv2d layer has a bias parameter. Default: True.
dtype (mindspore.dtype, optional) – Dtype of the Parameters. Default: None, in which case mstype.float32 is used.
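The groups constraints above, together with the complete kernel shape \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]})\), can be sketched as a small stdlib-only checker. The helper name is hypothetical and not part of the MindSpore API:

```python
def conv2d_weight_shape(in_channels, out_channels, kernel_size, groups=1):
    """Validate the groups constraints and return the full weight shape
    (C_out, C_in / groups, kH, kW) for a Conv2d layer."""
    if isinstance(kernel_size, int):
        kh, kw = kernel_size, kernel_size
    else:
        kh, kw = kernel_size
    if in_channels % groups != 0:
        raise ValueError("in_channels must be divisible by groups")
    if out_channels % groups != 0:
        raise ValueError("out_channels must be divisible by groups")
    if out_channels < groups:
        raise ValueError("out_channels must be >= groups")
    return (out_channels, in_channels // groups, kh, kw)
```

For a plain convolution such as Conv2d(120, 240, 4) this gives a weight of shape (240, 120, 4, 4); for the depthwise case groups == in_channels == out_channels, each kernel sees exactly one input channel, e.g. (120, 1, 4, 4).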
- Inputs:
x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\) or \((C_{in}, H_{in}, W_{in})\).
- Outputs:
Tensor of shape \((N, C_{out}, H_{out}, W_{out})\) or \((C_{out}, H_{out}, W_{out})\).
padding is 'same':
\[\begin{split}\begin{array}{ll} \\ H_{out} = \left \lceil{\frac{H_{in}}{\text{stride[0]}}} \right \rceil \\ W_{out} = \left \lceil{\frac{W_{in}}{\text{stride[1]}}} \right \rceil \\ \end{array}\end{split}\]
padding is 'valid':
\[\begin{split}\begin{array}{ll} \\ H_{out} = \left \lceil{\frac{H_{in} - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) } {\text{stride[0]}}} \right \rceil \\ W_{out} = \left \lceil{\frac{W_{in} - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) } {\text{stride[1]}}} \right \rceil \\ \end{array}\end{split}\]
padding is an int or a tuple/list:
\[\begin{split}\begin{array}{ll} \\ H_{out} = \left \lfloor{\frac{H_{in} + 2 \times \text{padding[0]} - (\text{kernel_size[0]} - 1) \times \text{dilation[0]} - 1 }{\text{stride[0]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} + 2 \times \text{padding[1]} - (\text{kernel_size[1]} - 1) \times \text{dilation[1]} - 1 }{\text{stride[1]}} + 1} \right \rfloor \\ \end{array}\end{split}\]
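The three output-size formulas above can be sketched for one spatial dimension as a small stdlib-only helper. The function name is hypothetical, and it assumes the string modes 'same'/'valid' or a single int padding:

```python
import math

def conv2d_out_dim(size, kernel, stride=1, padding=0, dilation=1):
    """Output size along one spatial dimension of Conv2d, per padding mode."""
    if padding == "same":
        # 'same' requires stride == 1 in this layer; formula is ceil(size/stride)
        return math.ceil(size / stride)
    if padding == "valid":
        return math.ceil((size - dilation * (kernel - 1)) / stride)
    # int padding: floor((size + 2*padding - dilation*(kernel-1) - 1) / stride + 1)
    return (size + 2 * padding - dilation * (kernel - 1) - 1) // stride + 1
```

With the defaults used in the Examples section (kernel 4, stride 1, padding 0), an input of height 1024 and width 640 gives conv2d_out_dim(1024, 4) == 1021 and conv2d_out_dim(640, 4) == 637.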
- Raises
ValueError – If the arguments and the size of the input feature map do not satisfy the output formula, i.e. the size of the output feature map would not be positive.
RuntimeError – On Ascend, due to the L1 cache size limitation of different NPU chips, an error may be triggered if the input size or kernel size is too large.
TypeError – If in_channels, out_channels or groups is not an int.
TypeError – If kernel_size, stride or dilation is neither an int nor a tuple/list.
ValueError – If in_channels, out_channels, kernel_size, stride or dilation is less than 1.
ValueError – If padding is less than 0.
ValueError – If padding is "same" and stride is not equal to 1.
ValueError – If the input parameters do not satisfy the convolution output formula.
ValueError – If kernel_size exceeds the size of the input feature map.
ValueError – If the value of padding causes the calculation area to exceed the input size.
- Supported Platforms:
Ascend
Examples
>>> import mindspore
>>> from mindspore import Tensor, mint
>>> import numpy as np
>>> net = mint.nn.Conv2d(120, 240, 4, bias=False)
>>> x = Tensor(np.ones([1, 120, 1024, 640]), mindspore.float32)
>>> output = net(x).shape
>>> print(output)
(1, 240, 1021, 637)