mindspore.nn.Conv2d
- class mindspore.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, pad_mode='same', padding=0, dilation=1, group=1, has_bias=False, weight_init=None, bias_init=None, data_format='NCHW', dtype=mstype.float32)[source]
2D convolution layer.
Applies a 2D convolution over an input tensor which is typically of shape \((N, C_{in}, H_{in}, W_{in})\), where \(N\) is the batch size, \(C\) is the channel number, \(H\) is the feature height and \(W\) is the feature width.
The output is calculated based on the formula:
\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{ccor}({\text{weight}(C_{\text{out}_j}, k), \text{X}(N_i, k)})\]
where \(bias\) is the output channel bias, \(ccor\) is the cross-correlation, \(weight\) is the convolution kernel value and \(X\) represents the input feature map.
Here are the indices' meanings:
\(i\) corresponds to the batch number, the range is \([0, N-1]\), where \(N\) is the batch size of the input.
\(j\) corresponds to the output channel, the range is \([0, C_{out}-1]\), where \(C_{out}\) is the number of output channels, which is also equal to the number of kernels.
\(k\) corresponds to the input channel, the range is \([0, C_{in}-1]\), where \(C_{in}\) is the number of input channels, which is also equal to the number of channels in the convolutional kernels.
Therefore, in the above formula, \({bias}(C_{\text{out}_j})\) represents the bias of the \(j\)-th output channel, \({weight}(C_{\text{out}_j}, k)\) represents the slice of the \(j\)-th convolutional kernel in the \(k\)-th channel, and \({X}(N_i, k)\) represents the slice of the \(k\)-th input channel in the \(i\)-th batch of the input feature map.
The shape of the convolutional kernel is given by \((\text{kernel_size[0]},\text{kernel_size[1]})\), where \(\text{kernel_size[0]}\) and \(\text{kernel_size[1]}\) are the height and width of the kernel, respectively. If we consider the input and output channels as well as the group parameter, the complete kernel shape will be \((C_{out}, C_{in} / \text{group}, \text{kernel_size[0]}, \text{kernel_size[1]})\), where group is the number of groups dividing x's input channel when applying group convolution.
For more details about convolution layer, please refer to Gradient Based Learning Applied to Document Recognition.
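As a quick illustration of the complete kernel shape described above, the following minimal sketch (channel counts and kernel size are assumed purely for illustration, and the weight attribute is assumed to expose the layer's kernel parameter) checks the shape \((C_{out}, C_{in} / \text{group}, \text{kernel_size[0]}, \text{kernel_size[1]})\) with the default group of 1:
>>> from mindspore import nn
>>> # kernel shape: (out_channels, in_channels / group, kernel_size[0], kernel_size[1])
>>> net = nn.Conv2d(16, 32, (3, 5))
>>> print(net.weight.shape)
(32, 16, 3, 5)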
Note
On the Ascend platform, group convolution is only supported in the depthwise convolution scenario. That is, when group > 1, the condition in_channels = out_channels = group must be satisfied.
- Parameters
in_channels (int) – The channel number of the input tensor of the Conv2d layer.
out_channels (int) – The channel number of the output tensor of the Conv2d layer.
kernel_size (Union[int, tuple[int]]) – Specifies the height and width of the 2D convolution kernel. The data type is an integer or a tuple of two integers. An integer represents the height and width of the convolution kernel. A tuple of two integers represents the height and width of the convolution kernel respectively.
stride (Union[int, tuple[int]], optional) – The movement stride of the 2D convolution kernel. The data type is an integer or a tuple of two or four integers. An integer represents the movement step size in both the height and width directions. A tuple of two integers represents the movement step size in the height and width directions respectively. Default: 1.
pad_mode (str, optional) – Specifies the padding mode with a padding value of 0. It can be set to "same", "valid" or "pad". Default: "same". (A short sketch comparing the three modes follows this parameter list.)
"same": Pad the input around its edges so that the shapes of the input and output are the same when stride is set to 1. The amount of padding is calculated by the operator internally. If the amount is even, it is uniformly distributed around the input; if it is odd, the excess amount goes to the right/bottom side. If this mode is set, padding must be 0.
"valid": No padding is applied to the input, and the output returns the maximum possible height and width. Extra pixels that cannot complete a full stride will be discarded. If this mode is set, padding must be 0.
"pad": Pad the input with a specified amount. In this mode, the amount of padding in the height and width directions is determined by the padding parameter. If this mode is set, padding must be greater than or equal to 0.
padding (Union[int, tuple[int]], optional) – The amount of padding in the height and width directions of the input. The data type is an integer or a tuple of four integers. If padding is an integer, then the top, bottom, left, and right padding are all equal to padding. If padding is a tuple of four integers, then the top, bottom, left, and right padding is equal to padding[0], padding[1], padding[2], and padding[3] respectively. The value should be greater than or equal to 0. Default: 0.
dilation (Union[int, tuple[int]], optional) – Specifies the dilation rate to use for dilated convolution. It can be a single int or a tuple of two or four integers. A single int means the dilation size is the same in both the height and width directions. A tuple of two ints represents the dilation size in the height and width directions respectively. For a tuple of four ints, the two ints corresponding to the N and C dimensions should be set to 1, and the two corresponding to the H and W dimensions are the dilation sizes in the height and width directions respectively. Assuming \(dilation=(d0, d1)\), the convolutional kernel samples the input with a spacing of \(d0-1\) elements in the height direction and \(d1-1\) elements in the width direction. The values in the height and width dimensions are in the ranges [1, H] and [1, W], respectively. Default: 1.
group (int, optional) – Splits the filter into groups; in_channels and out_channels must both be divisible by group. If group is equal to both in_channels and out_channels, this 2D convolution layer can also be called a 2D depthwise convolution layer. Default: 1.
has_bias (bool, optional) – Whether the Conv2d layer has a bias parameter. Default: False.
weight_init (Union[Tensor, str, Initializer, numbers.Number], optional) – Initialization method of the weight parameter. It can be a Tensor, a string, an Initializer or a numbers.Number. When a string is specified, values from the 'TruncatedNormal', 'Normal', 'Uniform', 'HeUniform' and 'XavierUniform' distributions as well as the constant 'One' and 'Zero' distributions are possible. The aliases 'xavier_uniform', 'he_uniform', 'ones' and 'zeros' are acceptable. Uppercase and lowercase are both acceptable. Refer to the values of Initializer for more details. Default: None, the weight will be initialized using 'HeUniform'.
bias_init (Union[Tensor, str, Initializer, numbers.Number], optional) – Initialization method of the bias parameter. Available initialization methods are the same as for weight_init. Refer to the values of Initializer for more details. Default: None, the bias will be initialized using 'Uniform'.
data_format (str, optional) – The optional value for data format. It can be 'NHWC' or 'NCHW'. 'NHWC' is only supported on GPU for now. Default: 'NCHW'.
dtype (mindspore.dtype) – Dtype of the Parameters. Default: mstype.float32.
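To make the three padding modes concrete, here is a minimal sketch comparing output shapes for the same input. The channel counts and the 32x32 input size are assumed purely for illustration; the resulting shapes follow the formulas given under Outputs below.
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> x = Tensor(np.ones([1, 3, 32, 32]), mindspore.float32)
>>> # 'same' with stride 1: output spatial size equals ceil(32 / 1) = 32
>>> print(nn.Conv2d(3, 8, 3, pad_mode='same')(x).shape)
(1, 8, 32, 32)
>>> # 'valid': output spatial size is ceil((32 - 1 * (3 - 1)) / 1) = 30
>>> print(nn.Conv2d(3, 8, 3, pad_mode='valid')(x).shape)
(1, 8, 30, 30)
>>> # 'pad' with padding=1: floor((32 + 1 + 1 - (3 - 1) * 1 - 1) / 1 + 1) = 32
>>> print(nn.Conv2d(3, 8, 3, pad_mode='pad', padding=1)(x).shape)
(1, 8, 32, 32)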
- Inputs:
x (Tensor) - Tensor of shape \((N, C_{in}, H_{in}, W_{in})\) or \((N, H_{in}, W_{in}, C_{in})\).
- Outputs:
Tensor of shape \((N, C_{out}, H_{out}, W_{out})\) or \((N, H_{out}, W_{out}, C_{out})\).
pad_mode is 'same':
\[\begin{split}\begin{array}{ll} \\ H_{out} = \left \lceil{\frac{H_{in}}{\text{stride[0]}}} \right \rceil \\ W_{out} = \left \lceil{\frac{W_{in}}{\text{stride[1]}}} \right \rceil \\ \end{array}\end{split}\]
pad_mode is 'valid':
\[\begin{split}\begin{array}{ll} \\ H_{out} = \left \lceil{\frac{H_{in} - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) }{\text{stride[0]}}} \right \rceil \\ W_{out} = \left \lceil{\frac{W_{in} - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) }{\text{stride[1]}}} \right \rceil \\ \end{array}\end{split}\]
pad_mode is 'pad':
\[\begin{split}\begin{array}{ll} \\ H_{out} = \left \lfloor{\frac{H_{in} + padding[0] + padding[1] - (\text{kernel_size[0]} - 1) \times \text{dilation[0]} - 1 }{\text{stride[0]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} + padding[2] + padding[3] - (\text{kernel_size[1]} - 1) \times \text{dilation[1]} - 1 }{\text{stride[1]}} + 1} \right \rfloor \\ \end{array}\end{split}\]
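As a worked check of the 'pad' formula above, the following hedged sketch plugs in stride=2, dilation=2 and asymmetric padding (the input size 64x48 and channel counts are assumed for illustration only):
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> # H_out = floor((64 + 1 + 1 - (3 - 1) * 2 - 1) / 2 + 1) = 31
>>> # W_out = floor((48 + 2 + 2 - (3 - 1) * 2 - 1) / 2 + 1) = 24
>>> net = nn.Conv2d(3, 8, 3, stride=2, pad_mode='pad', padding=(1, 1, 2, 2), dilation=2)
>>> x = Tensor(np.ones([1, 3, 64, 48]), mindspore.float32)
>>> print(net(x).shape)
(1, 8, 31, 24)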
- Raises
TypeError – If in_channels, out_channels or group is not an int.
TypeError – If kernel_size, stride, padding or dilation is neither an int nor a tuple.
ValueError – If in_channels, out_channels, kernel_size, stride or dilation is less than 1.
ValueError – If padding is less than 0.
ValueError – If pad_mode is not one of 'same', 'valid', 'pad'.
ValueError – If padding is a tuple whose length is not equal to 4.
ValueError – If pad_mode is not equal to 'pad' and padding is not equal to (0, 0, 0, 0).
ValueError – If data_format is neither 'NCHW' nor 'NHWC'.
- Supported Platforms:
Ascend
GPU
CPU
Examples
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> net = nn.Conv2d(120, 240, 4, has_bias=False, weight_init='normal')
>>> x = Tensor(np.ones([1, 120, 1024, 640]), mindspore.float32)
>>> output = net(x).shape
>>> print(output)
(1, 240, 1024, 640)
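A possible follow-up sketch for the group parameter (channel counts and input size assumed for illustration): when group equals both in_channels and out_channels the layer acts as a depthwise convolution, which is also the only grouped form supported on Ascend per the note above. The weight shape shown follows the kernel-shape formula \((C_{out}, C_{in} / \text{group}, \text{kernel_size[0]}, \text{kernel_size[1]})\) stated earlier.
>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> # depthwise convolution: group == in_channels == out_channels
>>> net = nn.Conv2d(16, 16, 3, group=16)
>>> x = Tensor(np.ones([1, 16, 28, 28]), mindspore.float32)
>>> print(net(x).shape)
(1, 16, 28, 28)
>>> # each kernel sees in_channels / group = 1 input channel
>>> print(net.weight.shape)
(16, 1, 3, 3)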