mindspore.mint.nn.functional.conv2d
- mindspore.mint.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)[source]
Applies a 2D convolution over an input tensor. The input tensor is typically of shape \((N, C_{in}, H_{in}, W_{in})\) or \((C_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C\) is channel number, \(H\) is feature height, \(W\) is feature width.
The output is calculated based on formula:
\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{ccor}({\text{weight}(C_{\text{out}_j}, k), \text{X}(N_i, k)})\]where \(bias\) is the output channel bias, \(ccor\) is the cross-correlation operator, \(weight\) is the convolution kernel value and \(X\) represents the input feature map.
\(i\) corresponds to the batch number, the range is \([0, N-1]\), where \(N\) is the batch size of the input.
\(j\) corresponds to the output channel, the range is \([0, C_{out}-1]\), where \(C_{out}\) is the number of output channels, which is also equal to the number of kernels.
\(k\) corresponds to the input channel, the range is \([0, C_{in}-1]\), where \(C_{in}\) is the number of input channels, which is also equal to the number of channels in the convolutional kernels.
Therefore, in the above formula, \({bias}(C_{out_j})\) represents the bias of the \(j\)-th output channel, \({weight}(C_{out_j}, k)\) represents the slice of the \(j\)-th convolutional kernel in the \(k\)-th channel, and \({X}(N_i, k)\) represents the slice of the \(k\)-th input channel in the \(i\)-th batch of the input feature map.
The shape of the convolutional kernel is given by \((\text{kernel_size[0]}, \text{kernel_size[1]})\), where \(\text{kernel_size[0]}\) and \(\text{kernel_size[1]}\) are the height and width of the kernel, respectively. If we consider the input and output channels as well as the groups parameter, the complete kernel shape will be \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]})\), where \(\text{groups}\) is the number of groups into which the input channels are divided when applying group convolution.
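As a minimal sketch of the complete kernel shape described above (the sizes here are assumptions chosen only for illustration), a grouped convolution with \(C_{in} = 4\), \(C_{out} = 6\), a \(3 \times 3\) kernel and groups set to 2 expects a weight of shape \((6, 4 / 2, 3, 3)\):
>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, mint
>>> x = Tensor(np.ones([1, 4, 8, 8]), mindspore.float32)
>>> weight = Tensor(np.ones([6, 2, 3, 3]), mindspore.float32)
>>> output = mint.nn.functional.conv2d(x, weight, groups=2)
>>> print(output.shape)
(1, 6, 6, 6)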
For more details about convolution layers, please refer to Gradient-Based Learning Applied to Document Recognition and ConvNets.
Warning
This is an experimental API that is subject to change or deletion.
- Parameters
input (Tensor) – Tensor of shape \((N, C_{in}, H_{in}, W_{in})\) or \((C_{in}, H_{in}, W_{in})\).
weight (Tensor) – Tensor of shape \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]})\), where the size of the kernel is \((\text{kernel_size[0]}, \text{kernel_size[1]})\).
bias (Tensor, optional) – Bias Tensor with shape \((C_{out})\). When bias is None, zeros will be used. Default: None.
stride (Union(int, tuple[int]), optional) – The stride of the kernel movement: an int that applies to both the height and width of movement, or a tuple of two ints for the height and width respectively. Default: 1.
padding (Union[int, tuple[int], str], optional) – The amount of padding applied to the height and width of the input. The value is an int, a tuple of two ints, or one of the strings "valid" and "same". If padding is an int, \(padding_{H}\) and \(padding_{W}\) are both equal to padding. If padding is a tuple of two ints, \(padding_{H}\) and \(padding_{W}\) are equal to padding[0] and padding[1] respectively. The value must be greater than or equal to 0. Default: 0. A short sketch of how each mode affects the output shape follows this parameter list.
"same": Pads the input around its edges so that the input and output have the same shape when stride is set to 1. The amount of padding is calculated internally by the operator; if the amount is even, it is distributed uniformly around the input, and if it is odd, the excess goes to the right/bottom side. If this mode is set, stride must be 1.
"valid": No padding is applied to the input, and the output has the maximum possible height and width. Extra pixels that cannot complete a full stride are discarded.
dilation (Union(int, tuple[int]), optional) – Gaps between kernel elements. The data type is int or a tuple of two ints, specifying the dilation rate to use for dilated convolution. If set to \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater than or equal to 1 and bounded by the height and width of the input. Default: 1.
groups (int, optional) – Splits input into groups. Default: 1. The following constraints must be satisfied: \(C_{in} \text{ % } \text{groups} == 0\), \(C_{out} \text{ % } \text{groups} == 0\), \(C_{out} >= \text{groups}\), and the second dimension of weight equals \(C_{in} / \text{groups}\).
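The following sketch (input and kernel sizes are illustrative assumptions) shows how the "valid" and "same" padding modes affect the output shape with a \(3 \times 3\) kernel and stride 1:
>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, mint
>>> x = Tensor(np.ones([1, 3, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([8, 3, 3, 3]), mindspore.float32)
>>> print(mint.nn.functional.conv2d(x, weight, padding="valid").shape)
(1, 8, 30, 30)
>>> print(mint.nn.functional.conv2d(x, weight, padding="same").shape)
(1, 8, 32, 32)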
- Returns
Tensor, the result of the 2D convolution. The shape is \((N, C_{out}, H_{out}, W_{out})\). To see how different padding modes affect the output shape, please refer to mindspore.mint.nn.Conv2d for more details.
- Raises
ValueError – The arguments and the size of the input feature map must satisfy the output formula so that the size of the output feature map is positive; otherwise, an error is reported. For more details on the output formula, please refer to mindspore.mint.nn.Conv2d.
RuntimeError – On Ascend, due to the limitation of the L1 cache size of different NPU chips, an error may be triggered if the input size or kernel size is too large.
TypeError – If in_channels, out_channels or groups is not an int.
TypeError – If kernel_size, stride or dilation is neither an int nor a tuple.
TypeError – If bias is not a Tensor.
ValueError – If the shape of bias is not \((C_{out})\).
ValueError – If stride or dilation is less than 1.
ValueError – If padding is "same" and stride is not equal to 1.
ValueError – If the input parameters do not satisfy the convolution output formula.
ValueError – If the kernel size exceeds the size of the input feature map.
ValueError – If the value of padding causes the calculation area to exceed the input size.
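As a quick sanity check before calling the operator, the output height and width can be estimated with the standard convolution output-size formula (documented in detail for mindspore.mint.nn.Conv2d); the snippet below is only an illustrative sketch for integer padding:
>>> import math
>>> H_in, kernel_size, stride, padding, dilation = 32, 3, 1, 0, 1
>>> H_out = math.floor((H_in + 2 * padding - dilation * (kernel_size - 1) - 1) / stride) + 1
>>> print(H_out)
30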
- Supported Platforms:
Ascend
Examples
>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, mint
>>> x = Tensor(np.ones([10, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([32, 32, 3, 3]), mindspore.float32)
>>> output = mint.nn.functional.conv2d(x, weight)
>>> print(output.shape)
(10, 32, 30, 30)
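A further illustrative sketch (all shapes here are assumptions for demonstration) combining bias, stride, integer padding and groups:
>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, mint
>>> x = Tensor(np.ones([2, 4, 16, 16]), mindspore.float32)
>>> weight = Tensor(np.ones([8, 2, 3, 3]), mindspore.float32)
>>> bias = Tensor(np.zeros([8]), mindspore.float32)
>>> output = mint.nn.functional.conv2d(x, weight, bias=bias, stride=2, padding=1, groups=2)
>>> print(output.shape)
(2, 8, 8, 8)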