mindspore.ops.conv3d
- mindspore.ops.conv3d(input, weight, bias=None, stride=1, pad_mode='valid', padding=0, dilation=1, groups=1)[source]
Applies a 3D convolution over an input tensor. The input tensor is typically of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C\) is channel number, \(D\) is feature depth, \(H\) is feature height, \(W\) is feature width.
The output is calculated based on the following formula:
\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{ccor}({\text{weight}(C_{\text{out}_j}, k), \text{X}(N_i, k)})\]where \(bias\) is the output channel bias, \(ccor\) is the cross-correlation, \(weight\) is the convolution kernel value and \(X\) represents the input feature map.
Here are the indices' meanings:
- \(i\) corresponds to the batch number, ranging from 0 to \(N-1\), where \(N\) is the batch size of the input.
- \(j\) corresponds to the output channel, ranging from 0 to \(C_{out}-1\), where \(C_{out}\) is the number of output channels, which is also equal to the number of kernels.
- \(k\) corresponds to the input channel, ranging from 0 to \(C_{in}-1\), where \(C_{in}\) is the number of input channels, which is also equal to the number of channels in the convolutional kernels.
Therefore, in the above formula, \({bias}(C_{out_j})\) represents the bias of the \(j\)-th output channel, \({weight}(C_{out_j}, k)\) represents the slice of the \(j\)-th convolutional kernel in the \(k\)-th channel, and \({X}(N_i, k)\) represents the slice of the \(k\)-th input channel in the \(i\)-th batch of the input feature map.
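To make the formula concrete, here is a minimal NumPy sketch (illustrative only, not part of the MindSpore API) that evaluates one output element at the first spatial position:

import numpy as np

def conv3d_element(x, weight, bias, i, j):
    # x: (N, C_in, D, H, W); weight: (C_out, C_in, kD, kH, kW); bias: (C_out,)
    kD, kH, kW = weight.shape[2:]
    acc = bias[j]  # bias(C_out_j)
    for k in range(x.shape[1]):  # sum over input channels k = 0 .. C_in - 1
        # ccor: cross-correlation (no kernel flip) at output position (0, 0, 0)
        acc += np.sum(weight[j, k] * x[i, k, :kD, :kH, :kW])
    return acc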
The shape of the convolutional kernel is given by \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where \(\text{kernel_size[0]}\), \(\text{kernel_size[1]}\) and \(\text{kernel_size[2]}\) are the depth, height and width of the kernel, respectively. If we consider the input and output channels as well as the groups parameter, the complete kernel shape will be \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where \(\text{groups}\) is the number of groups into which the input channels are divided when applying group convolution.
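For example, with groups = 3 each group convolves \(C_{in} / \text{groups}\) input channels, so the weight's second dimension shrinks accordingly (a sketch assuming a backend that supports groups > 1; see the Note below for the Ascend restriction):

import mindspore
import numpy as np
from mindspore import Tensor, ops

x = Tensor(np.ones([2, 6, 8, 8, 8]), mindspore.float16)       # C_in = 6
weight = Tensor(np.ones([9, 2, 3, 3, 3]), mindspore.float16)  # (C_out, C_in / groups, kD, kH, kW)
out = ops.conv3d(x, weight, pad_mode="valid", groups=3)
print(out.shape)  # expected (2, 9, 6, 6, 6)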
For more details about convolution layers, please refer to Gradient-Based Learning Applied to Document Recognition.
Note
On the Ascend platform, groups = 1 must be satisfied.
On the Ascend platform, dilation on the depth dimension only supports 1.
- Parameters
input (Tensor) – Tensor of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\).
weight (Tensor) – If the kernel size is \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), then the shape is \((C_{out}, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\).
bias (Tensor, optional) – Bias Tensor with shape \((C_{out})\). When bias is None, zeros will be used. Default: None.
stride (Union[int, tuple[int]], optional) – The movement stride of the kernel. It can be an int that represents the depth, height and width of movement, or a tuple of three ints that represent depth, height and width movement respectively. Default: 1.
pad_mode (str, optional) – Specifies the padding mode. The optional values are "same", "valid" and "pad". Default: "valid".
"same": Adopts the way of completion. The depth, height and width of the output will be equal to those of the input divided by stride (rounded up). The padding will be distributed as evenly as possible across the head and tail, top and bottom, and left and right directions; if the amount is odd, the last extra padding will be applied to the tail, bottom and right side. If this mode is set, padding must be 0.
"valid": Adopts the way of discarding. The possible largest depth, height and width of the output will be returned without padding. Extra pixels will be discarded. If this mode is set, padding must be 0.
"pad": Implicit paddings on both sides of the input in depth, height and width. The number of padding will be padded to the input Tensor borders. padding must be greater than or equal to 0.
padding (Union[int, tuple[int], list[int]], optional) – The pad value to be filled. If padding is an integer, the paddings of head, tail, top, bottom, left and right are the same, equal to padding. If padding is a tuple/list of 3 integers, the paddings of head, tail, top, bottom, left and right equal padding[0], padding[0], padding[1], padding[1], padding[2] and padding[2] correspondingly. Default: 0.
dilation (Union[int, tuple[int]], optional) – Specifies the dilation rate to use for dilated convolution. The data type is int or a tuple of 3 integers \((dilation_d, dilation_h, dilation_w)\). If set to \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Currently, dilation on depth only supports 1 on the Ascend backend. The value ranges for the depth, height and width dimensions are [1, D], [1, H] and [1, W], respectively. Default: 1.
groups (int, optional) – The number of groups into which the filter is divided. in_channels and out_channels must be divisible by groups. Default: 1.
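The tuple forms compose naturally; for instance, a sketch with per-dimension stride and symmetric padding (shapes chosen for illustration):

import mindspore
import numpy as np
from mindspore import Tensor, ops

x = Tensor(np.ones([4, 6, 8, 16, 16]), mindspore.float16)
weight = Tensor(np.ones([8, 6, 3, 3, 3]), mindspore.float16)
out = ops.conv3d(x, weight, pad_mode="pad", padding=(1, 1, 1), stride=(1, 2, 2), dilation=(1, 1, 1))
print(out.shape)  # expected (4, 8, 8, 8, 8) by the "pad" formula in Returns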
- Returns
Tensor, the value that applied 3D convolution. The shape is \((N, C_{out}, D_{out}, H_{out}, W_{out})\).
pad_mode is "same":
\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lceil{\frac{D_{in}}{\text{stride[0]}}} \right \rceil \\ H_{out} = \left \lceil{\frac{H_{in}}{\text{stride[1]}}} \right \rceil \\ W_{out} = \left \lceil{\frac{W_{in}}{\text{stride[2]}}} \right \rceil \\ \end{array}\end{split}\]
pad_mode is "valid":
\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lfloor{\frac{D_{in} - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) - 1}{\text{stride[0]}} + 1} \right \rfloor \\ H_{out} = \left \lfloor{\frac{H_{in} - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) - 1}{\text{stride[1]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} - \text{dilation[2]} \times (\text{kernel_size[2]} - 1) - 1}{\text{stride[2]}} + 1} \right \rfloor \\ \end{array}\end{split}\]
pad_mode is "pad":
\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lfloor{\frac{D_{in} + 2 \times \text{padding[0]} - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) - 1}{\text{stride[0]}} + 1} \right \rfloor \\ H_{out} = \left \lfloor{\frac{H_{in} + 2 \times \text{padding[1]} - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) - 1}{\text{stride[1]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} + 2 \times \text{padding[2]} - \text{dilation[2]} \times (\text{kernel_size[2]} - 1) - 1}{\text{stride[2]}} + 1} \right \rfloor \\ \end{array}\end{split}\]
- Raises
TypeError – If out_channel or groups is not an int.
TypeError – If stride, padding or dilation is neither an int nor a tuple.
TypeError – If bias is not a Tensor.
ValueError – If the shape of bias is not \(C_{out}\).
ValueError – If stride or dilation is less than 1.
ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.
ValueError – If padding is a tuple or list whose length is not equal to 3.
ValueError – If pad_mode is not equal to ‘pad’ and padding is greater than 0.
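For instance, the last constraint can be reproduced directly (an illustrative sketch):

import mindspore
import numpy as np
from mindspore import Tensor, ops

x = Tensor(np.ones([1, 3, 5, 8, 8]), mindspore.float16)
weight = Tensor(np.ones([4, 3, 3, 3, 3]), mindspore.float16)
try:
    ops.conv3d(x, weight, pad_mode="valid", padding=1)  # nonzero padding without pad_mode="pad"
except ValueError as e:
    print("rejected:", e)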
- Supported Platforms:
Ascend
GPU
Examples
>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.ones([16, 3, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([32, 3, 4, 3, 3]), mindspore.float16)
>>> output = ops.conv3d(x, weight, pad_mode="same", padding=0, stride=1, dilation=1, groups=1)
>>> print(output.shape)
(16, 32, 10, 32, 32)
>>> output = ops.conv3d(x, weight, pad_mode="valid", padding=0, stride=1, dilation=1, groups=1)
>>> print(output.shape)
(16, 32, 7, 30, 30)
>>> output = ops.conv3d(x, weight, pad_mode="pad", padding=(2, 1, 1), stride=1, dilation=1, groups=1)
>>> print(output.shape)
(16, 32, 11, 32, 32)
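The printed shapes follow from the output-size formulas in Returns; a short pure-Python helper (an illustrative sketch, not part of MindSpore) reproduces them:

import math

def out_size(size, kernel, stride=1, dilation=1, pad=0):
    # One spatial dimension of the "pad" formula; pad is the per-side amount.
    # "valid" is the same expression with pad = 0; "same" is ceil(size / stride).
    return math.floor((size + 2 * pad - dilation * (kernel - 1) - 1) / stride + 1)

print([out_size(s, k) for s, k in zip((10, 32, 32), (4, 3, 3))])
# [7, 30, 30]
print([out_size(s, k, pad=p) for s, k, p in zip((10, 32, 32), (4, 3, 3), (2, 1, 1))])
# [11, 32, 32]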