mindspore.ops.Conv3D

class mindspore.ops.Conv3D(out_channel, kernel_size, mode=1, pad_mode='valid', pad=0, stride=1, dilation=1, group=1, data_format='NCDHW')[source]

3D convolution layer.

Applies a 3D convolution over an input tensor which is typically of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\), where \(N\) is batch size, \(C\) is channel number, \(D\) is feature depth, \(H\) is feature height, \(W\) is feature width.

The output is calculated based on formula:

\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{ccor}({\text{weight}(C_{\text{out}_j}, k), \text{X}(N_i, k)})\]

where \(bias\) is the output channel bias, \(ccor\) is the cross-correlation, \(weight\) is the convolution kernel value and \(X\) represents the input feature map.

Here are the indices' meanings:

  • \(i\) corresponds to the batch number, ranging from 0 to N-1, where N is the batch size of the input.

  • \(j\) corresponds to the output channel, ranging from 0 to C_{out}-1, where C_{out} is the number of output channels, which is also equal to the number of kernels.

  • \(k\) corresponds to the input channel, ranging from 0 to C_{in}-1, where C_{in} is the number of input channels, which is also equal to the number of channels in the convolutional kernels.

Therefore, in the above formula, \({bias}(C_{out_j})\) represents the bias of the \(j\)-th output channel, \({weight}(C_{out_j}, k)\) represents the slice of the \(j\)-th convolutional kernel in the \(k\)-th channel, and \({X}(N_i, k)\) represents the slice of the \(k\)-th input channel in the \(i\)-th batch of the input feature map.

The shape of the convolutional kernel is given by \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\) where \(kernel\_size[0]\) , \(kernel\_size[1]\) and \(kernel\_size[2]\) are the depth, height and width of the kernel, respectively. If we consider the input and output channels as well as the group parameter, the complete kernel shape will be \((C_{out}, C_{in} / \text{group}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where group is the number of groups dividing x’s input channel when applying group convolution.
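
For instance, with out_channel = 32, group = 1, kernel_size = (4, 3, 3) and an input with \(C_{in} = 3\) (the shapes used in case 1 of the Examples below), the complete kernel shape works out to \((32, 3, 4, 3, 3)\). A minimal NumPy-only sketch of that check (no MindSpore op is invoked):

>>> import numpy as np
>>> # Complete kernel shape: (C_out, C_in / group, kernel_size[0], kernel_size[1], kernel_size[2])
>>> weight = np.ones([32, 3, 4, 3, 3])
>>> weight.shape == (32, 3 // 1, 4, 3, 3)
True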

For more details about convolution layer, please refer to Gradient Based Learning Applied to Document Recognition.

Note

  1. On the Ascend platform, group = 1 must be satisfied.

  2. On the Ascend platform, the dilation in the depth dimension only supports the value 1.

Parameters
  • out_channel (int) – Specifies output channel \(C_{out}\).

  • kernel_size (Union[int, tuple[int]]) – Specifies the depth, height and width of the 3D convolution kernel. It can be a single int or a tuple of 3 integers. A single int means the same value is used for the depth, height and width. A tuple of 3 ints means the first value is for the depth and the other two are for the height and width, respectively.

  • mode (int, optional) – Modes for different convolutions. It is currently not used. Default: 1 .

  • stride (Union[int, tuple[int]], optional) – The distance of kernel moving, it can be an int number that represents the depth, height and width of movement or a tuple of three int numbers that represent depth, height and width movement respectively. Default: 1 .

  • pad_mode (str, optional) –

    Specifies the padding mode with a padding value of 0. It can be set to: "same" , "valid" or "pad" . Default: "valid" .

    • "same": Pad the input around its depth/height/width dimension so that the shape of input and output are the same when stride is set to 1. The amount of padding is calculated by the operator internally. If the amount is even, it is uniformly distributed around the input; if it is odd, the excess amount goes to the front/right/bottom side. If this mode is set, pad must be 0.

    • "valid": No padding is applied to the input, and the output returns the maximum possible depth, height and width. Extra pixels that could not complete a full stride will be discarded. If this mode is set, pad must be 0.

    • "pad": Pad the input with a specified amount. In this mode, the amount of padding in the depth, height and width dimension is determined by the pad parameter. If this mode is set, pad must be greater than or equal to 0.

  • pad (Union(int, tuple[int]), optional) – Specifies the amount of padding to apply on the input when pad_mode is set to "pad". It can be a single int or a tuple of 6 ints. If pad is one integer, the paddings of head, tail, top, bottom, left and right are all equal to pad. If pad is a tuple with 6 integers, the paddings of head, tail, top, bottom, left and right are equal to pad[0], pad[1], pad[2], pad[3], pad[4] and pad[5] accordingly (a small shape illustration follows this parameter list). Default: 0 .

  • dilation (Union[int, tuple[int]], optional) – Specifies the dilation rate to use for dilated convolution. It can be a single int or a tuple of 3 integers. A single int means the dilation size is the same in the depth, height and width directions. A tuple of 3 ints represents the dilation size in the depth, height and width directions, respectively. Assuming \(dilation=(d0, d1, d2)\), the convolutional kernel samples the input with a spacing of \(d0-1\) elements in the depth direction, \(d1-1\) elements in the height direction, \(d2-1\) elements in the width direction respectively. The values in the depth, height and width dimensions are in the ranges [1, D], [1, H] and [1, W], respectively. Default: 1 .

  • group (int, optional) – The number of groups into which the filter is divided. The input channels \(C_{in}\) and output channels \(C_{out}\) must both be divisible by group. Default: 1 .

  • data_format (str, optional) – The optional value for data format. Currently only "NCDHW" is supported. Default: "NCDHW" .
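
As a small shape illustration of the 6-tuple pad ordering (a sketch not taken from the original example set; the expected shape is derived from the "pad" formula in the Outputs section rather than from a recorded run):

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # Asymmetric padding of (head, tail, top, bottom, left, right) = (1, 1, 2, 2, 0, 0).
>>> x = Tensor(np.ones([10, 20, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([40, 20, 3, 3, 3]), mindspore.float32)
>>> conv3d = ops.Conv3D(out_channel=40, kernel_size=3, pad_mode="pad", pad=(1, 1, 2, 2, 0, 0))
>>> output = conv3d(x, weight)
>>> # Per the "pad" formula: D_out = 32 + 1 + 1 - 2 - 1 + 1 = 32,
>>> # H_out = 32 + 2 + 2 - 2 - 1 + 1 = 34, W_out = 32 + 0 + 0 - 2 - 1 + 1 = 30,
>>> # so output.shape is expected to be (10, 40, 32, 34, 30).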

Inputs:
  • x (Tensor) - Tensor of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\). Currently, the input data type only supports float16 and float32.

  • weight (Tensor) - If the kernel size is \((k_d, k_h, k_w)\), then the shape is \((C_{out}, C_{in}/\text{group}, k_d, k_h, k_w)\). Currently, the weight data type only supports float16 and float32.

  • bias (Tensor) - Tensor of shape \((C_{out})\). When bias is None, zeros will be used. Default: None .

Outputs:

Tensor, the result of the 3D convolution. The shape is \((N, C_{out}, D_{out}, H_{out}, W_{out})\).

pad_mode is "same":

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lceil{\frac{D_{in}}{\text{stride[0]}}} \right \rceil \\ H_{out} = \left \lceil{\frac{H_{in}}{\text{stride[1]}}} \right \rceil \\ W_{out} = \left \lceil{\frac{W_{in}}{\text{stride[2]}}} \right \rceil \\ \end{array}\end{split}\]
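
For instance, applying this formula with \(D_{in} = H_{in} = W_{in} = 32\) and stride = (1, 2, 3) gives:

\[D_{out} = \left \lceil{\frac{32}{1}} \right \rceil = 32,\quad H_{out} = \left \lceil{\frac{32}{2}} \right \rceil = 16,\quad W_{out} = \left \lceil{\frac{32}{3}} \right \rceil = 11\]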

pad_mode is "valid":

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lfloor{\frac{D_{in} - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) - 1} {\text{stride[0]}} + 1} \right \rfloor \\ H_{out} = \left \lfloor{\frac{H_{in} - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) - 1} {\text{stride[1]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} - \text{dilation[2]} \times (\text{kernel_size[2]} - 1) - 1} {\text{stride[2]}} + 1} \right \rfloor \\ \end{array}\end{split}\]
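
For instance, with the shapes of case 1 in the Examples below (input \((16, 3, 10, 32, 32)\), kernel_size = (4, 3, 3), stride = 1, dilation = 1), this gives:

\[D_{out} = \left \lfloor{\frac{10 - 1 \times 3 - 1}{1} + 1} \right \rfloor = 7,\quad H_{out} = W_{out} = \left \lfloor{\frac{32 - 1 \times 2 - 1}{1} + 1} \right \rfloor = 30\]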

pad_mode is "pad":

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lfloor{\frac{D_{in} + pad[0] + pad[1] - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) - 1 }{\text{stride[0]}} + 1} \right \rfloor \\ H_{out} = \left \lfloor{\frac{H_{in} + pad[2] + pad[3] - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) - 1 }{\text{stride[1]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} + pad[4] + pad[5] - \text{dilation[2]} \times (\text{kernel_size[2]} - 1) - 1 }{\text{stride[2]}} + 1} \right \rfloor \\ \end{array}\end{split}\]
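
For instance, with the shapes of case 7 in the Examples below (input \((10, 20, 32, 32, 32)\), kernel_size = 3, stride = (1, 2, 3), pad = 2, dilation = 1), this gives:

\[D_{out} = \left \lfloor{\frac{32 + 2 + 2 - 1 \times 2 - 1}{1} + 1} \right \rfloor = 34,\quad H_{out} = \left \lfloor{\frac{33}{2} + 1} \right \rfloor = 17,\quad W_{out} = \left \lfloor{\frac{33}{3} + 1} \right \rfloor = 12\]
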
Raises
  • TypeError – If out_channel or group is not an int.

  • TypeError – If kernel_size, stride, pad or dilation is neither an int nor a tuple.

  • ValueError – If out_channel, kernel_size, stride or dilation is less than 1.

  • ValueError – If pad is less than 0.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If pad is a tuple whose length is not equal to 6.

  • ValueError – If pad_mode is not equal to ‘pad’ and pad is not equal to (0, 0, 0, 0, 0, 0).

  • ValueError – If data_format is not ‘NCDHW’.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> # case 1: specify kernel_size with tuple, all parameters use default values.
>>> x = Tensor(np.ones([16, 3, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([32, 3, 4, 3, 3]), mindspore.float16)
>>> conv3d = ops.Conv3D(out_channel=32, kernel_size=(4, 3, 3))
>>> output = conv3d(x, weight)
>>> print(output.shape)
(16, 32, 7, 30, 30)
>>> # case 2: specify kernel_size with int, all parameters use default values.
>>> x = Tensor(np.ones([10, 20, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([40, 20, 3, 3, 3]), mindspore.float32)
>>> conv3d = ops.Conv3D(out_channel=40, kernel_size=3)
>>> output = conv3d(x, weight)
>>> print(output.shape)
(10, 40, 30, 30, 30)
>>> # case 3: stride=(1, 2, 3), other parameters being default.
>>> x = Tensor(np.ones([10, 20, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([40, 20, 3, 3, 3]), mindspore.float32)
>>> conv3d = ops.Conv3D(out_channel=40, kernel_size=3, stride=(1, 2, 3))
>>> output = conv3d(x, weight)
>>> print(output.shape)
(10, 40, 30, 15, 10)
>>> # case 4: pad_mode="pad", other parameters being default.
>>> x = Tensor(np.ones([10, 20, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([40, 20, 3, 3, 3]), mindspore.float32)
>>> conv3d = ops.Conv3D(out_channel=40, kernel_size=3, pad_mode="pad", pad=2)
>>> output = conv3d(x, weight)
>>> print(output.shape)
(10, 40, 34, 34, 34)
>>> # case 5: dilation=(1, 1, 1), other parameters being default.
>>> x = Tensor(np.ones([10, 20, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([40, 20, 3, 3, 3]), mindspore.float32)
>>> conv3d = ops.Conv3D(out_channel=40, kernel_size=3, dilation=(1, 1, 1))
>>> output = conv3d(x, weight)
>>> print(output.shape)
(10, 40, 30, 30, 30)
>>> # case 6: group=1, other parameters being default.
>>> x = Tensor(np.ones([10, 20, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([40, 20, 3, 3, 3]), mindspore.float32)
>>> conv3d = ops.Conv3D(out_channel=40, kernel_size=3, group=1)
>>> output = conv3d(x, weight)
>>> print(output.shape)
(10, 40, 30, 30, 30)
>>> # case 7: All parameters are specified.
>>> x = Tensor(np.ones([10, 20, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([40, 20, 3, 3, 3]), mindspore.float32)
>>> conv3d = ops.Conv3D(out_channel=40, kernel_size=3, stride=(1, 2, 3), pad_mode="pad",
...                     pad=2, dilation=1, group=1)
>>> output = conv3d(x, weight)
>>> print(output.shape)
(10, 40, 34, 17, 12)
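
A further sketch, not part of the original example set and not verified against a recorded run: with pad_mode="same" and the default stride of 1, the "same" formula above implies that the output keeps the input's spatial shape.

>>> # case 8 (supplementary sketch): pad_mode="same", other parameters being default.
>>> x = Tensor(np.ones([10, 20, 32, 32, 32]), mindspore.float32)
>>> weight = Tensor(np.ones([40, 20, 3, 3, 3]), mindspore.float32)
>>> conv3d = ops.Conv3D(out_channel=40, kernel_size=3, pad_mode="same")
>>> output = conv3d(x, weight)
>>> # Expected per the "same" formula: (10, 40, 32, 32, 32).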