mindspore.ops.conv3d

mindspore.ops.conv3d(inputs, weight, pad_mode='valid', padding=0, stride=1, dilation=1, group=1)[source]

Applies a 3D convolution over an input tensor. The input tensor is typically of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\) and the output of shape \((N, C_{out}, D_{out}, H_{out}, W_{out})\), where \(N\) is batch size, \(C\) is the number of channels, \(D\) is depth, \(H\) is height and \(W\) is width. The formula is defined as:

\[\operatorname{out}\left(N_{i}, C_{\text {out}_j}\right)=\operatorname{bias}\left(C_{\text {out}_j}\right)+ \sum_{k=0}^{C_{in}-1} ccor(\text {weight}\left(C_{\text {out}_j}, k\right), \operatorname{input}\left(N_{i}, k\right))\]

where \(k\) is the kernel index, \(ccor\) is the cross-correlation, \(C_{in}\) is the channel number of the input, \(out_{j}\) corresponds to the \(j\)-th channel of the output and \(j\) is in the range \([0, C_{out}-1]\). \(\text{weight}(C_{\text{out}_j}, k)\) is a convolution kernel slice with shape \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where \(\text{kernel_size[0]}\), \(\text{kernel_size[1]}\) and \(\text{kernel_size[2]}\) are the depth, height and width of the convolution kernel respectively. \(\text{bias}\) is the bias parameter and \(\text{input}\) is the input tensor. The shape of the full convolution kernel is \((C_{out}, C_{in} / \text{group}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), where \(\text{group}\) is the number of groups into which the input is split along the channel dimension.
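To make the summation above concrete, the following is a minimal NumPy sketch of this cross-correlation (naive_conv3d is a hypothetical helper, not part of MindSpore; it assumes stride 1, no padding, no dilation, group 1 and omits the bias term):

>>> import numpy as np
>>> def naive_conv3d(x, w):
...     # x: (N, C_in, D, H, W); w: (C_out, C_in, kd, kh, kw)
...     N, C_in, D, H, W = x.shape
...     C_out, _, kd, kh, kw = w.shape
...     out = np.zeros((N, C_out, D - kd + 1, H - kh + 1, W - kw + 1))
...     for n in range(N):
...         for j in range(C_out):
...             for d in range(D - kd + 1):
...                 for h in range(H - kh + 1):
...                     for v in range(W - kw + 1):
...                         # sum over input channels k of
...                         # ccor(weight(j, k), input(n, k)) at this window
...                         out[n, j, d, h, v] = np.sum(
...                             w[j] * x[n, :, d:d + kd, h:h + kh, v:v + kw])
...     return out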

For more details, please refer to the paper Gradient-Based Learning Applied to Document Recognition.

Note

On the Ascend platform, only group convolution in depthwise convolution scenarios is supported. That is, when group > 1, the condition \(C_{in} = C_{out} = \text{group}\) must be satisfied.
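A minimal sketch of the depthwise scenario described in the note (imports as in the Examples section; group = \(C_{in}\) = \(C_{out}\) = 3, so each weight slice covers \(C_{in} / \text{group} = 1\) input channel; the output shape follows the “valid” formula below):

>>> x = Tensor(np.ones([1, 3, 8, 8, 8]), mindspore.float32)
>>> weight = Tensor(np.ones([3, 1, 3, 3, 3]), mindspore.float32)
>>> print(ops.conv3d(x, weight, group=3).shape)
(1, 3, 6, 6, 6)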

Parameters
  • inputs (Tensor) – Tensor of shape \((N, C_{in}, D_{in}, H_{in}, W_{in})\).

  • weight (Tensor) – With kernel size \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), the shape is \((C_{out}, C_{in} / \text{group}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\).

  • pad_mode (str, optional) –

    Specifies the padding mode. The optional values are “same”, “valid” and “pad”. Default: “valid”. A short shape sketch after this parameter list shows how each mode affects the output shape.

    • same: Adopts the way of completion. The depth, height and width of the output will be equal to those of the input divided by the stride (rounded up). Padding is distributed as evenly as possible between head and tail, top and bottom, and left and right; otherwise, the last extra padding is applied at the tail, bottom and right side. If this mode is set, padding must be 0.

    • valid: Adopts the way of discarding. The largest possible depth, height and width of the output will be returned without padding; extra pixels are discarded. If this mode is set, padding must be 0.

    • pad: Implicit paddings on both sides of the input in depth, height and width. The amount given by padding will be padded to the input Tensor borders. padding must be greater than or equal to 0.

  • padding (Union[int, tuple[int]], optional) – The pad value to be filled. Default: 0. If padding is an integer, the paddings of head, tail, top, bottom, left and right are all equal to it. If padding is a tuple of six integers, the paddings of head, tail, top, bottom, left and right equal padding[0], padding[1], padding[2], padding[3], padding[4] and padding[5] correspondingly.

  • stride (Union[int, tuple[int]], optional) – The distance of kernel moving: an int number that represents the depth, height and width of movement all at once, or a tuple of three int numbers that represent depth, height and width of movement respectively. Default: 1.

  • dilation (Union[int, tuple[int]], optional) – The data type is int or a tuple of 3 integers \((dilation_d, dilation_h, dilation_w)\). Currently, dilation on depth only supports the case of 1 on the Ascend backend. Specifies the dilation rate to use for dilated convolution. If set to \(k > 1\), there will be \(k - 1\) pixels skipped for each sampling location. Its value must be greater than or equal to 1 and bounded by the depth, height and width of the input. Default: 1.

  • group (int, optional) – Splits filter into groups; \(C_{in}\) and \(C_{out}\) must be divisible by group. Default: 1.
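The sketch below (using only the documented signature; expected shapes follow from the formulas in the Returns section) illustrates how pad_mode, padding and dilation affect the output shape:

>>> x = Tensor(np.ones([1, 2, 8, 8, 8]), mindspore.float32)
>>> w = Tensor(np.ones([4, 2, 3, 3, 3]), mindspore.float32)
>>> print(ops.conv3d(x, w, pad_mode='valid').shape)
(1, 4, 6, 6, 6)
>>> print(ops.conv3d(x, w, pad_mode='same').shape)
(1, 4, 8, 8, 8)
>>> print(ops.conv3d(x, w, pad_mode='pad', padding=(1, 1, 1, 1, 1, 1)).shape)
(1, 4, 8, 8, 8)
>>> print(ops.conv3d(x, w, pad_mode='valid', dilation=(1, 2, 2)).shape)
(1, 4, 6, 4, 4)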

Returns

Tensor, the result of the 3D convolution. The shape is \((N, C_{out}, D_{out}, H_{out}, W_{out})\).

pad_mode is ‘same’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lceil{\frac{D_{in}}{\text{stride[0]}}} \right \rceil \\ H_{out} = \left \lceil{\frac{H_{in}}{\text{stride[1]}}} \right \rceil \\ W_{out} = \left \lceil{\frac{W_{in}}{\text{stride[2]}}} \right \rceil \\ \end{array}\end{split}\]

pad_mode is ‘valid’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lfloor{\frac{D_{in} - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) - 1 } {\text{stride[0]}} + 1} \right \rfloor \\ H_{out} = \left \lfloor{\frac{H_{in} - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) - 1 } {\text{stride[1]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} - \text{dilation[2]} \times (\text{kernel_size[2]} - 1) - 1 } {\text{stride[2]}} + 1} \right \rfloor \\ \end{array}\end{split}\]

pad_mode is ‘pad’:

\[\begin{split}\begin{array}{ll} \\ D_{out} = \left \lfloor{\frac{D_{in} + padding[0] + padding[1] - \text{dilation[0]} \times (\text{kernel_size[0]} - 1) - 1 }{\text{stride[0]}} + 1} \right \rfloor \\ H_{out} = \left \lfloor{\frac{H_{in} + padding[2] + padding[3] - \text{dilation[1]} \times (\text{kernel_size[1]} - 1) - 1 }{\text{stride[1]}} + 1} \right \rfloor \\ W_{out} = \left \lfloor{\frac{W_{in} + padding[4] + padding[5] - \text{dilation[2]} \times (\text{kernel_size[2]} - 1) - 1 }{\text{stride[2]}} + 1} \right \rfloor \\ \end{array}\end{split}\]
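For example, with \(D_{in} = 10\), \(\text{kernel_size[0]} = 4\), stride 1 and dilation 1 in “valid” mode (the configuration used in the Examples below):

\[D_{out} = \left \lfloor{\frac{10 - 1 \times (4 - 1) - 1}{1} + 1} \right \rfloor = 7\]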

Raises
  • TypeError – If out_channel or group is not an int.

  • TypeError – If stride, padding or dilation is neither an int nor a tuple.

  • ValueError – If stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 6.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is not equal to (0, 0, 0, 0, 0, 0).

Supported Platforms:

Ascend GPU CPU

Examples

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.ones([16, 3, 10, 32, 32]), mindspore.float16)
>>> weight = Tensor(np.ones([32, 3, 4, 3, 3]), mindspore.float16)
>>> output = ops.conv3d(x, weight)
>>> print(output.shape)
(16, 32, 7, 30, 30)
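
A further sketch reusing x and weight from above, with explicit padding in “pad” mode (the expected shape follows from the “pad” formula in the Returns section):

>>> output = ops.conv3d(x, weight, pad_mode='pad', padding=(1, 1, 1, 1, 1, 1))
>>> print(output.shape)
(16, 32, 9, 32, 32)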