mindspore.ops.conv1d

mindspore.ops.conv1d(input, weight, bias=None, stride=1, pad_mode='valid', padding=0, dilation=1, groups=1)[source]

Applies a 1D convolution over an input tensor. The input Tensor is typically of shape \((N, C_{in}, L_{in})\), where \(N\) is the batch size, \(C_{in}\) is the number of channels and \(L_{in}\) is the width of the input sequence.

The output is calculated based on formula:

\[\text{out}(N_i, C_{\text{out}_j}) = \text{bias}(C_{\text{out}_j}) + \sum_{k = 0}^{C_{in} - 1} \text{ccor}({\text{weight}(C_{\text{out}_j}, k), \text{X}(N_i, k)})\]

where \(bias\) is the output channel bias, \(ccor\) is the cross-correlation, \(weight\) is the convolution kernel value and \(X\) represents the input feature map.

Here are the indices’ meanings:

  • \(i\) corresponds to the batch number, ranging over \([0, N-1]\), where \(N\) is the batch size of the input.

  • \(j\) corresponds to the output channel, ranging over \([0, C_{out}-1]\), where \(C_{out}\) is the number of output channels, which is also equal to the number of kernels.

  • \(k\) corresponds to the input channel, ranging over \([0, C_{in}-1]\), where \(C_{in}\) is the number of input channels, which is also equal to the number of channels in the convolutional kernels.

Therefore, in the above formula, \({bias}(C_{\text{out}_j})\) represents the bias of the \(j\)-th output channel, \({weight}(C_{\text{out}_j}, k)\) represents the slice of the \(j\)-th convolutional kernel in the \(k\)-th channel, and \({X}(N_i, k)\) represents the slice of the \(k\)-th input channel in the \(i\)-th batch of the input feature map.

The shape of the convolutional kernel is given by \((\text{kernel_size})\), where \(\text{kernel_size}\) is the width of the kernel. Taking the input and output channels as well as the groups parameter into account, the complete kernel shape is \((C_{out}, C_{in} / \text{groups}, \text{kernel_size})\), where groups is the number of groups the input channels are divided into when applying group convolution.
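To make the formula concrete, the following sketch reproduces the cross-correlation by hand with NumPy and compares it with ops.conv1d. The shapes, random data and tolerance are illustrative assumptions, and the defaults stride=1, dilation=1, groups=1 and pad_mode="valid" are assumed.

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> N, C_in, L_in, C_out, K = 2, 3, 8, 4, 3
>>> x_np = np.random.randn(N, C_in, L_in).astype(np.float32)
>>> w_np = np.random.randn(C_out, C_in, K).astype(np.float32)
>>> b_np = np.random.randn(C_out).astype(np.float32)
>>> # Manual cross-correlation following the formula above:
>>> # manual[n, j, l] = b_np[j] + sum_{k, w} w_np[j, k, w] * x_np[n, k, l + w]
>>> L_out = L_in - K + 1
>>> windows = np.stack([x_np[:, :, l:l + K] for l in range(L_out)], axis=2)  # (N, C_in, L_out, K)
>>> manual = np.einsum('jkw,nklw->njl', w_np, windows) + b_np[None, :, None]
>>> out = ops.conv1d(Tensor(x_np), Tensor(w_np), bias=Tensor(b_np))
>>> print(np.allclose(out.asnumpy(), manual, atol=1e-4))
True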

For more details about convolution layers, please refer to Gradient Based Learning Applied to Document Recognition and ConvNets.

Note

On the Ascend platform, only group convolution in depthwise convolution scenarios is supported. That is, when groups>1, the condition \(C_{in} = C_{out} = \text{groups}\) must be satisfied.
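For example, a call that satisfies this depthwise condition could look like the following sketch; the shapes here are illustrative assumptions, not requirements of the API beyond \(C_{in} = C_{out} = \text{groups}\).

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> # Depthwise case: C_in == C_out == groups == 3, so each input channel
>>> # is convolved with its own single-channel kernel slice.
>>> x = Tensor(np.random.randn(1, 3, 10), mindspore.float32)
>>> weight = Tensor(np.random.randn(3, 1, 3), mindspore.float32)  # (C_out, C_in/groups, kernel_size)
>>> out = ops.conv1d(x, weight, groups=3)
>>> print(out.shape)
(1, 3, 8)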

Parameters
  • input (Tensor) – Input Tensor of shape \((N, C_{in}, L_{in})\).

  • weight (Tensor) – The convolutional kernel value; it should have shape \((C_{out}, C_{in} / \text{groups}, \text{kernel_size})\).

  • bias (Tensor, optional) – Bias Tensor with shape \((C_{out})\). When bias is None, zeros will be used. Default: None .

  • stride (Union(int, tuple[int]), optional) – The distance the kernel moves at each step, an int or a tuple of one int that represents the width of movement. Default: 1.

  • pad_mode (str, optional) –

    Specifies the padding mode. The optional values are "same", "valid" and "pad". Default: "valid". See the shape sketch after this parameter list for how each mode affects the output width.

    • "same": Adopts the way of completion. The height and width of the output will be equal to the input x divided by stride. The padding will be evenly calculated in left and right possiblily. Otherwise, the last extra padding will be calculated from the right side. If this mode is set, padding must be 0.

    • "valid": Adopts the way of discarding. The possible largest width of output will be returned without padding. Extra pixels will be discarded. If this mode is set, padding must be 0.

    • "pad": Implicit paddings on both sides of the input x. The number of padding will be padded to the input Tensor borders. padding must be greater than or equal to 0.

  • padding (Union(int, tuple[int], list[int]), optional) – Specifies the amount of padding to apply on both sides of the input when pad_mode is set to "pad". The left and right padding are the same, equal to padding, or to padding[0] when padding is a tuple or list of one integer. Default: 0.

  • dilation (Union(int, tuple[int]), optional) – Specifies the dilation rate to use for dilated convolution. It can be a single int or a tuple of one int. Assuming \(dilation=(d0,)\), the convolutional kernel samples the input with a gap of \(d0-1\) elements between adjacent taps in the width direction. The value should be in the range [1, L]. Default: 1.

  • groups (int, optional) – Splits the input into groups along the channel dimension; \(C_{in}\) must be divisible by groups. Default: 1.
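The sketch below, referenced in the pad_mode description above, shows how pad_mode, padding and dilation change the output width; the shapes and values are illustrative assumptions.

>>> import numpy as np
>>> import mindspore
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.random.randn(1, 2, 10), mindspore.float32)
>>> weight = Tensor(np.random.randn(4, 2, 3), mindspore.float32)  # kernel_size = 3
>>> # "valid": no padding, L_out = 10 - 3 + 1 = 8
>>> print(ops.conv1d(x, weight, pad_mode='valid').shape)
(1, 4, 8)
>>> # "same" with stride=1: the output width equals the input width
>>> print(ops.conv1d(x, weight, pad_mode='same').shape)
(1, 4, 10)
>>> # "pad" with padding=2 on each side: L_out = 10 + 2*2 - 3 + 1 = 12
>>> print(ops.conv1d(x, weight, pad_mode='pad', padding=2).shape)
(1, 4, 12)
>>> # dilation=2 with the default "valid": effective kernel span = 2*(3-1) + 1 = 5, so L_out = 10 - 5 + 1 = 6
>>> print(ops.conv1d(x, weight, dilation=2).shape)
(1, 4, 6)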

Returns

Tensor, the result of the 1D convolution, with shape \((N, C_{out}, L_{out})\). To see how different pad modes affect the output shape, please refer to mindspore.nn.Conv1d for more details.
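For reference, and assuming the same conventions as mindspore.nn.Conv1d, the output length typically follows:

\[L_{out} = \begin{cases} \left\lceil \dfrac{L_{in}}{\text{stride}} \right\rceil, & \text{pad_mode} = \text{"same"} \\ \left\lceil \dfrac{L_{in} - \text{dilation} \times (\text{kernel_size} - 1)}{\text{stride}} \right\rceil, & \text{pad_mode} = \text{"valid"} \\ \left\lfloor \dfrac{L_{in} + 2 \times \text{padding} - \text{dilation} \times (\text{kernel_size} - 1) - 1}{\text{stride}} \right\rfloor + 1, & \text{pad_mode} = \text{"pad"} \end{cases}\]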

Raises
  • TypeError – If stride, padding or dilation is neither an int nor a tuple.

  • TypeError – If groups is not an int.

  • TypeError – If bias is not a Tensor.

  • ValueError – If the shape of bias is not \((C_{out})\).

  • ValueError – If stride or dilation is less than 1.

  • ValueError – If pad_mode is not one of ‘same’, ‘valid’ or ‘pad’.

  • ValueError – If padding is a tuple whose length is not equal to 1.

  • ValueError – If pad_mode is not equal to ‘pad’ and padding is greater than 0.

Supported Platforms:

Ascend GPU

Examples

>>> import mindspore
>>> import numpy as np
>>> from mindspore import Tensor, ops
>>> x = Tensor(np.arange(64).reshape((4, 4, 4)), mindspore.float32)
>>> weight = Tensor(np.arange(8).reshape((2, 2, 2)), mindspore.float32)
>>> bias = Tensor([-0.12345, 2.7683], mindspore.float32)
>>> output = ops.conv1d(x, weight, pad_mode='pad', padding=(1,), bias=bias, groups=2)
>>> print(output.shape)
(4, 2, 5)