mindflow.cell.PeRCNN
- class mindflow.cell.PeRCNN(dim, in_channels, hidden_channels, kernel_size, dt, nu, laplace_kernel=None, conv_layers_num=3, padding='periodic', compute_dtype=ms.float32)[source]
Recurrent convolutional neural network Cell. lazy_inline is used to accelerate the compile stage, but it currently takes effect only on Ascend backends. PeRCNN currently supports inputs with two physical components. For inputs with a different shape, users must manually add or remove the corresponding parameters and pi_blocks.
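As a rough sketch of the recurrence (inferred from the parameters below and from the PeRCNN formulation; the exact implementation may arrange the terms differently), one step advances the field \(u_k\) by a forward-Euler update \(u_{k+1} = u_k + dt \cdot \left( nu \cdot \nabla^2 u_k + \pi(u_k) \right)\), where \(\nabla^2 u_k\) is evaluated with laplace_kernel and \(\pi(u_k)\) denotes a pi_block, i.e. the element-wise product of the outputs of the parallel convolution layers.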
- Parameters
dim (int) – The physical dimension of the input. A 2D input has a shape of length 4 and follows the NCHW format; a 3D input has a shape of length 5 and follows the NCDHW format.
kernel_size (int) – Size of the convolution kernel of the parallel convolution layers.
in_channels (int) – The number of channels in the input space.
hidden_channels (int) – Number of channels in the output space of the parallel convolution layers.
dt (float) – Time step used by the recurrent update.
nu (float) – Coefficient of the diffusion term.
padding (str) – Boundary padding. Currently only periodic padding is supported. Default: periodic.
laplace_kernel (mindspore.Tensor) – Discrete Laplace convolution kernel. For 3D, the size of the kernel is \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), and the shape of the tensor is \((C_{out}, C_{in}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\). For 2D, the tensor has shape \((N, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]})\), and the size of the kernel is \((\text{kernel_size[0]}, \text{kernel_size[1]})\). Default: None. A construction sketch is shown after this parameter list.
conv_layers_num (int) – Number of parallel convolution layers. Default: 3.
compute_dtype (dtype.Number) – The data type of PeRCNN. Default: mindspore.float32. Should be mindspore.float16 or mindspore.float32. mindspore.float32 is recommended for GPU backends, and mindspore.float16 is recommended for Ascend backends.
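For illustration, a minimal sketch of building a laplace_kernel tensor that follows the 2D shape convention above. The 5-point stencil values and the grid spacing dx are assumptions chosen for this sketch, not values prescribed by the API:

>>> import numpy as np
>>> import mindspore as ms
>>> # assumed 5-point finite-difference Laplacian stencil, scaled by 1 / dx**2
>>> stencil = np.array([[0.0, 1.0, 0.0],
...                     [1.0, -4.0, 1.0],
...                     [0.0, 1.0, 0.0]])
>>> dx = 100 / 48
>>> laplace_2d_kernel = ms.Tensor(stencil[None, None] / dx**2, dtype=ms.float32)
>>> print(laplace_2d_kernel.shape)
(1, 1, 3, 3)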
- Inputs:
- input (Tensor) - Tensor of shape \((batch\_size, channels, depth, height, width)\) for 3D.
Tensor of shape \((batch\_size, channels, height, width)\) for 2D.
- Outputs:
Tensor, has the same shape as input.
- Supported Platforms:
Ascend GPU
Examples
>>> import numpy as np
>>> import mindspore as ms
>>> from mindflow.cell.neural_operators.percnn import PeRCNN
>>> laplace_3d = [[[[[0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0],
...                  [0.0, 0.0, -0.08333333333333333, 0.0, 0.0],
...                  [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0]],
...                 [[0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0],
...                  [0.0, 0.0, 1.3333333333333333, 0.0, 0.0],
...                  [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0]],
...                 [[0.0, 0.0, -0.08333333333333333, 0.0, 0.0],
...                  [0.0, 0.0, 1.3333333333333333, 0.0, 0.0],
...                  [-0.08333333333333333, 1.3333333333333333, -7.5, 1.3333333333333333, -0.08333333333333333],
...                  [0.0, 0.0, 1.3333333333333333, 0.0, 0.0],
...                  [0.0, 0.0, -0.08333333333333333, 0.0, 0.0]],
...                 [[0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0],
...                  [0.0, 0.0, 1.3333333333333333, 0.0, 0.0],
...                  [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0]],
...                 [[0.0, 0.0, 0.0, 0.0, 0.0],
...                  [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, -0.08333333333333333, 0.0, 0.0],
...                  [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0]]]]]
>>> laplace = np.array(laplace_3d)
>>> grid_size = 48
>>> field = 100
>>> dx_3d = field / grid_size
>>> laplace_3d_kernel = ms.Tensor(1 / dx_3d**2 * laplace, dtype=ms.float32)
>>> rcnn_ms = PeRCNN(
...     dim=3,
...     in_channels=2,
...     hidden_channels=2,
...     kernel_size=1,
...     dt=0.5,
...     nu=0.274,
...     laplace_kernel=laplace_3d_kernel,
...     conv_layers_num=3,
...     compute_dtype=ms.float32,
... )
>>> input = np.random.randn(1, 2, 48, 48, 48)
>>> input = ms.Tensor(input, ms.float32)
>>> output = rcnn_ms(input)
>>> print(output.shape)
(1, 2, 48, 48, 48)
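Since the output has the same shape as the input, the cell can be applied repeatedly to roll the field forward in time. A minimal sketch reusing rcnn_ms and input from the example above; the number of steps here is arbitrary:

>>> u = input
>>> for _ in range(10):
...     u = rcnn_ms(u)
>>> print(u.shape)
(1, 2, 48, 48, 48)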