mindflow.cell.PeRCNN

class mindflow.cell.PeRCNN(dim, in_channels, hidden_channels, kernel_size, dt, nu, laplace_kernel=None, conv_layers_num=3, padding='periodic', compute_dtype=ms.float32)

Physics-embedded Recurrent Convolutional Neural Network (PeRCNN) Cell. It forcibly encodes a given physics structure to facilitate learning of spatiotemporal dynamics in sparse data regimes. PeRCNN can be applied to a variety of problems involving PDE systems, including forward and inverse analysis, data-driven modeling, and the discovery of PDEs.

For more details, please refer to the paper Encoding physics to learn reaction–diffusion processes.

lazy_inline is used to accelerate the compilation stage, but it currently takes effect only on the Ascend backend. PeRCNN currently supports inputs with two physical components. For inputs with a different number of components, users must manually add or remove the corresponding parameters and pi_blocks.

Parameters
  • dim (int) – The physical dimension of the input, 2 or 3. A 2D input has a 4-dimensional shape in NCHW format; a 3D input has a 5-dimensional shape in NCDHW format.

  • in_channels (int) – The number of channels in the input space.

  • hidden_channels (int) – Number of channels in the output space of parallel convolution layers.

  • kernel_size (int) – The size of the convolution kernel used by the parallel convolution layers.

  • dt (Union[int, float]) – The time step of PeRCNN.

  • nu (Union[int, float]) – The coefficient of the diffusion term.

  • laplace_kernel (mindspore.Tensor) – The convolution kernel used to approximate the Laplace operator. For 3D, if the kernel size is \((\text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\), the Tensor has shape \((C_{out}, C_{in}, \text{kernel_size[0]}, \text{kernel_size[1]}, \text{kernel_size[2]})\). For 2D, if the kernel size is \((\text{kernel_size[0]}, \text{kernel_size[1]})\), the Tensor has shape \((N, C_{in} / \text{groups}, \text{kernel_size[0]}, \text{kernel_size[1]})\). A sketch of constructing a 2D kernel follows this parameter list. Default: None.

  • conv_layers_num (int) – Number of parallel convolution layers. Default: 3.

  • padding (str) – The boundary padding scheme; currently only periodic padding is supported. Default: periodic.

  • compute_dtype (dtype.Number) – The computation data type of PeRCNN. Should be mindspore.float16 or mindspore.float32. mindspore.float32 is recommended for the GPU backend, and mindspore.float16 is recommended for the Ascend backend. Default: mindspore.float32.
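
The following is a minimal sketch of how a 2D laplace_kernel with the shape described above might be assembled from a standard second-order 5-point finite-difference stencil. The stencil choice and the grid spacing dx_2d are illustrative assumptions, not values prescribed by the API.

>>> import numpy as np
>>> import mindspore as ms
>>> # Assumed example: second-order 5-point Laplacian stencil.
>>> stencil_2d = np.array([[0.0, 1.0, 0.0],
...                        [1.0, -4.0, 1.0],
...                        [0.0, 1.0, 0.0]])
>>> dx_2d = 100 / 48  # assumed grid spacing
>>> # Reshape to (N, C_in / groups, kernel_size[0], kernel_size[1]) as described above.
>>> laplace_2d_kernel = ms.Tensor(stencil_2d.reshape(1, 1, 3, 3) / dx_2d**2, dtype=ms.float32)
>>> print(laplace_2d_kernel.shape)
(1, 1, 3, 3)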

Inputs:
  • x (Tensor) - Tensor of shape \((batch\_size, channels, depth, height, width)\) for 3D. Tensor of shape \((batch\_size, channels, height, width)\) for 2D.

Outputs:

Tensor, has the same shape as x.

Raises
  • TypeError – If dim, in_channels, hidden_channels or kernel_size is not an int.

  • TypeError – If dt or nu is neither an int nor a float.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindflow.cell import PeRCNN
>>> laplace_3d = [[[[[0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0],
...                 [0.0, 0.0, -0.08333333333333333, 0.0, 0.0],
...                 [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0]],
...                 [[0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0],
...                 [0.0, 0.0, 1.3333333333333333, 0.0, 0.0],
...                 [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0]],
...                 [[0.0, 0.0, -0.08333333333333333, 0.0, 0.0],
...                 [0.0, 0.0, 1.3333333333333333, 0.0, 0.0],
...                 [-0.08333333333333333, 1.3333333333333333, -7.5, 1.3333333333333333, -0.08333333333333333],
...                 [0.0, 0.0, 1.3333333333333333, 0.0, 0.0],
...                 [0.0, 0.0, -0.08333333333333333, 0.0, 0.0]],
...                 [[0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0],
...                 [0.0, 0.0, 1.3333333333333333, 0.0, 0.0],
...                 [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0]],
...                 [[0.0, 0.0, 0.0, 0.0, 0.0],
...                 [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, -0.08333333333333333, 0.0, 0.0],
...                 [0.0, 0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0, 0.0]]]]]
>>> laplace = np.array(laplace_3d)
>>> grid_size = 48
>>> field = 100
>>> dx_3d = field / grid_size
>>> laplace_3d_kernel = ms.Tensor(1 / dx_3d**2 * laplace, dtype=ms.float32)
>>> rcnn_ms = PeRCNN(
...     dim=3,
...     in_channels=2,
...     hidden_channels=2,
...     kernel_size=1,
...     dt=0.5,
...     nu=0.274,
...     laplace_kernel=laplace_3d_kernel,
...     conv_layers_num=3,
...     compute_dtype=ms.float32,
...   )
>>> input = np.random.randn(1, 2, 48, 48, 48)
>>> input = ms.Tensor(input, ms.float32)
>>> output = rcnn_ms(input)
>>> print(output.shape)
(1, 2, 48, 48, 48)
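
Since PeRCNN is a recurrent cell parameterized by a time step dt, a multi-step rollout can be sketched by feeding each output back in as the next input. This is an illustrative usage pattern built on the rcnn_ms and input defined above, not a dedicated API of PeRCNN.

>>> # Assumed usage pattern: autoregressive rollout over a few time steps.
>>> state = input
>>> trajectory = []
>>> for _ in range(5):
...     state = rcnn_ms(state)
...     trajectory.append(state)
>>> print(len(trajectory), trajectory[-1].shape)
5 (1, 2, 48, 48, 48)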