mindearth.cell.AFNONet

class mindearth.cell.AFNONet(image_size=(128, 256), in_channels=1, out_channels=1, patch_size=8, encoder_depths=12, encoder_embed_dim=768, mlp_ratio=4, dropout_rate=1.0, compute_dtype=mindspore.float32)

The AFNO model is a deep learning model based on the Adaptive Fourier Neural Operator (AFNO) and the Vision Transformer architecture. The details can be found in Adaptive Fourier Neural Operators: Efficient Token Mixers For Transformers.
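
For background, the snippet below is a conceptual NumPy sketch of the AFNO token-mixing idea from the paper: a 2-D FFT over the token grid, a small per-mode complex MLP, soft-thresholding of the spectrum, and an inverse FFT. The function name, weight shapes, and sparsity value are illustrative only and do not mirror the mindearth implementation.

>>> import numpy as np
>>> def afno_mixing_sketch(tokens, w1, b1, w2, b2, sparsity=0.01):
...     # Conceptual sketch only: mix all tokens at once in the Fourier domain,
...     # apply a per-mode complex channel MLP, sparsify the spectrum, invert.
...     spec = np.fft.rfft2(tokens, axes=(0, 1), norm="ortho")   # (H, W//2+1, C), complex
...     h = spec @ w1 + b1                                       # first complex linear layer
...     h = np.maximum(h.real, 0) + 1j * np.maximum(h.imag, 0)   # ReLU on real/imag parts
...     out = h @ w2 + b2                                        # second complex linear layer
...     mag = np.abs(out)
...     # Spectral soft-thresholding encourages sparse frequency mixing.
...     out = np.where(mag > sparsity, out * (1 - sparsity / np.maximum(mag, 1e-12)), 0)
...     mixed = np.fft.irfft2(out, s=tokens.shape[:2], axes=(0, 1), norm="ortho")
...     return tokens + mixed                                    # residual connection
>>> rng = np.random.default_rng(0)
>>> tokens = rng.standard_normal((16, 32, 8))
>>> w1 = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
>>> w2 = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
>>> b1 = np.zeros(8, dtype=complex)
>>> b2 = np.zeros(8, dtype=complex)
>>> print(afno_mixing_sketch(tokens, w1, b1, w2, b2).shape)
(16, 32, 8)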

Parameters
  • image_size (tuple[int]) – The size of the input image. Default: (128, 256).

  • in_channels (int) – The number of channels in the input space. Default: 1.

  • out_channels (int) – The number of channels in the output space. Default: 1.

  • patch_size (int) – The patch size of the image. Default: 8.

  • encoder_depths (int) – The depth of the encoder layer. Default: 12.

  • encoder_embed_dim (int) – The embedding dimension of the encoder layer. Default: 768.

  • mlp_ratio (int) – The ratio of the MLP hidden dimension to the embedding dimension. Default: 4.

  • dropout_rate (float) – The rate of the dropout layer. Default: 1.0.

  • compute_dtype (dtype) – The data type for the encoder, decoding_embedding, decoder and dense layers. Default: mindspore.float32.
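
For orientation, the snippet below sketches how these parameters might be passed for a smaller, non-default configuration; the values are arbitrary illustrations, not recommended settings (see the Examples section for the standard setup).

>>> from mindspore import dtype
>>> from mindearth.cell import AFNONet
>>> # Illustrative hyperparameter values only.
>>> small_net = AFNONet(image_size=(64, 128), in_channels=4, out_channels=4,
...                     patch_size=4, encoder_depths=6, encoder_embed_dim=384,
...                     mlp_ratio=4, compute_dtype=dtype.float32)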

Inputs:
  • x (Tensor) - Tensor of shape \((batch\_size, feature\_size, image\_height, image\_width)\).

Outputs:
  • output (Tensor) - Tensor of shape \((batch\_size, patch\_size, embed\_dim)\), where \(patch\_size = (image\_height * image\_width) / (patch\_size * patch\_size)\).

Supported Platforms:

Ascend GPU

Examples

>>> from mindspore import dtype
>>> from mindspore.common.initializer import initializer, Normal
>>> from mindearth.cell import AFNONet
>>> B, C, H, W = 16, 20, 128, 256
>>> input_ = initializer(Normal(), [B, C, H, W])
>>> net = AFNONet(image_size=(H, W), in_channels=C, out_channels=C, compute_dtype=dtype.float32)
>>> output = net(input_)
>>> print(output.shape)
(16, 128, 5120)
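
Beyond a single forward pass, the network is used like any other mindspore.nn.Cell and can be plugged into a standard MindSpore training step. The sketch below assumes a regression target (label) with the same shape as the output above; it is illustrative only and not taken from the MindEarth training pipelines.

>>> import mindspore.nn as nn
>>> # Hypothetical target with the same shape as the output above, for illustration.
>>> label = initializer(Normal(), output.shape)
>>> loss_fn = nn.MSELoss()
>>> optimizer = nn.Adam(net.trainable_params(), learning_rate=1e-4)
>>> loss_net = nn.WithLossCell(net, loss_fn)
>>> train_step = nn.TrainOneStepCell(loss_net, optimizer)
>>> loss = train_step(input_, label)  # one optimization step; returns the scalar loss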