{
    "cells": [
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "# 基于Fourier Neural Operator的Navier-Stokes equation求解\n",
                "\n",
                "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.2/mindflow/zh_cn/data_driven/mindspore_navier_stokes_FNO3D.ipynb) [![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/resource/_static/logo_download_code.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.2/mindflow/zh_cn/data_driven/mindspore_navier_stokes_FNO3D.py) [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.2/docs/mindflow/docs/source_zh_cn/data_driven/navier_stokes_FNO3D.ipynb)\n",
                "\n"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "## 概述\n",
                "\n",
                "计算流体力学是21世纪流体力学领域的重要技术之一,其通过使用数值方法在计算机中对流体力学的控制方程进行求解,从而实现流动的分析、预测和控制。传统的有限元法(finite element method,FEM)和有限差分法(finite difference method,FDM)常用于复杂的仿真流程(物理建模、网格划分、数值离散、迭代求解等)和较高的计算成本,往往效率低下。因此,借助AI提升流体仿真效率是十分必要的。\n",
                "\n",
                "近年来,随着神经网络的迅猛发展,为科学计算提供了新的范式。经典的神经网络是在有限维度的空间进行映射,只能学习与特定离散化相关的解。与经典神经网络不同,傅里叶神经算子(Fourier Neural Operator,FNO)是一种能够学习无限维函数空间映射的新型深度学习架构。该架构可直接学习从任意函数参数到解的映射,用于解决一类偏微分方程的求解问题,具有更强的泛化能力。更多信息可参考[Fourier Neural Operator for Parametric Partial Differential Equations](https://arxiv.org/abs/2010.08895)。\n",
                "\n",
                "本案例教程介绍利用三维傅里叶神经算子的纳维-斯托克斯方程(Navier-Stokes equation)求解方法。\n"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "## 纳维-斯托克斯方程(Navier-Stokes equation)\n",
                "\n",
                "纳维-斯托克斯方程(Navier-Stokes equation)是计算流体力学领域的经典方程,是一组描述流体动量守恒的偏微分方程,简称N-S方程。它在二维不可压缩流动中的涡度形式如下:\n",
                "\n",
                "$$ \\partial_t w(x, t)+u(x, t) \\cdot \\nabla w(x, t)=\\nu \\Delta w(x, t)+f(x), \\quad x \\in(0,1)^2, t \\in(0, T]. $$\n",
                "\n",
                "$$ \\nabla \\cdot u(x, t)=0, \\quad x \\in(0,1)^2, t \\in[0, T]. $$\n",
                "\n",
                "$$ w(x, 0)=w_0(x), \\quad x \\in(0,1)^2. $$\n",
                "\n",
                "其中$u$表示速度场,$w=\\nabla \\times u$表示涡度,$w_0(x)$表示初始条件,$\\nu$表示粘度系数,$f(x)$为外力合力项。\n"
            ]
        },
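        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "To make the relation $w=\\nabla \\times u$ concrete, the following is a minimal NumPy sketch (illustrative only, not part of the original case) that computes the vorticity of a periodic 2D velocity field on $(0,1)^2$ by spectral differentiation; the velocity field used here is just an example.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "import numpy as np\n",
                "\n",
                "# Illustrative sketch: compute w = d(u_y)/dx - d(u_x)/dy for a periodic field.\n",
                "n = 64\n",
                "x = np.linspace(0.0, 1.0, n, endpoint=False)\n",
                "xx, yy = np.meshgrid(x, x, indexing=\"ij\")\n",
                "\n",
                "# A divergence-free example velocity field (Taylor-Green-like vortex).\n",
                "u_x = np.sin(2 * np.pi * xx) * np.cos(2 * np.pi * yy)\n",
                "u_y = -np.cos(2 * np.pi * xx) * np.sin(2 * np.pi * yy)\n",
                "\n",
                "k = 2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)  # angular wavenumbers on the unit square\n",
                "kx, ky = k[:, None], k[None, :]\n",
                "\n",
                "# Differentiate in Fourier space: d/dx <-> i*kx, d/dy <-> i*ky.\n",
                "duy_dx = np.real(np.fft.ifft2(1j * kx * np.fft.fft2(u_y)))\n",
                "dux_dy = np.real(np.fft.ifft2(1j * ky * np.fft.fft2(u_x)))\n",
                "w = duy_dx - dux_dy\n",
                "print(\"vorticity field shape:\", w.shape)\n"
            ]
        },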
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 问题描述\n",
                "\n",
                "本案例利用Fourier Neural Operator学习某一个时刻对应涡度到下一时刻涡度的映射,实现二维不可压缩N-S方程的求解:\n",
                "\n",
                "$$ w_t \\mapsto w(\\cdot, t+1). $$\n"
            ]
        },
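        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "As a concrete illustration of this one-step mapping, the sketch below slices input/target pairs from a vorticity trajectory array. This is a hedged example only: the placeholder array `w_traj` is random data, and the actual dataset is built by `create_training_dataset` from the `src` package.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "import numpy as np\n",
                "\n",
                "# Hedged illustration: slice one-step (w_t -> w_{t+1}) training pairs from a\n",
                "# vorticity trajectory of shape (n_samples, height, width, n_timesteps).\n",
                "w_traj = np.random.rand(8, 64, 64, 20).astype(np.float32)  # placeholder data\n",
                "\n",
                "inputs = w_traj[..., :-1]   # w at times t = 0, ..., T-2\n",
                "targets = w_traj[..., 1:]   # w at times t = 1, ..., T-1\n",
                "\n",
                "print(\"inputs shape:\", inputs.shape)    # (8, 64, 64, 19)\n",
                "print(\"targets shape:\", targets.shape)  # (8, 64, 64, 19)\n"
            ]
        },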
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 技术路径\n",
                "\n",
                "MindSpore Flow求解该问题的具体流程如下:\n",
                "\n",
                "1. 创建数据集。\n",
                "2. 构建模型。\n",
                "3. 优化器与损失函数。\n",
                "4. 模型训练。\n"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## Fourier Neural Operator\n",
                "\n",
                "Fourier Neural Operator模型构架如下图所示。图中$w_0(x)$表示初始涡度,通过Lifting Layer实现输入向量的高维映射,然后将映射结果作为Fourier Layer的输入,进行频域信息的非线性变换,最后由Decoding Layer将变换结果映射至最终的预测结果$w_1(x)$。\n",
                "\n",
                "Lifting Layer、Fourier Layer以及Decoding Layer共同组成了Fourier Neural Operator。\n",
                "\n",
                "![Fourier Neural Operator模型构架](images/FNO.png)\n",
                "\n",
                "Fourier Layer网络结构如下图所示。图中V表示输入向量,上框表示向量经过傅里叶变换后,经过线性变换R,过滤高频信息,然后进行傅里叶逆变换;另一分支经过线性变换W,最后通过激活函数,得到Fourier Layer输出向量。\n",
                "\n",
                "![Fourier Layer网络结构](images/FNO-2.png)\n"
            ]
        },
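        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "The sketch below implements a single Fourier layer in NumPy, in two spatial dimensions for simplicity: the upper branch truncates the Fourier modes and multiplies them by a spectral weight $R$, the lower branch applies a pointwise linear transform $W$, and the sum is passed through an activation. It is a hedged illustration only and does not reproduce the `FNO3D` implementation in MindSpore Flow.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "import numpy as np\n",
                "\n",
                "def fourier_layer_2d(v, r_weight, w_weight, n_modes=12):\n",
                "    \"\"\"Hedged sketch of one 2D Fourier layer; v has shape (height, width, channels).\"\"\"\n",
                "    height, width, _ = v.shape\n",
                "\n",
                "    # Upper branch: FFT over the spatial dims, keep only the lowest n_modes\n",
                "    # frequencies, multiply by the spectral weight R, then inverse FFT.\n",
                "    v_hat = np.fft.rfft2(v, axes=(0, 1))\n",
                "    out_hat = np.zeros_like(v_hat)\n",
                "    out_hat[:n_modes, :n_modes] = np.einsum(\n",
                "        \"xyc,cd->xyd\", v_hat[:n_modes, :n_modes], r_weight)\n",
                "    out_hat[-n_modes:, :n_modes] = np.einsum(\n",
                "        \"xyc,cd->xyd\", v_hat[-n_modes:, :n_modes], r_weight)\n",
                "    spectral = np.fft.irfft2(out_hat, s=(height, width), axes=(0, 1))\n",
                "\n",
                "    # Lower branch: pointwise linear transform W applied at every grid point.\n",
                "    bypass = v @ w_weight\n",
                "\n",
                "    # Sum the branches and apply an activation (ReLU here, for simplicity).\n",
                "    return np.maximum(spectral + bypass, 0.0)\n",
                "\n",
                "channels = 20\n",
                "rng = np.random.default_rng(0)\n",
                "v = rng.standard_normal((64, 64, channels))\n",
                "r = (rng.standard_normal((channels, channels)) / channels).astype(np.complex128)\n",
                "w = rng.standard_normal((channels, channels)) / channels\n",
                "print(fourier_layer_2d(v, r, w).shape)  # (64, 64, 20)\n"
            ]
        },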
        {
            "cell_type": "code",
            "execution_count": 1,
            "metadata": {
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [],
            "source": [
                "import os\n",
                "import time\n",
                "import numpy as np\n",
                "\n",
                "from mindspore import nn, ops, jit, data_sink, save_checkpoint, context, Tensor, ops\n",
                "from mindspore.common import set_seed\n",
                "from mindspore import dtype as mstype\n"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "下述`src`包可以在[applications/data_driven/navier_stokes/fno3d/src](https://gitee.com/mindspore/mindscience/tree/r2.2/MindFlow/applications/data_driven/navier_stokes/fno3d/src)下载。\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 2,
            "metadata": {
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [],
            "source": [
                "from mindflow import get_warmup_cosine_annealing_lr, load_yaml_config\n",
                "from mindflow.cell.neural_operators.fno import FNO3D\n",
                "\n",
                "from src import LpLoss, UnitGaussianNormalizer, create_training_dataset\n",
                "\n",
                "set_seed(0)\n",
                "np.random.seed(0)\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 3,
            "metadata": {
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [],
            "source": [
                "# set context for training: using graph mode for high performance training with GPU acceleration\n",
                "context.set_context(mode=context.GRAPH_MODE, device_target='GPU', device_id=0)\n",
                "use_ascend = context.get_context(attr_key='device_target') == \"Ascend\"\n",
                "config = load_yaml_config('./configs/fno3d.yaml')\n",
                "data_params = config[\"data\"]\n",
                "model_params = config[\"model\"]\n",
                "optimizer_params = config[\"optimizer\"]\n",
                "\n",
                "sub = model_params[\"sub\"]\n",
                "grid_size = model_params[\"input_resolution\"] // sub\n",
                "input_timestep = model_params[\"input_timestep\"]\n",
                "output_timestep = model_params[\"output_timestep\"]\n"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "## 创建数据集\n",
                "\n",
                "训练与测试数据下载: [data_driven/navier_stokes_3d_fno/dataset](https://download.mindspore.cn/mindscience/mindflow/dataset/applications/data_driven/navier_stokes/dataset/) .\n",
                "\n",
                "本案例根据Zongyi Li在 [Fourier Neural Operator for Parametric Partial Differential Equations](https://arxiv.org/pdf/2010.08895.pdf) 一文中对数据集的设置生成训练数据集与测试数据集。具体设置如下:\n",
                "\n",
                "基于周期性边界,生成满足如下分布的初始条件$w_0(x)$:\n",
                "\n",
                "$$ w_0 \\sim \\mu, \\mu=\\mathcal{N}\\left(0,7^{3 / 2}(-\\Delta+49 I)^{-2.5}\\right). $$\n",
                "\n",
                "外力项设置为:\n",
                "\n",
                "$$ f(x)=0.1\\left(\\sin \\left(2 \\pi\\left(x_1+x_2\\right)\\right)+\\right.\\cos(2 \\pi(x_1+x_2))). $$\n",
                "\n",
                "采用`Crank-Nicolson`方法生成数据,时间步长设置为1e-4,最终数据以每 t = 1 个时间单位记录解。所有数据均在256×256的网格上生成,并被下采样至64×64网格。本案例选取粘度系数$\\nu=1e−3$,训练集样本量为1000个,测试集样本量为200个。\n"
            ]
        },
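        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "To build intuition for the initial-condition distribution above, the following is a hedged NumPy sketch that samples a periodic Gaussian random field with covariance operator $7^{3/2}(-\\Delta+49 I)^{-2.5}$ by scaling white noise with the Fourier eigenvalues of that operator. It is an illustrative approximation only; the published dataset is generated by the reference implementation accompanying the FNO paper.\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "import numpy as np\n",
                "\n",
                "def sample_initial_vorticity(n=64, alpha=2.5, tau=7.0, seed=0):\n",
                "    \"\"\"Hedged sketch: sample w0 ~ N(0, tau^(3/2) * (-Laplacian + tau^2 I)^(-alpha))\n",
                "    on the periodic unit square, approximating the distribution stated above.\"\"\"\n",
                "    rng = np.random.default_rng(seed)\n",
                "    k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers\n",
                "    kx, ky = np.meshgrid(k, k, indexing=\"ij\")\n",
                "\n",
                "    # Eigenvalues of (-Laplacian + tau^2 I) for Fourier mode (kx, ky).\n",
                "    eig = 4.0 * np.pi ** 2 * (kx ** 2 + ky ** 2) + tau ** 2\n",
                "    sqrt_cov = tau ** 0.75 * eig ** (-alpha / 2.0)  # sqrt of the covariance spectrum\n",
                "    sqrt_cov[0, 0] = 0.0  # drop the mean mode (a common convention)\n",
                "\n",
                "    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))\n",
                "    return np.real(np.fft.ifft2(n * sqrt_cov * noise))\n",
                "\n",
                "w0 = sample_initial_vorticity()\n",
                "print(\"w0 shape:\", w0.shape, \"std:\", float(w0.std()))\n"
            ]
        },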
        {
            "cell_type": "code",
            "execution_count": 4,
            "metadata": {
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [
                {
                    "name": "stdout",
                    "output_type": "stream",
                    "text": [
                        "Data preparation finished\n"
                    ]
                }
            ],
            "source": [
                "train_a = Tensor(np.load(os.path.join(\n",
                "    data_params[\"path\"], \"train_a.npy\")), mstype.float32)\n",
                "train_u = Tensor(np.load(os.path.join(\n",
                "    data_params[\"path\"], \"train_u.npy\")), mstype.float32)\n",
                "test_a = Tensor(np.load(os.path.join(\n",
                "    data_params[\"path\"], \"test_a.npy\")), mstype.float32)\n",
                "test_u = Tensor(np.load(os.path.join(\n",
                "    data_params[\"path\"], \"test_u.npy\")), mstype.float32)\n",
                "train_loader = create_training_dataset(data_params,\n",
                "                                       shuffle=True)\n"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "## 构建模型\n",
                "\n",
                "网络由1层Lifting layer、多层Fourier Layer以及1层Decoding layer叠加组成:\n",
                "\n",
                "- Lifting layer对应样例代码中`FNO3D.fc0`,将输出数据$x$映射至高维;\n",
                "\n",
                "- 多层Fourier Layer的叠加对应样例代码中`FNO3D.fno_seq`,本案例采用离散傅里叶变换实现时域与频域的转换;\n",
                "\n",
                "- Decoding layer对应代码中`FNO3D.fc1`与`FNO3D.fc2`,获得最终的预测值。\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 5,
            "metadata": {
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [],
            "source": [
                "if use_ascend:\n",
                "    compute_type = mstype.float16\n",
                "else:\n",
                "    compute_type = mstype.float32\n",
                "# prepare model\n",
                "model = FNO3D(in_channels=model_params[\"in_channels\"],\n",
                "              out_channels=model_params[\"out_channels\"],\n",
                "              n_modes=model_params[\"modes\"],\n",
                "              resolutions=[model_params[\"input_resolution\"],\n",
                "                           model_params[\"input_resolution\"], output_timestep],\n",
                "              hidden_channels=model_params[\"width\"],\n",
                "              n_layers=model_params[\"depth\"],\n",
                "              projection_channels=4*model_params[\"width\"],\n",
                "              fno_compute_dtype=compute_type\n",
                "              )\n",
                "\n",
                "model_params_list = []\n",
                "for k, v in model_params.items():\n",
                "    model_params_list.append(f\"{k}-{v}\")\n",
                "model_name = \"_\".join(model_params_list)\n"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "## 优化器与损失函数\n",
                "\n",
                "使用相对均方根误差作为网络训练损失函数:\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 6,
            "metadata": {
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [],
            "source": [
                "\n",
                "lr = get_warmup_cosine_annealing_lr(lr_init=optimizer_params[\"initial_lr\"],\n",
                "                                    last_epoch=optimizer_params[\"train_epochs\"],\n",
                "                                    steps_per_epoch=train_loader.get_dataset_size(),\n",
                "                                    warmup_epochs=optimizer_params[\"warmup_epochs\"])\n",
                "optimizer = nn.optim.Adam(model.trainable_params(),\n",
                "                          learning_rate=Tensor(lr), weight_decay=optimizer_params['weight_decay'])\n",
                "loss_fn = LpLoss()\n",
                "a_normalizer = UnitGaussianNormalizer(train_a)\n",
                "y_normalizer = UnitGaussianNormalizer(train_u)\n"
            ]
        },
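        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "For reference, the relative L2 error that a loss such as `LpLoss` typically measures can be sketched as follows. This is a hedged NumPy illustration of the metric, not the implementation of `LpLoss` from the `src` package:\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": null,
            "metadata": {},
            "outputs": [],
            "source": [
                "import numpy as np\n",
                "\n",
                "def relative_l2_error(prediction, label):\n",
                "    \"\"\"Hedged sketch of a batch-wise relative L2 error: for each sample,\n",
                "    ||prediction - label||_2 / ||label||_2, averaged over the batch.\"\"\"\n",
                "    batch_size = prediction.shape[0]\n",
                "    diff = (prediction - label).reshape(batch_size, -1)\n",
                "    ref = label.reshape(batch_size, -1)\n",
                "    rel = np.linalg.norm(diff, axis=1) / np.linalg.norm(ref, axis=1)\n",
                "    return rel.mean()\n",
                "\n",
                "pred = np.random.rand(4, 64, 64, 10).astype(np.float32)\n",
                "true = np.random.rand(4, 64, 64, 10).astype(np.float32)\n",
                "print(\"relative L2 error:\", relative_l2_error(pred, true))\n"
            ]
        },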
        {
            "cell_type": "code",
            "execution_count": 1,
            "metadata": {},
            "outputs": [],
            "source": [
                "def calculate_l2_error(model, inputs, labels):\n",
                "    \"\"\"\n",
                "    Evaluate the model respect to input data and label.\n",
                "    Args:\n",
                "        model (Cell): list of expressions node can by identified by mindspore.\n",
                "        inputs (Tensor): the input data of network.\n",
                "        labels (Tensor): the true output value of given inputs.\n",
                "    \"\"\"\n",
                "    print(\"================================Start Evaluation================================\")\n",
                "    time_beg = time.time()\n",
                "    rms_error = 0.0\n",
                "    for i in range(labels.shape[0]):\n",
                "        label = labels[i:i + 1]\n",
                "        test_batch = inputs[i:i + 1]\n",
                "        test_batch = a_normalizer.encode(test_batch)\n",
                "        label = y_normalizer.encode(label)\n",
                "        test_batch = test_batch.reshape(\n",
                "            1, grid_size, grid_size, 1, input_timestep).repeat(output_timestep, axis=3)\n",
                "        prediction = model(test_batch).reshape(\n",
                "            1, grid_size, grid_size, output_timestep)\n",
                "        prediction = y_normalizer.decode(prediction)\n",
                "        label = y_normalizer.decode(label)\n",
                "        rms_error_step = loss_fn(prediction.reshape(\n",
                "            1, -1), label.reshape(1, -1))\n",
                "        rms_error += rms_error_step\n",
                "\n",
                "    rms_error = rms_error / labels.shape[0]\n",
                "    print(\"mean rms_error:\", rms_error)\n",
                "    print(\"predict total time: {} s\".format(time.time() - time_beg))\n",
                "    print(\"=================================End Evaluation=================================\")\n"
            ]
        },
        {
            "cell_type": "markdown",
            "metadata": {},
            "source": [
                "## 训练函数\n",
                "\n",
                "使用MindSpore>= 2.0.0的版本,可以使用函数式编程范式训练神经网络,单步训练函数使用jit装饰。数据下沉函数data_sink,传入单步训练函数和训练数据集。\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 2,
            "metadata": {},
            "outputs": [],
            "source": [
                "def forward_fn(data, label):\n",
                "    bs = data.shape[0]\n",
                "    data = a_normalizer.encode(data)\n",
                "    label = y_normalizer.encode(label)\n",
                "    data = data.reshape(bs, grid_size, grid_size, 1, input_timestep).repeat(\n",
                "        output_timestep, axis=3)\n",
                "    logits = model(data).reshape(bs, grid_size, grid_size, output_timestep)\n",
                "    logits = y_normalizer.decode(logits)\n",
                "    label = y_normalizer.decode(label)\n",
                "    loss = loss_fn(logits.reshape(bs, -1), label.reshape(bs, -1))\n",
                "    return loss\n",
                "\n",
                "\n",
                "grad_fn = ops.value_and_grad(\n",
                "    forward_fn, None, optimizer.parameters, has_aux=False)\n",
                "\n",
                "\n",
                "@jit\n",
                "def train_step(data, label):\n",
                "    loss, grads = grad_fn(data, label)\n",
                "    loss = ops.depend(loss, optimizer(grads))\n",
                "    return loss\n",
                "\n",
                "\n",
                "sink_process = data_sink(train_step, train_loader, sink_size=100)\n"
            ]
        },
        {
            "attachments": {},
            "cell_type": "markdown",
            "metadata": {
                "pycharm": {
                    "name": "#%% md\n"
                }
            },
            "source": [
                "## 模型训练\n",
                "\n",
                "使用**MindSpore >= 2.0.0**的版本,可以使用函数式编程范式训练神经网络。\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 7,
            "metadata": {
                "pycharm": {
                    "name": "#%%\n"
                }
            },
            "outputs": [],
            "source": [
                "def train():\n",
                "    summary_dir = os.path.join(config[\"summary_dir\"], model_name)\n",
                "    ckpt_dir = os.path.join(summary_dir, \"ckpt\")\n",
                "    if not os.path.exists(ckpt_dir):\n",
                "        os.makedirs(ckpt_dir)\n",
                "    model.set_train()\n",
                "    for step in range(1, 1 + optimizer_params[\"train_epochs\"]):\n",
                "        local_time_beg = time.time()\n",
                "        cur_loss = sink_process()\n",
                "        print(\n",
                "            f\"epoch: {step} train loss: {cur_loss} epoch time: {time.time() - local_time_beg:.2f}s\")\n",
                "        if step % 10 == 0:\n",
                "            print(f\"loss: {cur_loss.asnumpy():>7f}\")\n",
                "            print(\"step: {}, time elapsed: {}ms\".format(\n",
                "                step, (time.time() - local_time_beg) * 1000))\n",
                "            calculate_l2_error(model, test_a, test_u)\n",
                "            save_checkpoint(model, os.path.join(\n",
                "                ckpt_dir, model_params[\"name\"]))\n"
            ]
        },
        {
            "cell_type": "code",
            "execution_count": 3,
            "metadata": {},
            "outputs": [
                {
                    "name": "stdout",
                    "output_type": "stream",
                    "text": [
                        "pid: 1993\n",
                        "2023-02-01 12:14:12.2323\n",
                        "use_ascend: False\n",
                        "device_id: 2\n",
                        "Data preparation finished\n",
                        "steps_per_epoch:  1000\n",
                        "epoch: 1 train loss: 1.7631323 epoch time: 50.41s\n",
                        "epoch: 2 train loss: 1.9283392 epoch time: 36.59s\n",
                        "epoch: 3 train loss: 1.4265916 epoch time: 35.09s\n",
                        "epoch: 4 train loss: 1.8609437 epoch time: 34.41s\n",
                        "epoch: 5 train loss: 1.5222052 epoch time: 34.60s\n",
                        "epoch: 6 train loss: 1.3424721 epoch time: 33.85s\n",
                        "epoch: 7 train loss: 1.607729 epoch time: 33.11s\n",
                        "epoch: 8 train loss: 1.3308442 epoch time: 33.05s\n",
                        "epoch: 9 train loss: 1.3169765 epoch time: 33.90s\n",
                        "epoch: 10 train loss: 1.4149593 epoch time: 33.91s\n",
                        "...\n",
                        "predict total time: 15.179609298706055 s\n",
                        "epoch: 141 train loss: 0.777328 epoch time: 32.55s\n",
                        "epoch: 142 train loss: 0.7008966 epoch time: 32.52s\n",
                        "epoch: 143 train loss: 0.72377646 epoch time: 32.57s\n",
                        "epoch: 144 train loss: 0.72175145 epoch time: 32.44s\n",
                        "epoch: 145 train loss: 0.6235678 epoch time: 32.46s\n",
                        "epoch: 146 train loss: 0.9351083 epoch time: 32.45s\n",
                        "epoch: 147 train loss: 0.9283789 epoch time: 32.47s\n",
                        "epoch: 148 train loss: 0.7655642 epoch time: 32.60s\n",
                        "epoch: 149 train loss: 0.7233772 epoch time: 32.65s\n",
                        "epoch: 150 train loss: 0.86825275 epoch time: 32.59s\n",
                        "================================Start Evaluation================================\n",
                        "mean rel_rmse_error: 0.07437102290522307\n",
                        "=================================End Evaluation=================================\n",
                        "predict total time: 15.212349653244019 s\n"
                    ]
                }
            ],
            "source": [
                "train()\n"
            ]
        }
    ],
    "metadata": {
        "kernelspec": {
            "display_name": "Python 3",
            "language": "python",
            "name": "python3"
        },
        "language_info": {
            "codemirror_mode": {
                "name": "ipython",
                "version": 3
            },
            "file_extension": ".py",
            "mimetype": "text/x-python",
            "name": "python",
            "nbconvert_exporter": "python",
            "pygments_lexer": "ipython3",
            "version": "3.10.6"
        },
        "vscode": {
            "interpreter": {
                "hash": "916dbcbb3f70747c44a77c7bcd40155683ae19c65e1c03b4aa3499c5328201f1"
            }
        }
    },
    "nbformat": 4,
    "nbformat_minor": 1
}