{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "# 基于FNO求解二维Navier-Stokes\n",
    "\n",
    "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.2/mindflow/zh_cn/data_driven/mindspore_navier_stokes_FNO2D.ipynb) [![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/resource/_static/logo_download_code.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.2/mindflow/zh_cn/data_driven/mindspore_navier_stokes_FNO2D.py) [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.2/docs/mindflow/docs/source_zh_cn/data_driven/navier_stokes_FNO2D.ipynb)"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 概述\n",
    "\n",
    "计算流体力学是21世纪流体力学领域的重要技术之一,其通过使用数值方法在计算机中对流体力学的控制方程进行求解,从而实现流动的分析、预测和控制。传统的有限元法(finite element method,FEM)和有限差分法(finite difference method,FDM)常用于复杂的仿真流程(物理建模、网格划分、数值离散、迭代求解等)和较高的计算成本,往往效率低下。因此,借助AI提升流体仿真效率是十分必要的。\n",
    "\n",
    "近年来,随着神经网络的迅猛发展,为科学计算提供了新的范式。经典的神经网络是在有限维度的空间进行映射,只能学习与特定离散化相关的解。与经典神经网络不同,傅里叶神经算子(Fourier Neural Operator,FNO)是一种能够学习无限维函数空间映射的新型深度学习架构。该架构可直接学习从任意函数参数到解的映射,用于解决一类偏微分方程的求解问题,具有更强的泛化能力。更多信息可参考[Fourier Neural Operator for Parametric Partial Differential Equations](https://arxiv.org/abs/2010.08895)。\n",
    "\n",
    "本案例教程介绍利用傅里叶神经算子的纳维-斯托克斯方程(Navier-Stokes equation)求解方法。"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 纳维-斯托克斯方程(Navier-Stokes equation)\n",
    "\n",
    "纳维-斯托克斯方程(Navier-Stokes equation)是计算流体力学领域的经典方程,是一组描述流体动量守恒的偏微分方程,简称N-S方程。它在二维不可压缩流动中的涡度形式如下:\n",
    "\n",
    "$$\n",
    "\\partial_t w(x, t)+u(x, t) \\cdot \\nabla w(x, t)=\\nu \\Delta w(x, t)+f(x), \\quad x \\in(0,1)^2, t \\in(0, T]\n",
    "$$\n",
    "\n",
    "$$\n",
    "\\nabla \\cdot u(x, t)=0, \\quad x \\in(0,1)^2, t \\in[0, T]\n",
    "$$\n",
    "\n",
    "$$\n",
    "w(x, 0)=w_0(x), \\quad x \\in(0,1)^2\n",
    "$$\n",
    "\n",
    "其中$u$表示速度场,$w=\\nabla \\times u$表示涡度,$w_0(x)$表示初始条件,$\\nu$表示粘度系数,$f(x)$为外力合力项。"
   ]
  },
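  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration of the quantities above, the following sketch (plain NumPy, independent of the training pipeline) evaluates the scalar vorticity $w=\\partial_{x_1} u_2-\\partial_{x_2} u_1$ of a hand-picked divergence-free periodic velocity field with central finite differences; the field itself is made up for the example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Example velocity field on a periodic 64 x 64 grid (illustration only, not from the dataset)\n",
    "n = 64\n",
    "h = 1.0 / n\n",
    "x1, x2 = np.meshgrid(np.arange(n) * h, np.arange(n) * h, indexing=\"ij\")\n",
    "u1 = np.cos(2 * np.pi * x1) * np.sin(2 * np.pi * x2)\n",
    "u2 = -np.sin(2 * np.pi * x1) * np.cos(2 * np.pi * x2)\n",
    "\n",
    "# Central differences with periodic wrap-around: w = du2/dx1 - du1/dx2\n",
    "du2_dx1 = (np.roll(u2, -1, axis=0) - np.roll(u2, 1, axis=0)) / (2 * h)\n",
    "du1_dx2 = (np.roll(u1, -1, axis=1) - np.roll(u1, 1, axis=1)) / (2 * h)\n",
    "w = du2_dx1 - du1_dx2\n",
    "print(w.shape)"
   ]
  },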
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 问题描述\n",
    "\n",
    "本案例利用Fourier Neural Operator学习某一个时刻对应涡度到下一时刻涡度的映射,实现二维不可压缩N-S方程的求解:\n",
    "\n",
    "$$\n",
    "w_t \\mapsto w(\\cdot, t+1)\n",
    "$$"
   ]
  },
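  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because the learned operator advances the vorticity by one time unit, a longer trajectory is obtained by applying it repeatedly. The sketch below only illustrates this autoregressive rollout pattern with a hypothetical `one_step_model` placeholder; it is not the evaluation code used later in this notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def rollout(one_step_model, w0, num_steps):\n",
    "    \"\"\"Autoregressive rollout: repeatedly apply a one-step operator w_t -> w_{t+1}.\n",
    "    `one_step_model` is a placeholder for any callable mapping a vorticity field to the next one.\"\"\"\n",
    "    trajectory = [w0]\n",
    "    w = w0\n",
    "    for _ in range(num_steps):\n",
    "        w = one_step_model(w)\n",
    "        trajectory.append(w)\n",
    "    return np.stack(trajectory)\n",
    "\n",
    "# Toy usage with an identity \"model\", just to show the call pattern\n",
    "print(rollout(lambda w: w, np.zeros((64, 64)), num_steps=3).shape)  # (4, 64, 64)"
   ]
  },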
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 技术路径\n",
    "\n",
    "MindSpore Flow求解该问题的具体流程如下:\n",
    "\n",
    "1. 创建数据集。\n",
    "2. 构建模型。\n",
    "3. 优化器与损失函数。\n",
    "4. 模型训练。"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Fourier Neural Operator\n",
    "\n",
    "Fourier Neural Operator模型构架如下图所示。图中$w_0(x)$表示初始涡度,通过Lifting Layer实现输入向量的高维映射,然后将映射结果作为Fourier Layer的输入,进行频域信息的非线性变换,最后由Decoding Layer将变换结果映射至最终的预测结果$w_1(x)$。\n",
    "\n",
    "Lifting Layer、Fourier Layer以及Decoding Layer共同组成了Fourier Neural Operator。\n",
    "\n",
    "![Fourier Neural Operator模型构架](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/docs/mindflow/docs/source_zh_cn/data_driven/images/FNO.png)\n",
    "\n",
    "Fourier Layer网络结构如下图所示。图中V表示输入向量,上框表示向量经过傅里叶变换后,经过线性变换R,过滤高频信息,然后进行傅里叶逆变换;另一分支经过线性变换W,最后通过激活函数,得到Fourier Layer输出向量。\n",
    "\n",
    "![Fourier Layer网络结构](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/docs/mindflow/docs/source_zh_cn/data_driven/images/FNO-2.png)"
   ]
  },
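  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The core of a Fourier Layer is a spectral convolution: transform the input to the frequency domain, apply a learned linear map to the lowest `modes` frequencies while discarding the rest, and transform back. The NumPy sketch below illustrates this idea for a single-channel 2D field with random (untrained) weights; the actual implementation inside `FNO2D` differs in details such as channel mixing and weight layout."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def spectral_conv2d(v, weights, modes):\n",
    "    \"\"\"Sketch of a spectral convolution: FFT -> scale the low modes -> inverse FFT.\n",
    "    v: (n, n) real field; weights: (modes, modes) complex factors (random here, learned in FNO).\"\"\"\n",
    "    v_hat = np.fft.rfft2(v)\n",
    "    out_hat = np.zeros_like(v_hat)\n",
    "    out_hat[:modes, :modes] = weights * v_hat[:modes, :modes]    # low positive frequencies\n",
    "    out_hat[-modes:, :modes] = weights * v_hat[-modes:, :modes]  # low negative frequencies along axis 0\n",
    "    return np.fft.irfft2(out_hat, s=v.shape)\n",
    "\n",
    "n, modes = 64, 12\n",
    "rng = np.random.default_rng(0)\n",
    "v = rng.standard_normal((n, n))\n",
    "weights = rng.standard_normal((modes, modes)) + 1j * rng.standard_normal((modes, modes))\n",
    "print(spectral_conv2d(v, weights, modes).shape)  # (64, 64)"
   ]
  },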
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "import os\n",
    "import time\n",
    "import numpy as np\n",
    "\n",
    "import mindspore\n",
    "from mindspore import nn, ops, Tensor, jit, set_seed"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下述`src`包可以在[applications/data_driven/navier_stokes/fno2d/src](https://gitee.com/mindspore/mindscience/tree/r0.6/MindFlow/applications/data_driven/navier_stokes/fno2d/src)下载。\n",
    "配置文件可在[config](https://gitee.com/mindspore/mindscience/blob/r0.6/MindFlow/applications/data_driven/navier_stokes/fno2d/configs/fno2d.yaml)中修改。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from mindflow.cell import FNO2D\n",
    "from mindflow.common import get_warmup_cosine_annealing_lr\n",
    "from mindflow.loss import RelativeRMSELoss\n",
    "from mindflow.utils import load_yaml_config\n",
    "from mindflow.pde import UnsteadyFlowWithLoss\n",
    "from src import calculate_l2_error, create_training_dataset\n",
    "\n",
    "set_seed(0)\n",
    "np.random.seed(0)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "# set context for training: using graph mode for high performance training with GPU acceleration\n",
    "mindspore.set_context(mode=mindspore.GRAPH_MODE, device_target='GPU', device_id=2)\n",
    "use_ascend = mindspore.get_context(attr_key='device_target') == \"Ascend\"\n",
    "config = load_yaml_config('navier_stokes_2d.yaml')\n",
    "data_params = config[\"data\"]\n",
    "model_params = config[\"model\"]\n",
    "optimizer_params = config[\"optimizer\"]"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 创建数据集\n",
    "\n",
    "训练与测试数据下载: [data_driven/navier_stokes/dataset](https://download.mindspore.cn/mindscience/mindflow/dataset/applications/data_driven/navier_stokes/dataset/)。\n",
    "\n",
    "本案例根据Zongyi Li在 [Fourier Neural Operator for Parametric Partial Differential Equations](https://arxiv.org/pdf/2010.08895.pdf) 一文中对数据集的设置生成训练数据集与测试数据集。具体设置如下:\n",
    "\n",
    "基于周期性边界,生成满足如下分布的初始条件$w_0(x)$:\n",
    "\n",
    "$$\n",
    "w_0 \\sim \\mu, \\mu=\\mathcal{N}\\left(0,7^{3 / 2}(-\\Delta+49 I)^{-2.5}\\right)\n",
    "$$\n",
    "\n",
    "外力项设置为:\n",
    "\n",
    "$$\n",
    "f(x)=0.1\\left(\\sin \\left(2 \\pi\\left(x_1+x_2\\right)\\right)+\\right.\\cos(2 \\pi(x_1+x_2)))\n",
    "$$\n",
    "\n",
    "采用`Crank-Nicolson`方法生成数据,时间步长设置为1e-4,最终数据以每 t = 1 个时间单位记录解。所有数据均在256×256的网格上生成,并被下采样至64×64网格。本案例选取粘度系数$\\nu=1e−5$,训练集样本量为19000个,测试集样本量为3800个。"
   ]
  },
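  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The sketch below illustrates, in plain NumPy, how a sample from the Gaussian random field above and the forcing term $f(x)$ can be evaluated on a 64×64 periodic grid. It follows the standard spectral sampling recipe (per-mode variance $7^{3/2}\\left(4\\pi^2|k|^2+49\\right)^{-2.5}$); normalization constants are glossed over, and this is not the script that generated the downloaded dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def sample_initial_vorticity(n=64, tau=7.0, alpha=2.5, seed=0):\n",
    "    \"\"\"Sketch: draw w0 ~ N(0, tau^(3/2) (-Laplacian + tau^2 I)^(-alpha)) on a periodic n x n grid.\n",
    "    Spectral sampling: scale white noise by the square root of the per-mode variance, then inverse FFT.\"\"\"\n",
    "    rng = np.random.default_rng(seed)\n",
    "    k = np.fft.fftfreq(n, d=1.0 / n)  # integer wavenumbers\n",
    "    k1, k2 = np.meshgrid(k, k, indexing=\"ij\")\n",
    "    sqrt_eig = tau ** 0.75 * (4.0 * np.pi ** 2 * (k1 ** 2 + k2 ** 2) + tau ** 2) ** (-alpha / 2.0)\n",
    "    sqrt_eig[0, 0] = 0.0  # enforce a zero-mean field\n",
    "    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))\n",
    "    return np.real(np.fft.ifft2(n * sqrt_eig * noise))  # overall scaling is approximate\n",
    "\n",
    "n = 64\n",
    "grid = np.arange(n) / n\n",
    "x1, x2 = np.meshgrid(grid, grid, indexing=\"ij\")\n",
    "f = 0.1 * (np.sin(2 * np.pi * (x1 + x2)) + np.cos(2 * np.pi * (x1 + x2)))  # forcing term\n",
    "w0 = sample_initial_vorticity(n)\n",
    "print(w0.shape, f.shape)"
   ]
  },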
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Data preparation finished\n"
     ]
    }
   ],
   "source": [
    "train_dataset = create_training_dataset(data_params, input_resolution=model_params[\"input_resolution\"], shuffle=True)\n",
    "test_input = np.load(os.path.join(data_params[\"path\"], \"test/inputs.npy\"))\n",
    "test_label = np.load(os.path.join(data_params[\"path\"], \"test/label.npy\"))"
   ]
  },
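  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check on the loaded test arrays (the exact shapes depend on the downloaded dataset and on the settings in the configuration file):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(\"test inputs:\", test_input.shape)\n",
    "print(\"test labels:\", test_label.shape)"
   ]
  },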
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 构建模型\n",
    "\n",
    "网络由1层Lifting layer、多层Fourier Layer以及1层Decoding layer叠加组成:\n",
    "\n",
    "- Lifting layer对应样例代码中`FNO2D.fc0`,将输出数据$x$映射至高维;\n",
    "\n",
    "- 多层Fourier Layer的叠加对应样例代码中`FNO2D.fno_seq`,本案例采用离散傅里叶变换实现时域与频域的转换;\n",
    "\n",
    "- Decoding layer对应代码中`FNO2D.fc1`与`FNO2D.fc2`,获得最终的预测值。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "model = FNO2D(in_channels=model_params[\"in_channels\"],\n",
    "              out_channels=model_params[\"out_channels\"],\n",
    "              resolution=model_params[\"input_resolution\"],\n",
    "              modes=model_params[\"modes\"],\n",
    "              channels=model_params[\"width\"],\n",
    "              depths=model_params[\"depth\"]\n",
    "              )\n",
    "\n",
    "model_params_list = []\n",
    "for k, v in model_params.items():\n",
    "    model_params_list.append(f\"{k}-{v}\")\n",
    "model_name = \"_\".join(model_params_list)"
   ]
  },
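  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check, the untrained model can be run on random data. The sketch below assumes the NHWC layout used by `UnsteadyFlowWithLoss` later in this notebook and an `input_resolution` of 64; adjust the shape to match your configuration."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Forward pass on random data to confirm the input/output shapes (assumes NHWC layout and resolution 64)\n",
    "check_input = Tensor(np.random.rand(1, 64, 64, model_params[\"in_channels\"]).astype(np.float32))\n",
    "print(model(check_input).shape)"
   ]
  },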
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 优化器与损失函数\n",
    "\n",
    "使用相对均方根误差作为网络训练损失函数:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "steps_per_epoch = train_dataset.get_dataset_size()\n",
    "lr = get_warmup_cosine_annealing_lr(lr_init=optimizer_params[\"initial_lr\"],\n",
    "                                    last_epoch=optimizer_params[\"train_epochs\"],\n",
    "                                    steps_per_epoch=steps_per_epoch,\n",
    "                                    warmup_epochs=optimizer_params[\"warmup_epochs\"])\n",
    "\n",
    "optimizer = nn.Adam(model.trainable_params(), learning_rate=Tensor(lr))\n",
    "\n",
    "problem = UnsteadyFlowWithLoss(model, loss_fn=RelativeRMSELoss(), data_format=\"NHWC\")"
   ]
  },
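  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the NumPy sketch below evaluates the relative L2 error that we assume `RelativeRMSELoss` and the evaluation metric are based on: the per-sample error norm divided by the label norm, averaged over the batch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def relative_rmse(prediction, label):\n",
    "    \"\"\"Sketch of a relative RMSE: per-sample L2 error divided by the label L2 norm, averaged over the batch.\"\"\"\n",
    "    batch = prediction.shape[0]\n",
    "    diff = (prediction - label).reshape(batch, -1)\n",
    "    ref = label.reshape(batch, -1)\n",
    "    return np.mean(np.linalg.norm(diff, axis=1) / np.linalg.norm(ref, axis=1))\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "label_sample = rng.standard_normal((4, 64, 64))\n",
    "print(relative_rmse(label_sample + 0.1 * rng.standard_normal(label_sample.shape), label_sample))"
   ]
  },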
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 模型训练\n",
    "\n",
    "使用**MindSpore >= 2.0.0**的版本,可以使用函数式编程范式训练神经网络。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "def train():\n",
    "    if use_ascend:\n",
    "        from mindspore.amp import DynamicLossScaler, auto_mixed_precision, all_finite\n",
    "        loss_scaler = DynamicLossScaler(1024, 2, 100)\n",
    "        auto_mixed_precision(model, 'O3')\n",
    "\n",
    "    def forward_fn(train_inputs, train_label):\n",
    "        loss = problem.get_loss(train_inputs, train_label)\n",
    "        if use_ascend:\n",
    "            loss = loss_scaler.scale(loss)\n",
    "        return loss\n",
    "\n",
    "    grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=False)\n",
    "\n",
    "    @jit\n",
    "    def train_step(train_inputs, train_label):\n",
    "        loss, grads = grad_fn(train_inputs, train_label)\n",
    "        if use_ascend:\n",
    "            loss = loss_scaler.unscale(loss)\n",
    "            if all_finite(grads):\n",
    "                grads = loss_scaler.unscale(grads)\n",
    "                loss = ops.depend(loss, optimizer(grads))\n",
    "        else:\n",
    "            loss = ops.depend(loss, optimizer(grads))\n",
    "        return loss\n",
    "\n",
    "    sink_process = mindspore.data_sink(train_step, train_dataset, sink_size=1)\n",
    "    summary_dir = os.path.join(config[\"summary_dir\"], model_name)\n",
    "\n",
    "    for cur_epoch in range(optimizer_params[\"train_epochs\"]):\n",
    "        local_time_beg = time.time()\n",
    "        model.set_train()\n",
    "        cur_loss = 0.0\n",
    "        for _ in range(steps_per_epoch):\n",
    "            cur_loss = sink_process()\n",
    "\n",
    "        print(\"epoch: %s, loss is %s\" % (cur_epoch + 1, cur_loss), flush=True)\n",
    "        local_time_end = time.time()\n",
    "        epoch_seconds = (local_time_end - local_time_beg) * 1000\n",
    "        step_seconds = epoch_seconds / steps_per_epoch\n",
    "        print(\"Train epoch time: {:5.3f} ms, per step time: {:5.3f} ms\".format\n",
    "              (epoch_seconds, step_seconds), flush=True)\n",
    "\n",
    "        if (cur_epoch + 1) % config[\"save_checkpoint_epoches\"] == 0:\n",
    "            ckpt_dir = os.path.join(summary_dir, \"ckpt\")\n",
    "            if not os.path.exists(ckpt_dir):\n",
    "                os.makedirs(ckpt_dir)\n",
    "            mindspore.save_checkpoint(model, os.path.join(ckpt_dir, model_params[\"name\"]))\n",
    "\n",
    "        if (cur_epoch + 1) % config['eval_interval'] == 0:\n",
    "            calculate_l2_error(model, test_input, test_label, config[\"test_batch_size\"])\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "epoch: 1, loss is 1.7631323\n",
      "Train epoch time: 50405.954 ms, per step time: 50.406 ms\n",
      "epoch: 2, loss is 1.9283392\n",
      "Train epoch time: 36591.429 ms, per step time: 36.591 ms\n",
      "epoch: 3, loss is 1.4265916\n",
      "Train epoch time: 35085.079 ms, per step time: 35.085 ms\n",
      "epoch: 4, loss is 1.8609437\n",
      "Train epoch time: 34407.280 ms, per step time: 34.407 ms\n",
      "epoch: 5, loss is 1.5222052\n",
      "Train epoch time: 34596.965 ms, per step time: 34.597 ms\n",
      "epoch: 6, loss is 1.3424721\n",
      "Train epoch time: 33847.209 ms, per step time: 33.847 ms\n",
      "epoch: 7, loss is 1.607729\n",
      "Train epoch time: 33106.981 ms, per step time: 33.107 ms\n",
      "epoch: 8, loss is 1.3308442\n",
      "Train epoch time: 33051.339 ms, per step time: 33.051 ms\n",
      "epoch: 9, loss is 1.3169765\n",
      "Train epoch time: 33901.816 ms, per step time: 33.902 ms\n",
      "epoch: 10, loss is 1.4149593\n",
      "Train epoch time: 33908.748 ms, per step time: 33.909 ms\n",
      "================================Start Evaluation================================\n",
      "mean rel_rmse_error: 0.15500953359901906\n",
      "=================================End Evaluation=================================\n",
      "...\n",
      "epoch: 141, loss is 0.777328\n",
      "Train epoch time: 32549.911 ms, per step time: 32.550 ms\n",
      "epoch: 142, loss is 0.7008966\n",
      "Train epoch time: 32522.572 ms, per step time: 32.523 ms\n",
      "epoch: 143, loss is 0.72377646\n",
      "Train epoch time: 32566.685 ms, per step time: 32.567 ms\n",
      "epoch: 144, loss is 0.72175145\n",
      "Train epoch time: 32435.932 ms, per step time: 32.436 ms\n",
      "epoch: 145, loss is 0.6235678\n",
      "Train epoch time: 32463.707 ms, per step time: 32.464 ms\n",
      "epoch: 146, loss is 0.9351083\n",
      "Train epoch time: 32448.413 ms, per step time: 32.448 ms\n",
      "epoch: 147, loss is 0.9283789\n",
      "Train epoch time: 32472.401 ms, per step time: 32.472 ms\n",
      "epoch: 148, loss is 0.7655642\n",
      "Train epoch time: 32604.642 ms, per step time: 32.605 ms\n",
      "epoch: 149, loss is 0.7233772\n",
      "Train epoch time: 32649.832 ms, per step time: 32.650 ms\n",
      "epoch: 150, loss is 0.86825275\n",
      "Train epoch time: 32589.243 ms, per step time: 32.589 ms\n",
      "================================Start Evaluation================================\n",
      "mean rel_rmse_error: 0.07437102290522307\n",
      "=================================End Evaluation=================================\n",
      "predict total time: 15.212349653244019 s\n"
     ]
    }
   ],
   "source": [
    "train()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3.9.0 ('py39')",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.0"
  },
  "vscode": {
   "interpreter": {
    "hash": "57ace93c29d9374277a79956c3f1b916d7d9a05468d906842f9921d0d494a29f"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}