{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 雷诺平均Navier-Stokes方程求解周期山流动\n", "\n", "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/mindflow/zh_cn/physics_driven/mindspore_periodic_hill.ipynb) [![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_download_code.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/master/mindflow/zh_cn/physics_driven/mindspore_periodic_hill.py) [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/master/docs/mindflow/docs/source_zh_cn/physics_driven/periodic_hill.ipynb)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 概述\n", "\n", "雷诺平均Navier-Stokes方程求解周期山流动问题是流体力学和气象学领域中的一个经典数值模拟案例,用于研究空气或流体在周期性山地地形上的流动行为。这个问题旨在探究山地地形对大气或流体运动的影响,从而深入理解气象现象、地形效应以及流体在复杂地形下的特性。本项目采取雷诺平均模型模拟湍流在二维周期山地地形上的流动。\n", "\n", "### 雷诺平均模型\n", "\n", "雷诺平均模型(Reynolds-Averaged Navier-Stokes equations,RANS)是流体力学领域中常用的数值模拟方法之一,用于研究流体在不同雷诺数条件下的平均行为。该模型以法国科学家雷诺(Osborne Reynolds)的名字命名,通过对流场变量进行时间平均操作,提供了一种处理湍流问题的工程实用途径。雷诺平均模型基于雷诺分解原理,将流场变量分解为平均分量与脉动分量。通过对雷诺方程进行时间平均操作,消除了非定常的脉动项,得到了宏观流动的时间平均方程。以二维情况下的雷诺平均动量方程和连续方程为例:\n", "\n", "#### 雷诺平均动量方程\n", "\n", "$$\\rho \\bar{u}_j \\frac{\\partial \\bar{u}_i}{\\partial x_j}=\\rho \\bar{f}_i+\\frac{\\partial}{\\partial x_j}\\left[-\\bar{p} \\delta_{i j}+\\mu\\left(\\frac{\\partial \\bar{u}_i}{\\partial x_j}+\\frac{\\partial \\bar{u}_j}{\\partial x_i}\\right)-\\rho \\overline{u_i^{\\prime} u_j^{\\prime}}\\right].$$\n", "\n", "#### 连续方程\n", "\n", "$$\\frac{\\partial \\overline{u}}{\\partial x} + \\frac{\\partial \\overline{v}}{\\partial y} = 0.$$\n", "\n", "其中,$\\overline{u}$ 和 $\\overline{v}$ 分别代表 x 和 y 方向的速度分量的时间平均值,$\\overline{p}$ 为压力的时间平均值,$\\rho$ 是流体密度,$\\nu$ 是运动粘性系数,$u$ 和 $v$ 分别为 x 和 y 方向的速度分量。\n", "\n", "### 求解模型介绍\n", "\n", "RANS-PINNs方法的核心思想是将物理方程和神经网络相结合,以获得同时具备传统RANS模型准确性和神经网络灵活性的模拟结果。在该方法中,基于RANS的平均流动方程与涉及湍流的各向同性涡粘性模型相结合,形成了精确的基准解。然后,通过物理信息神经网络,将剩余的湍流脉动部分进行建模,进一步提高了模拟精度。\n", "\n", "RANS-PINNs模型结构图如下\n", "\n", "![rans_pinns_structure.png](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindflow/docs/source_zh_cn/physics_driven/images/rans_pinns_structure.png)\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 准备工作\n", "\n", "导入训练所需函数库,其中[src文件夹](https://gitee.com/mindspore/mindscience/tree/master/MindFlow/applications/physics_driven/navier_stokes/periodic_hill/src)包括数据集处理函数、网络模型和损失值计算函数。\n", "\n", "训练默认采用MindSpore框架的静态图模式(GRAPH),在GPU(默认)或Ascend进行训练(单卡)。" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import os\n", "import time\n", "\n", "import numpy as np\n", "\n", "import mindspore\n", "from mindspore import context, nn, ops, jit, set_seed, load_checkpoint, load_param_into_net, data_sink\n", "from mindspore.amp import all_finite\n", "from mindflow.cell import FCSequential\n", "from mindflow.utils import load_yaml_config\n", "\n", "from src import create_train_dataset, create_test_dataset, calculate_l2_error, NavierStokesRANS\n", "from eval import predict\n", "\n", "set_seed(0)\n", "np.random.seed(0)\n", "\n", "context.set_context(mode=context.PYNATIVE_MODE,\n", " device_target=\"GPU\")\n", "use_ascend = context.get_context(attr_key='device_target') == 
\"Ascend\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 读取参数\n", "\n", "从[rans.yaml](https://gitee.com/mindspore/mindscience/blob/master/MindFlow/applications/physics_driven/navier_stokes/periodic_hill/configs/rans.yaml)文件里导入相应的数据集、模型和优化器的参数配置。" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# load configurations\n", "config = load_yaml_config('./configs/rans.yaml')\n", "data_params = config[\"data\"]\n", "model_params = config[\"model\"]\n", "optim_params = config[\"optimizer\"]\n", "summary_params = config[\"summary\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 数据集构建\n", "\n", "来源:二维圆柱绕流数值仿真流场数据,由北京航空航天大学航空科学与工程学院于剑副教授团队提供。\n", "\n", "数据说明:\n", "数据格式为numpy的npy,维度为:[300,700, 10]。其中前两个维度分别为流场的长和宽,最后维度为包含(x, y, u, v, p, uu, uv, vv, rho, nu)共计10个变量。其中,x, y, u, v, p分别为流场的x坐标、y坐标、x方向速度、y方向速度、压力;uu, uv, vv雷诺平均统计量;rho为流体密度,nu为运动粘性系数。\n", "\n", "数据集下载链接:\n", "[periodic_hill.npy](https://download.mindspore.cn/mindscience/mindflow/dataset/periodic_hill_2d/)" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "# create training dataset\n", "dataset = create_train_dataset(data_params[\"data_path\"], data_params[\"batch_size\"])\n", "# create test dataset\n", "inputs, label = create_test_dataset(data_params[\"data_path\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 模型初始化\n", "\n", "根据rans.yaml中的配置,初始化RANS-PINNs模型。使用MSELoss损失函数和Adam优化器。" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "momentum_x: u(x, y)*Derivative(u(x, y), x) + v(x, y)*Derivative(u(x, y), y) + 1.0*Derivative(p(x, y), x) - 0.000178571426658891*Derivative(u(x, y), (x, 2)) - 0.000178571426658891*Derivative(u(x, y), (y, 2)) + Derivative(uu(x, y), x) + Derivative(uv(x, y), y)\n", " Item numbers of current derivative formula nodes: 7\n", "momentum_y: u(x, y)*Derivative(v(x, y), x) + v(x, y)*Derivative(v(x, y), y) + 1.0*Derivative(p(x, y), y) + Derivative(uv(x, y), x) - 0.000178571426658891*Derivative(v(x, y), (x, 2)) - 0.000178571426658891*Derivative(v(x, y), (y, 2)) + Derivative(vv(x, y), y)\n", " Item numbers of current derivative formula nodes: 7\n", "continuty: Derivative(u(x, y), x) + Derivative(v(x, y), y)\n", " Item numbers of current derivative formula nodes: 2\n", "bc_u: u(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_v: v(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_p: p(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_uu: uu(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_uv: uv(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_vv: vv(x, y)\n", " Item numbers of current derivative formula nodes: 1\n" ] } ], "source": [ "model = FCSequential(in_channels=model_params[\"in_channels\"],\n", " out_channels=model_params[\"out_channels\"],\n", " layers=model_params[\"layers\"],\n", " neurons=model_params[\"neurons\"],\n", " residual=model_params[\"residual\"],\n", " act='tanh')\n", "\n", "if summary_params[\"load_ckpt\"]:\n", " param_dict = load_checkpoint(summary_params[\"load_ckpt_path\"])\n", " load_param_into_net(model, param_dict)\n", "if not os.path.exists(os.path.abspath(summary_params['ckpt_path'])):\n", " os.makedirs(os.path.abspath(summary_params['ckpt_path']))\n", "\n", "params = model.trainable_params()\n", "optimizer = nn.Adam(params, optim_params[\"initial_lr\"], 
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Model Initialization\n", "\n", "Initialize the RANS-PINNs model according to the configuration in rans.yaml. The MSE loss function and the Adam optimizer are used." ] },
{ "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "momentum_x: u(x, y)*Derivative(u(x, y), x) + v(x, y)*Derivative(u(x, y), y) + 1.0*Derivative(p(x, y), x) - 0.000178571426658891*Derivative(u(x, y), (x, 2)) - 0.000178571426658891*Derivative(u(x, y), (y, 2)) + Derivative(uu(x, y), x) + Derivative(uv(x, y), y)\n", " Item numbers of current derivative formula nodes: 7\n", "momentum_y: u(x, y)*Derivative(v(x, y), x) + v(x, y)*Derivative(v(x, y), y) + 1.0*Derivative(p(x, y), y) + Derivative(uv(x, y), x) - 0.000178571426658891*Derivative(v(x, y), (x, 2)) - 0.000178571426658891*Derivative(v(x, y), (y, 2)) + Derivative(vv(x, y), y)\n", " Item numbers of current derivative formula nodes: 7\n", "continuty: Derivative(u(x, y), x) + Derivative(v(x, y), y)\n", " Item numbers of current derivative formula nodes: 2\n", "bc_u: u(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_v: v(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_p: p(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_uu: uu(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_uv: uv(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_vv: vv(x, y)\n", " Item numbers of current derivative formula nodes: 1\n" ] } ], "source": [ "model = FCSequential(in_channels=model_params[\"in_channels\"],\n", "                     out_channels=model_params[\"out_channels\"],\n", "                     layers=model_params[\"layers\"],\n", "                     neurons=model_params[\"neurons\"],\n", "                     residual=model_params[\"residual\"],\n", "                     act='tanh')\n", "\n", "if summary_params[\"load_ckpt\"]:\n", "    param_dict = load_checkpoint(summary_params[\"load_ckpt_path\"])\n", "    load_param_into_net(model, param_dict)\n", "if not os.path.exists(os.path.abspath(summary_params['ckpt_path'])):\n", "    os.makedirs(os.path.abspath(summary_params['ckpt_path']))\n", "\n", "params = model.trainable_params()\n", "optimizer = nn.Adam(params, optim_params[\"initial_lr\"], weight_decay=optim_params[\"weight_decay\"])\n", "problem = NavierStokesRANS(model)\n", "\n", "if use_ascend:\n", "    from mindspore.amp import DynamicLossScaler, auto_mixed_precision\n", "    loss_scaler = DynamicLossScaler(1024, 2, 100)\n", "    auto_mixed_precision(model, 'O3')" ] },
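{ "cell_type": "markdown", "metadata": {}, "source": [ "The momentum and continuity expressions printed above are the symbolic PDE residuals reported when the `NavierStokesRANS` problem is constructed in the cell above. As a rough illustration of where such expressions come from, the SymPy sketch below rebuilds the x-momentum residual by hand; the viscosity coefficient is simply copied from the printed output, and the authoritative definitions live in the [src folder](https://gitee.com/mindspore/mindscience/tree/master/MindFlow/applications/physics_driven/navier_stokes/periodic_hill/src)." ] },
{ "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "# Illustration only: rebuild the x-momentum residual printed above in plain SymPy.\n", "import sympy as sp\n", "\n", "x, y = sp.symbols(\"x y\")\n", "u, v, p, uu, uv = (sp.Function(name)(x, y) for name in (\"u\", \"v\", \"p\", \"uu\", \"uv\"))\n", "nu = 0.000178571426658891  # viscosity coefficient copied from the printed residuals\n", "\n", "momentum_x = (u * sp.Derivative(u, x)\n", "              + v * sp.Derivative(u, y)\n", "              + 1.0 * sp.Derivative(p, x)\n", "              - nu * sp.Derivative(u, (x, 2))\n", "              - nu * sp.Derivative(u, (y, 2))\n", "              + sp.Derivative(uu, x)\n", "              + sp.Derivative(uv, y))\n", "print(\"momentum_x:\", momentum_x)" ] },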
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Model Training\n", "\n", "With MindSpore >= 2.0.0, the neural network can be trained using the functional programming paradigm." ] },
{ "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "epoch: 1 train loss: 0.033210676 epoch time: 21279.999ms\n", "epoch: 2 train loss: 0.019967956 epoch time: 11001.454ms\n", "epoch: 3 train loss: 0.015202466 epoch time: 11049.534ms\n", "epoch: 4 train loss: 0.009431531 epoch time: 10979.578ms\n", "epoch: 5 train loss: 0.009564591 epoch time: 11857.952ms\n", " predict total time: 361.42492294311523 ms\n", " l2_error, U: 0.3499122378307982 , V: 1.089610520680924 , P: 1.0590148771220198\n", " l2_error, uu: 0.6619816139038208 , uv: 0.9806737880811025 , vv: 1.223253942721496 , Total: 0.3788639206858165\n", "==================================================================================================\n", "epoch: 6 train loss: 0.0080219805 epoch time: 10980.343ms\n", "epoch: 7 train loss: 0.007290244 epoch time: 11141.353ms\n", "epoch: 8 train loss: 0.0072537386 epoch time: 11535.102ms\n", "epoch: 9 train loss: 0.007020033 epoch time: 11041.171ms\n", "epoch: 10 train loss: 0.0072951056 epoch time: 11033.113ms\n", " predict total time: 45.89080810546875 ms\n", " l2_error, U: 0.2574625213886651 , V: 1.0159654927310178 , P: 1.08665077365793\n", " l2_error, uu: 0.6712817201442959 , uv: 1.6285996210166078 , vv: 1.6174848943769466 , Total: 0.2994041993242163\n", "==================================================================================================\n", "epoch: 11 train loss: 0.006911595 epoch time: 11269.898ms\n", "epoch: 12 train loss: 0.0064922348 epoch time: 11014.546ms\n", "epoch: 13 train loss: 0.012375369 epoch time: 10856.192ms\n", "epoch: 14 train loss: 0.0063738413 epoch time: 11219.892ms\n", "epoch: 15 train loss: 0.006205684 epoch time: 11509.733ms\n", " predict total time: 1419.1265106201172 ms\n", " l2_error, U: 0.26029930447820726 , V: 1.0100483948680088 , P: 1.1317783698512909\n", " l2_error, uu: 0.6231199513484501 , uv: 1.097468251696328 , vv: 1.2687142671208649 , Total: 0.301384468926242\n", "==================================================================================================\n", "epoch: 16 train loss: 0.00825448 epoch time: 11118.031ms\n", "epoch: 17 train loss: 0.0061626835 epoch time: 11953.393ms\n", "epoch: 18 train loss: 0.0073482464 epoch time: 11729.854ms\n", "epoch: 19 train loss: 0.0059430953 epoch time: 11183.294ms\n", "epoch: 20 train loss: 0.006461049 epoch time: 11480.535ms\n", " predict total time: 328.2887935638428 ms\n", " l2_error, U: 0.2893996640103185 , V: 1.0164172238860398 , P: 1.118747335999008\n", " l2_error, uu: 0.6171527683696496 , uv: 1.1570214426333394 , vv: 1.5968321768424096 , Total: 0.3270872725014816\n", "==================================================================================================\n", "...\n", "epoch: 496 train loss: 0.001080659 epoch time: 11671.701ms\n", "epoch: 497 train loss: 0.0007907547 epoch time: 11653.532ms\n", "epoch: 498 train loss: 0.0015688213 epoch time: 11612.691ms\n", "epoch: 499 train loss: 0.00085494306 epoch time: 11429.596ms\n", "epoch: 500 train loss: 0.0026226037 epoch time: 11708.611ms\n", " predict total time: 43.506622314453125 ms\n", " l2_error, U: 0.16019161506598686 , V: 0.561610130067435 , P: 0.4730013943213571\n", " l2_error, uu: 1.0206032668202991 , uv: 0.812573326422638 , vv: 1.5239299913682682 , Total: 0.18547458639343734\n" ] } ], "source": [ "def forward_fn(pde_data, data, label):\n", "    loss = problem.get_loss(pde_data, data, label)\n", "    if use_ascend:\n", "        loss = loss_scaler.scale(loss)\n", "    return loss\n", "\n", "grad_fn = ops.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=False)\n", "\n", "@jit\n", "def train_step(pde_data, data, label):\n", "    loss, grads = grad_fn(pde_data, data, label)\n", "    if use_ascend:\n", "        loss = loss_scaler.unscale(loss)\n", "        is_finite = all_finite(grads)\n", "        if is_finite:\n", "            grads = loss_scaler.unscale(grads)\n", "            loss = ops.depend(loss, optimizer(grads))\n", "        loss_scaler.adjust(is_finite)\n", "    else:\n", "        loss = ops.depend(loss, optimizer(grads))\n", "    return loss\n", "\n", "epochs = optim_params[\"train_epochs\"]\n", "sink_process = data_sink(train_step, dataset, sink_size=1)\n", "train_data_size = dataset.get_dataset_size()\n", "\n", "for epoch in range(1, 1 + epochs):\n", "    # train\n", "    time_beg = time.time()\n", "    model.set_train(True)\n", "    for _ in range(train_data_size + 1):\n", "        step_train_loss = sink_process()\n", "    print(f\"epoch: {epoch} train loss: {step_train_loss} epoch time: {(time.time() - time_beg)*1000 :.3f}ms\")\n", "    model.set_train(False)\n", "    if epoch % summary_params[\"eval_interval_epochs\"] == 0:\n", "        # eval\n", "        calculate_l2_error(model, inputs, label, config)\n", "        predict(model=model, epochs=epoch, input_data=inputs, label=label, path=summary_params[\"visual_dir\"])\n", "    if epoch % summary_params[\"save_checkpoint_epochs\"] == 0:\n", "        ckpt_name = f\"rans_{epoch+1}.ckpt\"\n", "        mindspore.save_checkpoint(model, os.path.join(summary_params['ckpt_path'], ckpt_name))" ] },
{ "cell_type": "markdown", "metadata": {}, "source": [ "## Visualization of Training Results\n", "\n", "The following figures compare the predictions of the RANS-PINNs model with the ground truth:\n", "\n", "![prediction_result.png](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindflow/docs/source_zh_cn/physics_driven/images/prediction_result.png)\n", "\n", "They show the distributions of the streamwise velocity U and the vertical velocity V at different locations in the flow field; the upper plots are the predictions and the lower plots are the ground truth.\n", "\n", "The following figure shows streamwise velocity profiles from the RANS-PINNs model:\n", "\n", "![speed_contour.png](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/master/docs/mindflow/docs/source_zh_cn/physics_driven/images/speed_contour.png)\n", "\n", "The blue lines are the ground truth and the orange dashed lines are the predictions." ] } ], "metadata": { "kernelspec": { "display_name": "mind", "language": "python", "name": "python3" }, "language_info": { "name": "python", "version": "3.9.16" }, "orig_nbformat": 4 }, "nbformat": 4, "nbformat_minor": 2 }