{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# Raynold-averaged Navier-Stokes\n", "\n", "[](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.2/mindflow/en/physics_driven/mindspore_periodic_hill.ipynb) [](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.2/mindflow/en/physics_driven/mindspore_periodic_hill.py) [](https://gitee.com/mindspore/docs/blob/r2.2/docs/mindflow/docs/source_en/physics_driven/periodic_hill.ipynb)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Overview\n", "\n", "The Raynold-averaged Navier-Stokes equation is a classic numerical simulation case in the fields of fluid mechanics and meteorology. It is used to study the flow behavior of air or fluid over a periodic hilly terrain. This problem aims to explore the influence of hilly terrain on atmospheric or fluid motion, leading to a deeper understanding of meteorological phenomena, terrain effects, and fluid characteristics over complex terrain. This project utilizes the Reynolds-averaged model to simulate turbulent flow over a two-dimensional periodic hilly terrain.\n", "\n", "### Reynolds-Averaged Model\n", "\n", "The Reynolds-Averaged Navier-Stokes equations (RANS) are a commonly used numerical simulation approach in fluid mechanics to study the averaged behavior of fluids under different Reynolds numbers. Named after the British scientist Osborne Reynolds, this model involves time-averaging of flow field variables and provides an engineering-oriented approach to deal with turbulent flows. The Reynolds-averaged model is based on Reynolds decomposition, which separates flow field variables into mean and fluctuating components. By time-averaging the Reynolds equations, the unsteady fluctuating terms are eliminated, resulting in time-averaged equations describing the macroscopic flow. Taking the two-dimensional Reynolds-averaged momentum and continuity equations as examples:\n", "\n", "#### Reynolds-Averaged Momentum Equation\n", "\n", "$$\\rho \\bar{u}_j \\frac{\\partial \\bar{u}_i}{\\partial x_j}=\\rho \\bar{f}_i+\\frac{\\partial}{\\partial x_j}\\left[-\\bar{p} \\delta_{i j}+\\mu\\left(\\frac{\\partial \\bar{u}_i}{\\partial x_j}+\\frac{\\partial \\bar{u}_j}{\\partial x_i}\\right)-\\rho \\overline{u_i^{\\prime} u_j^{\\prime}}\\right] .$$\n", "\n", "#### Continuity Equation\n", "\n", "$$\\frac{\\partial \\overline{u}}{\\partial x} + \\frac{\\partial \\overline{v}}{\\partial y} = 0$$\n", "\n", "Here, $\\overline{u}$ and $\\overline{v}$ represent the time-averaged velocity components in the x and y directions, $\\overline{p}$ is the time-averaged pressure, $\\rho$ is fluid density, $\\nu$ is the kinematic viscosity, and $u$ and $v$ are the velocity components in the x and y directions.\n", "\n", "### Model Solution Introduction\n", "\n", "The core idea of the RANS-PINNs (Reynolds-Averaged Navier-Stokes - Physics-Informed Neural Networks) method is to combine physical equations with neural networks to achieve simulation results that possess both the accuracy of traditional RANS models and the flexibility of neural networks. In this approach, the Reynolds-averaged equations for mean flow, along with an isotropic eddy viscosity model for turbulence, are combined to form an accurate baseline solution. 
"\n", "The structure of the RANS-PINNs model is depicted below:\n", "\n", "<figure class=\"half\">\n", " <img src=\"./images/rans_pinns_structure.png\" title=\"RANS-PINNs structure\" width=\"500\"/>\n", "</figure>\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Preparation\n", "\n", "Import the required libraries for training. The [src](https://gitee.com/mindspore/mindscience/tree/r0.6/MindFlow/applications/physics_driven/navier_stokes/periodic_hill/src) folder includes functions for dataset processing, network models, and loss calculation.\n", "\n", "Training is conducted in the dynamic-graph (PyNative) mode of the MindSpore framework, as set by context.set_context in the next cell, and it takes place on the GPU (by default) or Ascend (single card)." ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import os\n", "import time\n", "\n", "import numpy as np\n", "\n", "import mindspore\n", "from mindspore import context, nn, ops, jit, set_seed, load_checkpoint, load_param_into_net, data_sink\n", "from mindspore.amp import all_finite\n", "from mindflow.cell import FCSequential\n", "from mindflow.utils import load_yaml_config\n", "\n", "from src import create_train_dataset, create_test_dataset, calculate_l2_error, NavierStokesRANS\n", "from eval import predict\n", "\n", "set_seed(0)\n", "np.random.seed(0)\n", "\n", "context.set_context(mode=context.PYNATIVE_MODE,\n", " device_target=\"GPU\")\n", "use_ascend = context.get_context(attr_key='device_target') == \"Ascend\"" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Load Parameters\n", "\n", "Import the configuration parameters for the dataset, model, and optimizer from the [rans.yaml](https://gitee.com/mindspore/mindscience/blob/r0.6/MindFlow/applications/physics_driven/navier_stokes/periodic_hill/configs/rans.yaml) file." ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [], "source": [ "# load configurations\n", "config = load_yaml_config('./configs/rans.yaml')\n", "data_params = config[\"data\"]\n", "model_params = config[\"model\"]\n", "optim_params = config[\"optimizer\"]\n", "summary_params = config[\"summary\"]" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Dataset Construction\n", "\n", "Source: Numerical simulation flow field data of a two-dimensional periodic hill flow, provided by Associate Professor Yu Jian's team at the School of Aeronautic Science and Engineering, Beihang University.\n", "\n", "Data Description:\n", "The data is in numpy's npy format with dimensions [300, 700, 10]. The first two dimensions are the length and width of the flow field, and the last dimension holds the 10 variables (x, y, u, v, p, uu, uv, vv, rho, nu). Among these, x, y, u, v, p are the x-coordinate, y-coordinate, x-direction velocity, y-direction velocity, and pressure of the flow field, respectively; uu, uv, vv are Reynolds-averaged statistical quantities (velocity fluctuation correlations), while rho is the fluid density and nu is the kinematic viscosity.\n",
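"\n",
"Once the dataset (download link below) has been saved locally, the raw file can be inspected with a minimal sketch like the following; the path ./dataset/periodic_hill.npy is an assumption, so point it at the data_path configured in rans.yaml:\n",
"\n",
"```python\n",
"import numpy as np\n",
"\n",
"# load the raw flow-field array described above\n",
"data = np.load('./dataset/periodic_hill.npy')  # assumed local path\n",
"print(data.shape)  # per the description above this should be (300, 700, 10)\n",
"\n",
"# flatten to one row per grid point; the last axis stores\n",
"# (x, y, u, v, p, uu, uv, vv, rho, nu)\n",
"fields = data.reshape(-1, data.shape[-1])\n",
"x, y, u = fields[:, 0], fields[:, 1], fields[:, 2]\n",
"print(x.min(), x.max(), float(u.mean()))\n",
"```\n",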
"\n", "Dataset Download Link:\n", "[periodic_hill.npy](https://download.mindspore.cn/mindscience/mindflow/dataset/periodic_hill_2d/)" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "# create training dataset\n", "dataset = create_train_dataset(data_params[\"data_path\"], data_params[\"batch_size\"])\n", "# create test dataset\n", "inputs, label = create_test_dataset(data_params[\"data_path\"])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model Initialization\n", "\n", "Initialize the RANS-PINNs model based on the configuration in [rans.yaml](https://gitee.com/mindspore/mindscience/blob/r0.6/MindFlow/applications/physics_driven/navier_stokes/periodic_hill/configs/rans.yaml). Use the Mean Squared Error (MSE) loss function and the Adam optimizer." ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "momentum_x: u(x, y)*Derivative(u(x, y), x) + v(x, y)*Derivative(u(x, y), y) + 1.0*Derivative(p(x, y), x) - 0.000178571426658891*Derivative(u(x, y), (x, 2)) - 0.000178571426658891*Derivative(u(x, y), (y, 2)) + Derivative(uu(x, y), x) + Derivative(uv(x, y), y)\n", " Item numbers of current derivative formula nodes: 7\n", "momentum_y: u(x, y)*Derivative(v(x, y), x) + v(x, y)*Derivative(v(x, y), y) + 1.0*Derivative(p(x, y), y) + Derivative(uv(x, y), x) - 0.000178571426658891*Derivative(v(x, y), (x, 2)) - 0.000178571426658891*Derivative(v(x, y), (y, 2)) + Derivative(vv(x, y), y)\n", " Item numbers of current derivative formula nodes: 7\n", "continuty: Derivative(u(x, y), x) + Derivative(v(x, y), y)\n", " Item numbers of current derivative formula nodes: 2\n", "bc_u: u(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_v: v(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_p: p(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_uu: uu(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_uv: uv(x, y)\n", " Item numbers of current derivative formula nodes: 1\n", "bc_vv: vv(x, y)\n", " Item numbers of current derivative formula nodes: 1\n" ] } ], "source": [ "model = FCSequential(in_channels=model_params[\"in_channels\"],\n", " out_channels=model_params[\"out_channels\"],\n", " layers=model_params[\"layers\"],\n", " neurons=model_params[\"neurons\"],\n", " residual=model_params[\"residual\"],\n", " act='tanh')\n", "\n", "if summary_params[\"load_ckpt\"]:\n", " param_dict = load_checkpoint(summary_params[\"load_ckpt_path\"])\n", " load_param_into_net(model, param_dict)\n", "if not os.path.exists(os.path.abspath(summary_params['ckpt_path'])):\n", " os.makedirs(os.path.abspath(summary_params['ckpt_path']))\n", "\n", "params = model.trainable_params()\n", "optimizer = nn.Adam(params, optim_params[\"initial_lr\"], weight_decay=optim_params[\"weight_decay\"])\n", "problem = NavierStokesRANS(model)\n", "\n", "if use_ascend:\n", " from mindspore.amp import DynamicLossScaler, auto_mixed_precision\n", " loss_scaler = DynamicLossScaler(1024, 2, 100)\n", " auto_mixed_precision(model, 'O3')" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Model Training\n", "\n", "For versions of MindSpore >= 2.0.0, you can use the functional programming paradigm to train neural networks."
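, "\n",
"As a minimal illustration of this pattern, here is a toy regression example (illustrative names only, separate from the RANS problem); the full training step for this case follows in the next cell:\n",
"\n",
"```python\n",
"import numpy as np\n",
"import mindspore as ms\n",
"from mindspore import nn, ops\n",
"\n",
"# toy network and data, only to show forward_fn -> value_and_grad -> train_step\n",
"net = nn.Dense(2, 1)\n",
"opt = nn.Adam(net.trainable_params(), learning_rate=1e-3)\n",
"inputs = ms.Tensor(np.random.rand(16, 2).astype(np.float32))\n",
"labels = ms.Tensor(np.random.rand(16, 1).astype(np.float32))\n",
"\n",
"def forward_fn(x, y):\n",
"    # the forward function returns the loss to be differentiated\n",
"    return ((net(x) - y) ** 2).mean()\n",
"\n",
"# differentiate the loss with respect to the trainable parameters\n",
"grad_fn = ops.value_and_grad(forward_fn, None, opt.parameters)\n",
"\n",
"def train_step(x, y):\n",
"    loss, grads = grad_fn(x, y)\n",
"    opt(grads)  # apply the gradients\n",
"    return loss\n",
"\n",
"print(train_step(inputs, labels))\n",
"```"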
] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "epoch: 1 train loss: 0.033210676 epoch time: 21279.999ms\n", "epoch: 2 train loss: 0.019967956 epoch time: 11001.454ms\n", "epoch: 3 train loss: 0.015202466 epoch time: 11049.534ms\n", "epoch: 4 train loss: 0.009431531 epoch time: 10979.578ms\n", "epoch: 5 train loss: 0.009564591 epoch time: 11857.952ms\n", " predict total time: 361.42492294311523 ms\n", " l2_error, U: 0.3499122378307982 , V: 1.089610520680924 , P: 1.0590148771220198\n", " l2_error, uu: 0.6619816139038208 , uv: 0.9806737880811025 , vv: 1.223253942721496 , Total: 0.3788639206858165\n", "==================================================================================================\n", "epoch: 6 train loss: 0.0080219805 epoch time: 10980.343ms\n", "epoch: 7 train loss: 0.007290244 epoch time: 11141.353ms\n", "epoch: 8 train loss: 0.0072537386 epoch time: 11535.102ms\n", "epoch: 9 train loss: 0.007020033 epoch time: 11041.171ms\n", "epoch: 10 train loss: 0.0072951056 epoch time: 11033.113ms\n", " predict total time: 45.89080810546875 ms\n", " l2_error, U: 0.2574625213886651 , V: 1.0159654927310178 , P: 1.08665077365793\n", " l2_error, uu: 0.6712817201442959 , uv: 1.6285996210166078 , vv: 1.6174848943769466 , Total: 0.2994041993242163\n", "==================================================================================================\n", "epoch: 11 train loss: 0.006911595 epoch time: 11269.898ms\n", "epoch: 12 train loss: 0.0064922348 epoch time: 11014.546ms\n", "epoch: 13 train loss: 0.012375369 epoch time: 10856.192ms\n", "epoch: 14 train loss: 0.0063738413 epoch time: 11219.892ms\n", "epoch: 15 train loss: 0.006205684 epoch time: 11509.733ms\n", " predict total time: 1419.1265106201172 ms\n", " l2_error, U: 0.26029930447820726 , V: 1.0100483948680088 , P: 1.1317783698512909\n", " l2_error, uu: 0.6231199513484501 , uv: 1.097468251696328 , vv: 1.2687142671208649 , Total: 0.301384468926242\n", "==================================================================================================\n", "epoch: 16 train loss: 0.00825448 epoch time: 11118.031ms\n", "epoch: 17 train loss: 0.0061626835 epoch time: 11953.393ms\n", "epoch: 18 train loss: 0.0073482464 epoch time: 11729.854ms\n", "epoch: 19 train loss: 0.0059430953 epoch time: 11183.294ms\n", "epoch: 20 train loss: 0.006461049 epoch time: 11480.535ms\n", " predict total time: 328.2887935638428 ms\n", " l2_error, U: 0.2893996640103185 , V: 1.0164172238860398 , P: 1.118747335999008\n", " l2_error, uu: 0.6171527683696496 , uv: 1.1570214426333394 , vv: 1.5968321768424096 , Total: 0.3270872725014816\n", "==================================================================================================\n", "...\n", "epoch: 496 train loss: 0.001080659 epoch time: 11671.701ms\n", "epoch: 497 train loss: 0.0007907547 epoch time: 11653.532ms\n", "epoch: 498 train loss: 0.0015688213 epoch time: 11612.691ms\n", "epoch: 499 train loss: 0.00085494306 epoch time: 11429.596ms\n", "epoch: 500 train loss: 0.0026226037 epoch time: 11708.611ms\n", " predict total time: 43.506622314453125 ms\n", " l2_error, U: 0.16019161506598686 , V: 0.561610130067435 , P: 0.4730013943213571\n", " l2_error, uu: 1.0206032668202991 , uv: 0.812573326422638 , vv: 1.5239299913682682 , Total: 0.18547458639343734\n" ] } ], "source": [ "def forward_fn(pde_data, data, label):\n", " loss = problem.get_loss(pde_data, data, label)\n", " if use_ascend:\n", " loss = 
loss_scaler.scale(loss)\n", " return loss\n", "\n", "grad_fn = ops.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=False)\n", "\n", "@jit\n", "def train_step(pde_data, data, label):\n", " loss, grads = grad_fn(pde_data, data, label)\n", " if use_ascend:\n", " loss = loss_scaler.unscale(loss)\n", " is_finite = all_finite(grads)\n", " if is_finite:\n", " grads = loss_scaler.unscale(grads)\n", " loss = ops.depend(loss, optimizer(grads))\n", " loss_scaler.adjust(is_finite)\n", " else:\n", " loss = ops.depend(loss, optimizer(grads))\n", " return loss\n", "\n", "epochs = optim_params[\"train_epochs\"]\n", "sink_process = data_sink(train_step, dataset, sink_size=1)\n", "train_data_size = dataset.get_dataset_size()\n", "\n", "for epoch in range(1, 1 + epochs):\n", " # train\n", " time_beg = time.time()\n", " model.set_train(True)\n", " for _ in range(train_data_size + 1):\n", " step_train_loss = sink_process()\n", " print(f\"epoch: {epoch} train loss: {step_train_loss} epoch time: {(time.time() - time_beg)*1000 :.3f}ms\")\n", " model.set_train(False)\n", " if epoch % summary_params[\"eval_interval_epochs\"] == 0:\n", " # eval\n", " calculate_l2_error(model, inputs, label, config)\n", " predict(model=model, epochs=epoch, input_data=inputs, label=label, path=summary_params[\"visual_dir\"])\n", " if epoch % summary_params[\"save_checkpoint_epochs\"] == 0:\n", " ckpt_name = \"rans_{}.ckpt\".format(epoch + 1)\n", " mindspore.save_checkpoint(model, os.path.join(summary_params['ckpt_path'], ckpt_name))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Visualization of Prediction Results\n", "\n", "Below is a comparison between the predictions of the RANS-PINNs model and the ground truth:\n", "\n", "<figure class=\"half\">\n", " <img src=\"./images/prediction_result.png\" title=\"prediction result\" width=\"500\"/>\n", "</figure>\n", "\n", "The images show the distributions of the horizontal (x-direction) and vertical (y-direction) velocity components at different positions in the flow field; the lower image is the ground truth, while the upper image shows the predicted values.\n", "\n", "The following shows velocity profiles obtained with the RANS-PINNs model:\n", "\n", "<figure class=\"half\">\n", " <img src=\"./images/speed_contour.png\" title=\"prediction_result\" width=\"500\"/>\n", "</figure>\n", "\n", "where the blue line is the true value and the orange dashed line is the predicted value." ] } ], "metadata": { "kernelspec": { "display_name": "mind", "language": "python", "name": "python3" }, "language_info": { "name": "python", "version": "3.9.16" }, "orig_nbformat": 4 }, "nbformat": 4, "nbformat_minor": 2 }