Solving the 1-D Burgers' Equation with FNO
Overview
Computational fluid dynamics (CFD) is one of the key technologies of 21st-century fluid mechanics: it solves the governing equations of fluid dynamics numerically on a computer in order to analyze, predict, and control flows. Traditional approaches such as the finite element method (FEM) and the finite difference method (FDM) require a complex simulation pipeline (physical modeling, mesh generation, numerical discretization, iterative solving, etc.) and incur high computational cost, which makes them inefficient in practice. Leveraging AI to improve the efficiency of fluid simulation is therefore highly desirable.
In recent years, the rapid development of neural networks has provided a new paradigm for scientific computing. Classical neural networks map between finite-dimensional spaces and can therefore only learn solutions tied to a specific discretization. Unlike classical neural networks, the Fourier Neural Operator (FNO) is a deep learning architecture that learns mappings between infinite-dimensional function spaces. It learns the mapping from an arbitrary function parameter directly to the solution, can be used to solve an entire family of partial differential equations, and generalizes better. For more information, see Fourier Neural Operator for Parametric Partial Differential Equations.
This tutorial describes how to solve the 1-d Burgers' equation using the Fourier Neural Operator.
Burgers' Equation
The 1-d Burgers' equation is a nonlinear partial differential equation with a wide range of applications, including the modeling of one-dimensional viscous fluid flow. It takes the form

$$\partial_t u(x, t) + \partial_x \left(\frac{u^2(x, t)}{2}\right) = \nu \, \partial_{xx} u(x, t), \qquad x \in (0, 1),\ t \in (0, 1],$$

$$u(x, 0) = u_0(x), \qquad x \in (0, 1),$$

where \(u\) is the velocity field, \(u_0\) the initial condition, and \(\nu\) the viscosity coefficient.
Problem Description
This case uses the Fourier Neural Operator to learn the mapping from the initial state to the state at a later time, thereby solving the 1-d Burgers' equation:

$$u_0 \mapsto u(\cdot, 1).$$
Technology Path
The overall procedure for solving this problem with MindSpore Flow is as follows:
Create the dataset.
Construct the model.
Define the optimizer and loss function.
Train the model.
Fourier Neural Operator
The architecture of the Fourier Neural Operator is shown in the figure below. In the figure, \(w_0(x)\) denotes the initial vorticity. The Lifting Layer maps the input vector to a higher dimension, the resulting representation is fed into the Fourier Layers for nonlinear transformation of the frequency-domain information, and finally the Decoding Layer maps the transformed result to the final prediction \(w_1(x)\).
The Lifting Layer, the Fourier Layers, and the Decoding Layer together constitute the Fourier Neural Operator.
The structure of a Fourier Layer is shown in the figure below. In the figure, V denotes the input vector. In the upper branch, the vector undergoes a Fourier transform, then a linear transform R that filters out the high-frequency modes, followed by an inverse Fourier transform; the other branch applies a linear transform W. The sum of the two branches is passed through an activation function to produce the Fourier Layer's output vector.
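To make this structure concrete, below is a minimal NumPy sketch of one Fourier Layer (the weight shapes, einsum contractions, and the ReLU activation are illustrative assumptions, not the actual FNO1D implementation):

import numpy as np

def fourier_layer(v, r_weight, w_weight, modes):
    # v: (batch, channels, n) input; r_weight: (channels, channels, modes) complex
    # spectral weights R; w_weight: (channels, channels) pointwise weights W
    batch, channels, n = v.shape
    v_hat = np.fft.rfft(v, axis=-1)                       # Fourier transform
    out_hat = np.zeros((batch, channels, n // 2 + 1), dtype=complex)
    # Linear transform R on the lowest `modes` frequencies; higher modes are filtered out
    out_hat[..., :modes] = np.einsum("bim,iom->bom", v_hat[..., :modes], r_weight)
    spectral = np.fft.irfft(out_hat, n=n, axis=-1)        # inverse Fourier transform
    residual = np.einsum("bin,io->bon", v, w_weight)      # linear transform W
    return np.maximum(spectral + residual, 0.0)           # activation (ReLU here)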
[1]:
import os
import time
import numpy as np
import mindspore as ms
from mindspore.amp import DynamicLossScaler, auto_mixed_precision, all_finite
from mindspore import nn, Tensor, set_seed, ops, data_sink, jit, save_checkpoint
from mindspore import dtype as mstype
from mindflow import FNO1D, RelativeRMSELoss, load_yaml_config, get_warmup_cosine_annealing_lr
from mindflow.pde import UnsteadyFlowWithLoss
The src package below can be downloaded from applications/data_driven/burgers/fno1d/src.
[2]:
from src.dataset import create_training_dataset
set_seed(0)
np.random.seed(0)
ms.set_context(mode=ms.GRAPH_MODE, device_target="GPU", device_id=5)
use_ascend = ms.get_context(attr_key='device_target') == "Ascend"
Load the model, data, and optimizer parameters from the config file.
[3]:
config = load_yaml_config('fno1d.yaml')
data_params = config["data"]
model_params = config["model"]
optimizer_params = config["optimizer"]
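For reference, a minimal fno1d.yaml consistent with how the config is consumed in this notebook might look as follows (the keys are inferred from the code and printed output in this tutorial; the initial_lr value is a placeholder, so consult the file shipped with the case):

data:
  path: "./dataset"          # directory containing the train/ and test/ subsets
model:
  name: "FNO1D"
  in_channels: 1
  out_channels: 1
  resolution: 1024
  modes: 16
  width: 64
  depth: 4
optimizer:
  initial_lr: 0.001          # placeholder value
  train_epochs: 100
summary_dir: "./summary"
epochs: 100
eval_interval: 10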
Create the Dataset
Download the training and test datasets: data_driven/burgers/dataset.
The training and test datasets for this case are generated following the dataset setup of Zongyi Li et al. in Fourier Neural Operator for Parametric Partial Differential Equations. Specifically, with periodic boundary conditions, the initial condition \(u_0(x)\) is sampled from the distribution

$$u_0 \sim \mu, \qquad \mu = \mathcal{N}\left(0,\ 625\,(-\Delta + 25 I)^{-2}\right).$$
This case uses viscosity coefficient \(\nu = 0.1\) and solves the equation with a split-step method: the heat-equation part is solved exactly in Fourier space, and the nonlinear part is advanced with a forward Euler method. The training set contains 1000 samples and the test set contains 200 samples.
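For illustration, the split-step scheme can be sketched in NumPy as follows (the grid size, step count, and function name solve_burgers_split_step are assumptions; the actual dataset-generation script may differ):

import numpy as np

def solve_burgers_split_step(u0, nu=0.1, t_end=1.0, n_steps=1000):
    # u0: (n,) initial condition on a periodic grid over [0, 1)
    n = u0.shape[0]
    dt = t_end / n_steps
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=1.0 / n)   # angular wave numbers
    heat_factor = np.exp(-nu * k**2 * dt)            # exact heat propagator per step
    u = u0.astype(float).copy()
    for _ in range(n_steps):
        # Nonlinear part -(u^2/2)_x advanced with forward Euler (derivative via FFT)
        flux_x = np.fft.ifft(1j * k * np.fft.fft(0.5 * u**2)).real
        u = u - dt * flux_x
        # Heat-equation part solved exactly in Fourier space
        u = np.fft.ifft(heat_factor * np.fft.fft(u)).real
    return u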
[4]:
# create training dataset
train_dataset = create_training_dataset(data_params, shuffle=True)
# create test dataset
test_input, test_label = np.load(os.path.join(data_params["path"], "test/inputs.npy")), \
np.load(os.path.join(data_params["path"], "test/label.npy"))
test_input = Tensor(np.expand_dims(test_input, -2), mstype.float32)
test_label = Tensor(np.expand_dims(test_label, -2), mstype.float32)
Data preparation finished
input_path: (1000, 1024, 1)
label_path: (1000, 1024)
Construct the Model
The network consists of one Lifting layer, one Decoding layer, and a stack of Fourier Layers. The Lifting layer corresponds to FNO1D.fc0 in the sample code and maps the input data \(x\) to a higher dimension; the stacked Fourier Layers correspond to FNO1D.fno_seq, where this case uses the discrete Fourier transform to convert between the time domain and the frequency domain; the Decoding layer corresponds to FNO1D.fc1 and FNO1D.fc2 and produces the final prediction.
The model is initialized based on the network structure above; the model parameters can be modified in the configuration file.
[5]:
model = FNO1D(in_channels=model_params["in_channels"],
              out_channels=model_params["out_channels"],
              resolution=model_params["resolution"],
              modes=model_params["modes"],
              channels=model_params["width"],
              depths=model_params["depth"])

model_params_list = []
for k, v in model_params.items():
    model_params_list.append(f"{k}:{v}")
model_name = "_".join(model_params_list)
print(model_name)
name:FNO1D_in_channels:1_out_channels:1_resolution:1024_modes:16_width:64_depth:4
Optimizer and Loss Function
The relative root-mean-square error is used as the training loss:

$$\mathrm{loss} = \frac{\left\| u_{\mathrm{pred}} - u_{\mathrm{label}} \right\|_2}{\left\| u_{\mathrm{label}} \right\|_2}.$$
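For reference, this relative L2 error can be sketched in NumPy as follows (an illustrative sketch only; the exact per-sample reduction performed by RelativeRMSELoss may differ):

def relative_rmse(pred, label):
    # Flatten everything except the batch axis, then average per-sample relative L2 errors
    pred = pred.reshape(pred.shape[0], -1)
    label = label.reshape(label.shape[0], -1)
    return np.mean(np.linalg.norm(pred - label, axis=1) / np.linalg.norm(label, axis=1))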
[6]:
steps_per_epoch = train_dataset.get_dataset_size()
lr = get_warmup_cosine_annealing_lr(lr_init=optimizer_params["initial_lr"],
                                    last_epoch=optimizer_params["train_epochs"],
                                    steps_per_epoch=steps_per_epoch,
                                    warmup_epochs=1)
optimizer = nn.Adam(model.trainable_params(), learning_rate=Tensor(lr))

if use_ascend:
    loss_scaler = DynamicLossScaler(1024, 2, 100)
    auto_mixed_precision(model, 'O1')
else:
    loss_scaler = None
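get_warmup_cosine_annealing_lr returns one learning rate value per training step. Conceptually, the schedule looks like the sketch below (the exact warmup shape and annealing floor used by MindFlow are assumptions here):

import math

def warmup_cosine_lr(step, lr_init, total_steps, warmup_steps):
    # Linear warmup from 0 to lr_init, then half-cosine decay towards 0
    if step < warmup_steps:
        return lr_init * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return 0.5 * lr_init * (1.0 + math.cos(math.pi * progress))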
Model Training
With MindSpore >= 2.0.0, neural networks can be trained in a functional programming style. For unsteady problems, MindSpore Flow provides the training interface UnsteadyFlowWithLoss, used for model training and evaluation.
[7]:
problem = UnsteadyFlowWithLoss(model, loss_fn=RelativeRMSELoss(), data_format="NHWTC")

summary_dir = os.path.join(config["summary_dir"], model_name)
print(summary_dir)

def forward_fn(data, label):
    loss = problem.get_loss(data, label)
    return loss

grad_fn = ms.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=False)

@jit
def train_step(data, label):
    loss, grads = grad_fn(data, label)
    if use_ascend:
        # With mixed precision on Ascend, unscale the loss and gradients before the update
        loss = loss_scaler.unscale(loss)
        if all_finite(grads):
            grads = loss_scaler.unscale(grads)
    loss = ops.depend(loss, optimizer(grads))
    return loss

sink_process = data_sink(train_step, train_dataset, 1)

for epoch in range(1, config["epochs"] + 1):
    model.set_train()
    local_time_beg = time.time()
    for _ in range(steps_per_epoch):
        cur_loss = sink_process()
    print("epoch: {}, time elapsed: {}ms, loss: {}".format(epoch, (time.time() - local_time_beg) * 1000, cur_loss.asnumpy()))

    # Evaluate on the test set and save a checkpoint at the configured interval
    if epoch % config['eval_interval'] == 0:
        model.set_train(False)
        print("================================Start Evaluation================================")
        rms_error = problem.get_loss(test_input, test_label) / test_input.shape[0]
        print("mean rms_error:", rms_error)
        print("=================================End Evaluation=================================")
        ckpt_dir = os.path.join(summary_dir, "ckpt")
        if not os.path.exists(ckpt_dir):
            os.makedirs(ckpt_dir)
        save_checkpoint(model, os.path.join(ckpt_dir, model_params["name"] + '_epoch' + str(epoch)))
./summary/name:FNO1D_in_channels:1_out_channels:1_resolution:1024_modes:16_width:64_depth:4
epoch: 1, time elapsed: 21747.305870056152ms, loss: 2.167046070098877
epoch: 2, time elapsed: 5525.397539138794ms, loss: 0.5935954451560974
epoch: 3, time elapsed: 5459.984540939331ms, loss: 0.7349425554275513
epoch: 4, time elapsed: 4948.82869720459ms, loss: 0.6338694095611572
epoch: 5, time elapsed: 5571.3865756988525ms, loss: 0.3174982964992523
epoch: 6, time elapsed: 5712.041616439819ms, loss: 0.3099440038204193
epoch: 7, time elapsed: 5218.639135360718ms, loss: 0.3117891848087311
epoch: 8, time elapsed: 4819.460153579712ms, loss: 0.1810857653617859
epoch: 9, time elapsed: 4968.810081481934ms, loss: 0.1386510729789734
epoch: 10, time elapsed: 4849.36785697937ms, loss: 0.2102256715297699
================================Start Evaluation================================
mean rms_error: 0.027940063
=================================End Evaluation=================================
...
epoch: 91, time elapsed: 4398.104429244995ms, loss: 0.019643772393465042
epoch: 92, time elapsed: 5479.56109046936ms, loss: 0.0641067773103714
epoch: 93, time elapsed: 5549.5476722717285ms, loss: 0.02199840545654297
epoch: 94, time elapsed: 6238.730907440186ms, loss: 0.024467874318361282
epoch: 95, time elapsed: 5434.457778930664ms, loss: 0.025712188333272934
epoch: 96, time elapsed: 6481.106281280518ms, loss: 0.02247200347483158
epoch: 97, time elapsed: 6303.435325622559ms, loss: 0.026637140661478043
epoch: 98, time elapsed: 5162.56856918335ms, loss: 0.030040305107831955
epoch: 99, time elapsed: 5364.72225189209ms, loss: 0.02589748054742813
epoch: 100, time elapsed: 5902.378797531128ms, loss: 0.028599221259355545
================================Start Evaluation================================
mean rms_error: 0.0037017763
=================================End Evaluation=================================
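With training finished, the model can be applied directly to the held-out inputs. A minimal inference sketch (the relative L2 metric below is illustrative, not the evaluation routine used above):

model.set_train(False)
pred = model(test_input).asnumpy()
label = test_label.asnumpy()
print("prediction shape:", pred.shape)
# Relative L2 error of the first test sample
err = np.linalg.norm(pred[0] - label[0]) / np.linalg.norm(label[0])
print("relative L2 error of sample 0:", err)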