{ "cells": [ { "cell_type": "markdown", "id": "4f74759f", "metadata": {}, "source": [ "# 调试调优\n", "\n", "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0.0-alpha/resource/_static/logo_notebook.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.0.0-alpha/zh_cn/migration_guide/mindspore_debug_and_tune.ipynb)\n", "[![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0.0-alpha/resource/_static/logo_download_code.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.0.0-alpha/zh_cn/migration_guide/mindspore_debug_and_tune.py)\n", "[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0.0-alpha/resource/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/r2.0.0-alpha/docs/mindspore/source_zh_cn/migration_guide/debug_and_tune.ipynb)\n", "\n", "## 功能调试\n", "\n", "在网络的迁移过程,建议优先使用PYNATIVE模式进行调试,在PYNATIVE模式下可以进行debug,日志打印也比较友好。在调试ok后转成图模式运行,图模式在执行性能上会更友好,也可以找到一些在编写网络中的问题,比如使用了三方的算子导致梯度截断。\n", "详情请参考 [功能调试](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.0.0-alpha/debug/function_debug.html)。\n", "\n", "## 精度调试\n", "\n", "精度调试的过程基本可以分为以下过程:\n", "\n", "### 1.检查参数\n", "\n", "这部分包含检查所有参数和可训练参数的数量,检查所有参数的shape。\n", "\n", "#### MindSpore获取参数方法\n", "\n", "MindSpore可训练的参数和不可训练的参数都用`Parameter`。" ] }, { "cell_type": "code", "execution_count": 1, "id": "add2e01d", "metadata": { "ExecuteTime": { "end_time": "2022-09-14T08:27:19.655677Z", "start_time": "2022-09-14T08:27:16.422393Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "fc.weight (1, 1)\n", "fc.bias (1,)\n", "all parameter numbers: 2\n", "fc.weight (1, 1)\n", "fc.bias (1,)\n", "trainable parameter numbers: 2\n" ] } ], "source": [ "from mindspore import nn\n", "\n", "class msNet(nn.Cell):\n", " def __init__(self):\n", " super(msNet, self).__init__()\n", " self.fc = nn.Dense(1, 1, weight_init='normal')\n", " def construct(self, x):\n", " output = self.fc(x)\n", " return output\n", "\n", "msnet = msNet()\n", "# 获取所有参数\n", "all_parameter = []\n", "for item in msnet.get_parameters():\n", " all_parameter.append(item)\n", " print(item.name, item.data.shape)\n", "print(f\"all parameter numbers: {len(all_parameter)}\")\n", "\n", "# 获取可训练的参数\n", "trainable_params = msnet.trainable_params()\n", "for item in trainable_params:\n", " print(item.name, item.data.shape)\n", "print(f\"trainable parameter numbers: {len(trainable_params)}\")" ] }, { "cell_type": "markdown", "id": "7228042f", "metadata": {}, "source": [ "#### PyTorch获取参数方法\n", "\n", "PyTorch可训练的参数用`Parameter`,不可训练的参数`Parameter`的`requires_grad=False`或使用`buffer`。" ] }, { "cell_type": "code", "execution_count": 2, "id": "7c256f0e", "metadata": { "ExecuteTime": { "end_time": "2022-09-14T08:27:20.734044Z", "start_time": "2022-09-14T08:27:19.659843Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "fc.weight torch.Size([1, 1])\n", "fc.bias torch.Size([1])\n", "all parameter numbers: 2\n", "trainable parameter numbers: 2\n" ] } ], "source": [ "from torch import nn\n", "\n", "class ptNet(nn.Module):\n", " def __init__(self):\n", " super(ptNet, self).__init__()\n", " self.fc = nn.Linear(1, 1)\n", " def construct(self, x):\n", " output = self.fc(x)\n", " return output\n", "\n", "\n", "ptnet = ptNet()\n", "all_parameter = []\n", "trainable_params = []\n", "# 获取网络里的参数\n", "for name, item in ptnet.named_parameters():\n", " if item.requires_grad:\n", " trainable_params.append(item)\n", " 
 { "cell_type": "markdown", "id": "5d516d59", "metadata": {}, "source": [
  "Apart from BatchNorm, whose parameter names differ noticeably, MindSpore and PyTorch parameters correspond almost one to one. Note that MindSpore has no counterpart of `num_batches_tracked`; in practice the optimizer's `global_step` can be used instead.\n",
  "\n",
  "| MindSpore | PyTorch |\n",
  "| --------- | --------|\n",
  "| gamma | weight |\n",
  "| beta | bias |\n",
  "| moving_mean | running_mean |\n",
  "| moving_variance | running_var |\n",
  "| None | num_batches_tracked |\n",
  "\n",
  "### 2. Model Validation\n",
  "\n",
  "Since the implementation of the model algorithm does not depend on the framework, parameters trained elsewhere can first be converted into a MindSpore [checkpoint](https://www.mindspore.cn/tutorials/zh-CN/r2.0.0-alpha/beginner/save_load.html) file and loaded into the network for inference verification.\n",
  "\n",
  "For the complete model validation workflow, see [ResNet network migration](https://www.mindspore.cn/docs/zh-CN/r2.0.0-alpha/migration_guide/sample_code.html#%E6%A8%A1%E5%9E%8B%E9%AA%8C%E8%AF%81)."
 ] },
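 { "cell_type": "markdown", "id": "d6f1a2b9", "metadata": {}, "source": [
  "Below is a minimal sketch of such a conversion, assuming a PyTorch weight file named `resnet.pth` that stores a `state_dict`, a name-mapping dict `param_name_map` filled in according to the table above, and a target MindSpore network `net`; all three names are placeholders for this sketch rather than artifacts provided by this guide.\n",
  "\n",
  "```python\n",
  "import torch\n",
  "import mindspore as ms\n",
  "\n",
  "# Hypothetical inputs: a PyTorch weight file and a dict that maps PyTorch\n",
  "# parameter names to MindSpore names (e.g. BatchNorm weight -> gamma).\n",
  "pt_state = torch.load('resnet.pth', map_location='cpu')\n",
  "param_name_map = {}  # fill in according to the table above\n",
  "\n",
  "ms_params = []\n",
  "for name, value in pt_state.items():\n",
  "    if 'num_batches_tracked' in name:\n",
  "        continue  # no MindSpore counterpart\n",
  "    ms_params.append({'name': param_name_map.get(name, name), 'data': ms.Tensor(value.numpy())})\n",
  "ms.save_checkpoint(ms_params, 'resnet.ckpt')\n",
  "\n",
  "# Load the converted checkpoint into the MindSpore network and run inference to verify it.\n",
  "param_dict = ms.load_checkpoint('resnet.ckpt')\n",
  "ms.load_param_into_net(net, param_dict)\n",
  "```"
 ] },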
 { "cell_type": "markdown", "id": "b07e6f42", "metadata": {}, "source": [
  "### 3. Inference Validation\n",
  "\n",
  "Once the model structures are confirmed to be exactly the same, it is best to run one more round of inference validation. Besides the model, the full inference pipeline also involves the dataset and the metrics; when inference results disagree, change one variable at a time to rule out problems step by step.\n",
  "\n",
  "For the complete inference validation workflow, see [ResNet network migration](https://www.mindspore.cn/docs/zh-CN/r2.0.0-alpha/migration_guide/sample_code.html#%E6%8E%A8%E7%90%86%E6%B5%81%E7%A8%8B).\n",
  "\n",
  "### 4. Training Accuracy\n",
  "\n",
  "After inference validation passes, we can be reasonably confident that the base model, the data processing and the metrics computation are correct. If the training accuracy is still wrong at this point, how do we track the problem down?\n",
  "\n",
  "- Add loss scale. On Ascend, operators such as Conv, Sort and TopK only support float16, and MatMul should also run in float16 for performance reasons, so loss scale is recommended as a standard part of network training.\n",
  "\n",
  "Operators that only support float16 on Ascend:\n",
  "\n",
  "| Operator category | Operators |\n",
  "| ------ | ------ |\n",
  "| Pooling | AdaptiveMaxPool2D, AvgPool3D, AvgPool, MaxPool, MaxPoolWithArgmax, Pooling |\n",
  "| Recurrent structures | LSTM, DynamicRNN, GRUV2 |\n",
  "| Convolution | Conv2D, Conv2DTranspose, Conv3D, Conv3DTranspose, DepthwiseConv2dNative |\n",
  "| Matrix multiplication (float32 is too slow here, so cast to float16) | MatMul, BatchMatMul |\n",
  "| Sorting | Sort, TopK |\n",
  "| Others | BoundingBoxEncode, ExtractImagePatches, ExtractVolumePatches, FusedDbnDw, IOU, NewIm2Col, NMSWithMask |\n"
 ] },
 { "cell_type": "code", "execution_count": 3, "id": "8e986220", "metadata": { "ExecuteTime": { "end_time": "2022-09-14T08:27:20.756412Z", "start_time": "2022-09-14T08:27:20.738829Z" } }, "outputs": [], "source": [
  "import mindspore as ms\n",
  "from mindspore import nn\n",
  "from mindspore.train import Model\n",
  "# Model\n",
  "loss_scale_manager = ms.FixedLossScaleManager(drop_overflow_update=False)  # static loss scale\n",
  "# loss_scale_manager = ms.DynamicLossScaleManager()  # dynamic loss scale\n",
  "\n",
  "# 1. The common workflow\n",
  "loss = nn.MSELoss()\n",
  "opt = nn.Adam(params=msnet.trainable_params(), learning_rate=0.01)\n",
  "model = Model(network=msnet, loss_fn=loss, optimizer=opt, loss_scale_manager=loss_scale_manager)\n",
  "\n",
  "# 2. Wrap the forward network and the loss function yourself\n",
  "msnet.to_float(ms.float16)\n",
  "loss.to_float(ms.float32)\n",
  "net_with_loss = nn.WithLossCell(msnet, loss)\n",
  "# With Model's mixed precision it is best to pass loss_fn; otherwise the loss part\n",
  "# is computed in float16, which overflows easily\n",
  "model = Model(network=net_with_loss, optimizer=opt)\n",
  "\n",
  "# 3. Wrap the training procedure yourself\n",
  "scale_sense = nn.FixedLossScaleUpdateCell(1)  # (or config.loss_scale) static loss scale\n",
  "# scale_sense = nn.DynamicLossScaleUpdateCell(loss_scale_value=config.loss_scale,\n",
  "#                                             scale_factor=2, scale_window=1000)  # dynamic loss scale\n",
  "train_net = nn.TrainOneStepWithLossScaleCell(net_with_loss, optimizer=opt, scale_sense=scale_sense)\n",
  "model = Model(network=train_net)"
 ] },
 { "cell_type": "markdown", "id": "ca710879", "metadata": {}, "source": [
  "- Check for overflow. When loss scale is added, overflow detection comes with it by default, so the overflow status can be monitored. If overflow keeps occurring, it is recommended to first investigate why, using the MindInsight [debugger](https://www.mindspore.cn/mindinsight/docs/zh-CN/r2.0.0-alpha/debugger.html) or [dump data](https://mindspore.cn/tutorials/experts/zh-CN/r2.0.0-alpha/debug/dump.html)."
 ] },
 { "cell_type": "code", "execution_count": 4, "id": "ce64dad6", "metadata": { "ExecuteTime": { "end_time": "2022-09-14T08:27:22.665319Z", "start_time": "2022-09-14T08:27:20.758474Z" } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [
  "step: 0, loss: 138.42825, overflow:False, scale:1.0\n",
  "step: 1, loss: 118.172104, overflow:False, scale:1.0\n",
  "step: 2, loss: 159.14542, overflow:False, scale:1.0\n",
  "step: 3, loss: 150.65671, overflow:False, scale:1.0\n",
  "... ...\n",
  "step: 97, loss: 69.513245, overflow:False, scale:1.0\n",
  "step: 98, loss: 51.903114, overflow:False, scale:1.0\n",
  "step: 99, loss: 42.250656, overflow:False, scale:1.0\n"
 ] } ], "source": [
  "import numpy as np\n",
  "from mindspore import dataset as ds\n",
  "\n",
  "def get_data(num, w=2.0, b=3.0):\n",
  "    for _ in range(num):\n",
  "        x = np.random.uniform(-10.0, 10.0)\n",
  "        noise = np.random.normal(0, 1)\n",
  "        y = x * w + b + noise\n",
  "        yield np.array([x]).astype(np.float32), np.array([y]).astype(np.float32)\n",
  "\n",
  "\n",
  "def create_dataset(num_data, batch_size=16, repeat_size=1):\n",
  "    input_data = ds.GeneratorDataset(list(get_data(num_data)), column_names=['data', 'label'])\n",
  "    input_data = input_data.batch(batch_size, drop_remainder=True)\n",
  "    input_data = input_data.repeat(repeat_size)\n",
  "    return input_data\n",
  "\n",
  "train_net.set_train()\n",
  "dataset = create_dataset(1600)\n",
  "iterator = dataset.create_tuple_iterator()\n",
  "for i, data in enumerate(iterator):\n",
  "    loss, overflow, scaling_sens = train_net(*data)\n",
  "    print(\"step: {}, loss: {}, overflow:{}, scale:{}\".format(i, loss, overflow, scaling_sens))"
 ] },
 { "cell_type": "markdown", "id": "18a1408b", "metadata": {}, "source": [
  "- Check the optimizer, the loss and the parameter initialization. Besides the model and the dataset, the only new pieces in the training process are the optimizer, the loss and the parameter initialization, so they deserve close attention when training goes wrong; the loss and the parameter initialization are the most likely sources of problems.\n",
  "- For multi-device training, confirm that a seed is set so that initialization is identical on every device, and for [custom training](https://www.mindspore.cn/docs/zh-CN/r2.0.0-alpha/migration_guide/model_development/training_and_gradient.html#自定义训练cell) confirm that gradient aggregation is performed."
 ] },
 { "cell_type": "code", "execution_count": 5, "id": "a0c6fa58", "metadata": { "ExecuteTime": { "end_time": "2022-09-14T08:27:22.667416Z", "start_time": "2022-09-14T08:27:22.667416Z" } }, "outputs": [], "source": [
  "import mindspore as ms\n",
  "ms.set_seed(1)  # fixes the MindSpore, NumPy and dataset random seeds; seeds inside an API must be set via that API's attributes"
 ] },
 { "cell_type": "markdown", "id": "fe6ae175", "metadata": {}, "source": [
  "- Check the data processing. Use visualization or similar means to confirm that the processed data meets expectations, paying particular attention to data shuffling and to data/label mismatches; a quick check is to iterate over a few batches, as sketched below."
 ] },
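 { "cell_type": "markdown", "id": "f3c8d0a7", "metadata": {}, "source": [
  "A minimal sketch of such a check, reusing the `create_dataset` helper defined above (the column names `data` and `label` come from that helper; the sample and batch counts are illustrative only):\n",
  "\n",
  "```python\n",
  "check_ds = create_dataset(64, batch_size=16)\n",
  "for i, sample in enumerate(check_ds.create_dict_iterator(num_epochs=1, output_numpy=True)):\n",
  "    if i >= 3:\n",
  "        break\n",
  "    data, label = sample['data'], sample['label']\n",
  "    # Inspect shapes, dtypes and value ranges of a few batches.\n",
  "    print(i, data.shape, data.dtype, float(data.min()), float(data.max()), label.shape)\n",
  "```"
 ] },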
 { "cell_type": "markdown", "id": "c2a7e310", "metadata": {}, "source": [
  "For more accuracy debugging strategies, see [Accuracy Debugging](https://mindspore.cn/mindinsight/docs/zh-CN/r2.0.0-alpha/accuracy_problem_preliminary_location.html).\n",
  "\n",
  "## Performance Tuning\n",
  "\n",
  "Performance optimization mainly covers:\n",
  "\n",
  "1. Operator performance optimization\n",
  "2. Framework-enabled performance optimization\n",
  "3. Multi-node synchronization performance optimization\n",
  "4. Data processing performance optimization\n",
  "\n",
  "See [ResNet network migration](https://www.mindspore.cn/docs/zh-CN/r2.0.0-alpha/migration_guide/sample_code.html) for a walk-through of the whole process.\n",
  "\n",
  "> Some networks are very large or contain many [control flow statements](https://mindspore.cn/tutorials/experts/zh-CN/r2.0.0-alpha/network/control_flow.html), and such networks compile slowly in graph mode. During performance tuning, distinguish graph compilation from network execution; this section mainly covers tuning strategies for the network execution phase. If graph compilation is confirmed to be slow, try [incremental operator compilation](https://mindspore.cn/tutorials/experts/zh-CN/r2.0.0-alpha/debug/op_compilation.html) or report the issue to the [MindSpore community](https://gitee.com/mindspore/mindspore/issues).\n",
  "\n",
  "### Operator Performance Optimization\n",
  "\n",
  "#### Operator performance problems\n",
  "\n",
  "A single operator taking a long time, or the same operator showing large performance differences across shapes or data types, usually indicates an operator performance problem. Common remedies are:\n",
  "\n",
  "1. Use a data type with less computation. For example, if the same operator shows no noticeable accuracy difference between float16 and float32, use the cheaper float16 format.\n",
  "2. Work around it with another operator that implements the same algorithm.\n",
  "3. Pay attention to 16-alignment on Ascend. Due to the design of the Ascend chips, computation on AICore works best when shapes are 16-aligned (every dimension of the shape is a multiple of 16).\n",
  "4. [Operator tuning](https://mindspore.cn/tutorials/experts/zh-CN/r2.0.0-alpha/debug/auto_tune.html).\n",
  "\n",
  "If you find an operator with poor performance, please report it to the [MindSpore community](https://gitee.com/mindspore/mindspore/issues); once it is confirmed as a performance problem, it will be optimized promptly.\n",
  "\n",
  "### Framework-Enabled Performance Optimization\n",
  "\n",
  "#### Use static graph mode\n",
  "\n",
  "MindSpore is usually much faster in static graph mode than in PyNative mode, so training and inference should preferably run in static graph mode. For the underlying principles, see [Combining Dynamic and Static Graphs](https://www.mindspore.cn/docs/zh-CN/r2.0.0-alpha/design/dynamic_graph_and_static_graph.html).\n",
  "\n",
  "#### On-device execution\n",
  "\n",
  "MindSpore provides an [on-device execution](https://www.mindspore.cn/docs/zh-CN/r2.0.0-alpha/design/overview.html#面向昇腾硬件的竞争力优化) mode that runs data processing and network execution on the device in parallel; simply set `dataset_sink_mode=True` in `model.train`. Note that this option defaults to `True`. When it is enabled, only one result is returned per epoch, so it is recommended to switch it to `False` while debugging."
 ] },
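 { "cell_type": "markdown", "id": "9d4b7e21", "metadata": {}, "source": [
  "A minimal sketch of the two settings, reusing the `model` and `dataset` objects built earlier in this notebook; the epoch count is illustrative only.\n",
  "\n",
  "```python\n",
  "# Debugging: return results step by step so each step can be observed.\n",
  "model.train(1, dataset, dataset_sink_mode=False)\n",
  "\n",
  "# Performance: sink data to the device; only one result comes back per epoch.\n",
  "model.train(1, dataset, dataset_sink_mode=True)\n",
  "```"
 ] },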
"start_time": "2022-09-14T08:27:22.670026Z" } }, "source": [ "```python\n", "import os\n", "import mindspore as ms\n", "from mindspore.communication import init\n", "\n", "device_id = int(os.getenv('DEVICE_ID', '0'))\n", "rank_size = int(os.getenv('RANK_SIZE', '1'))\n", "rank_id = int(os.getenv('RANK_ID', '0'))\n", "\n", "# init context\n", "ms.set_context(mode=ms.GRAPH_MODE, device_target='Ascend', device_id=device_id)\n", "if rank_size > 1:\n", " ms.set_auto_parallel_context(device_num=rank_size, parallel_mode=ms.ParallelMode.DATA_PARALLEL,\n", " gradients_mean=True)\n", " ms.set_auto_parallel_context(all_reduce_fusion_config=[85, 160])\n", " init()\n", "```" ] }, { "cell_type": "markdown", "id": "172d2a53", "metadata": {}, "source": [ "更多请参考[集群性能调试](https://www.mindspore.cn/mindinsight/docs/zh-CN/r2.0.0-alpha/performance_profiling_of_cluster.html)。\n", "\n", "### 数据处理性能优化\n", "\n", "单Step性能抖动、数据队列一段时间内持续为空的情况都是由于数据预处理部分性能较差,使得数据处理速度跟不上单Step迭代速度导致,这两个现象通常成对出现。\n", "\n", "当数据处理速度较慢时,队列从最开始的满队列情况逐渐消耗为空队列,训练进程会开始等待空队列填入数据,一旦有新的数据填入,网络才会继续进行单Step训练。由于数据处理没有队列作为缓冲,数据处理的性能抖动直接体现在单Step的性能上,因此还会造成单Step性能抖动。\n", "\n", "关于数据的性能问题,可以参考 MindInsight 组件的 [数据准备性能分析](https://www.mindspore.cn/mindinsight/docs/zh-CN/r2.0.0-alpha/performance_profiling_ascend.html#数据准备性能分析),其给出了数据性能的常见问题及解决方法。\n", "\n", "更多性能调试方法请参考[性能调优](https://www.mindspore.cn/tutorials/experts/zh-CN/r2.0.0-alpha/debug/performance_optimization.html)。" ] } ], "metadata": { "kernelspec": { "display_name": "MindSpore", "language": "python", "name": "mindspore" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.5" } }, "nbformat": 4, "nbformat_minor": 5 }