{ "cells": [
 { "cell_type": "markdown", "id": "487f7a6d-506a-4824-94b1-a96b4866a447", "metadata": {}, "source": [
  "[![Run in ModelArts](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0/resource/_static/logo_modelarts.png)](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9vYnMuZHVhbHN0YWNrLmNuLW5vcnRoLTQubXlodWF3ZWljbG91ZC5jb20vbWluZHNwb3JlLXdlYnNpdGUvbm90ZWJvb2svcjIuMC90dXRvcmlhbHMvemhfY24vYWR2YW5jZWQvbW9kdWxlcy9taW5kc3BvcmVfb3B0aW1pemVyLmlweW5i=&imageid=e225a9aa-230a-4ea5-a538-b5faed64a6a6) [![Download Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0/resource/_static/logo_notebook.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.0/tutorials/zh_cn/advanced/modules/mindspore_optimizer.ipynb) [![Download Sample Code](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0/resource/_static/logo_download_code.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.0/tutorials/zh_cn/advanced/modules/mindspore_optimizer.py) [![View Source](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0/resource/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/r2.0/tutorials/source_zh_cn/advanced/modules/optimizer.ipynb)"
 ] },
 { "cell_type": "markdown", "id": "e9fc48e8", "metadata": {}, "source": [
  "# Optimizer\n",
  "\n",
  "During model training, an optimizer is used to update the network parameters. A suitable optimizer can effectively reduce the training time and improve model performance.\n",
  "\n",
  "The most basic optimizer is stochastic gradient descent (SGD). Many optimizers improve on SGD so that the objective function converges to the global optimum faster and more effectively. The `nn` module in MindSpore provides commonly used optimizers such as `nn.SGD`, `nn.Adam`, and `nn.Momentum`. This chapter describes how to configure the optimizers provided by MindSpore and how to customize an optimizer.\n",
  "\n",
  "![learningrate.png](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.0/tutorials/source_zh_cn/advanced/modules/images/learning_rate.png)\n",
  "\n",
  "> For details about the optimizers provided by MindSpore, see the [Optimizer API](https://www.mindspore.cn/docs/zh-CN/r2.0/api_python/mindspore.nn.html#优化器).\n",
  "\n",
  "## Configuring an Optimizer\n",
  "\n",
  "When using an optimizer provided by MindSpore, first specify the network parameters `params` to be optimized, and then set the optimizer's other main parameters, such as the learning rate `learning_rate` and the weight decay `weight_decay`.\n",
  "\n",
  "To set options separately for different network parameters, for example, different learning rates for convolution and non-convolution parameters, configure the optimizer with grouped parameters.\n",
  "\n",
  "### Parameter Configuration\n",
  "\n",
  "When building an optimizer instance, the weights to be trained and updated in the network are configured through the optimizer argument `params`. `Parameter` contains a boolean class attribute `requires_grad`, which indicates whether a network parameter needs to be updated.\n",
  "\n",
  "For most parameters in a network the default value of `requires_grad` is True; for a few it is False, for example `moving_mean` and `moving_variance` in BatchNorm.\n",
  "\n",
  "The `trainable_params` method in MindSpore filters out the `Parameter`s whose `requires_grad` attribute is False. When configuring the `params` argument of an optimizer, `net.trainable_params()` can be used to specify the network parameters to be optimized and updated."
 ] },
 { "cell_type": "code", "execution_count": 1, "id": "d21d5a57", "metadata": {}, "outputs": [
  { "name": "stdout", "output_type": "stream", "text": [
   "[Parameter (name=param, shape=(1,), dtype=Float32, requires_grad=True), Parameter (name=conv.weight, shape=(6, 1, 5, 5), dtype=Float32, requires_grad=True)]\n"
  ] }
 ], "source": [
  "import numpy as np\n",
  "import mindspore\n",
  "from mindspore import nn, ops\n",
  "from mindspore import Tensor, Parameter\n",
  "\n",
  "class Net(nn.Cell):\n",
  "    def __init__(self):\n",
  "        super().__init__()\n",
  "        self.conv = nn.Conv2d(1, 6, 5, pad_mode=\"valid\")\n",
  "        self.param = Parameter(Tensor(np.array([1.0], np.float32)), 'param')\n",
  "\n",
  "    def construct(self, x):\n",
  "        x = self.conv(x)\n",
  "        x = x * self.param\n",
  "        out = ops.matmul(x, x)\n",
  "        return out\n",
  "\n",
  "net = Net()\n",
  "\n",
  "# Configure the parameters to be updated by the optimizer\n",
  "optim = nn.Adam(params=net.trainable_params())\n",
  "print(net.trainable_params())"
 ] },
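 { "cell_type": "markdown", "id": "f1a2b3c4", "metadata": {}, "source": [
  "Once `params` is configured, the optimizer updates the weights when it is called with the gradients. The following cell is a minimal sketch of a single training step using the functional `mindspore.value_and_grad` interface; the random input `x`, the `forward_fn` helper, and the simple sum-based loss are illustrative assumptions and are not part of the original example."
 ] },
 { "cell_type": "code", "execution_count": null, "id": "0e9d8c7b", "metadata": {}, "outputs": [], "source": [
  "# A minimal sketch of one training step with the optimizer configured above.\n",
  "# The input shape and the sum-based loss are illustrative assumptions.\n",
  "x = Tensor(np.random.randn(1, 1, 32, 32).astype(np.float32))\n",
  "\n",
  "def forward_fn(data):\n",
  "    out = net(data)\n",
  "    return out.sum()  # placeholder loss, for illustration only\n",
  "\n",
  "# Compute the loss and the gradients with respect to the optimizer's parameters\n",
  "grad_fn = mindspore.value_and_grad(forward_fn, grad_position=None, weights=optim.parameters)\n",
  "loss, grads = grad_fn(x)\n",
  "\n",
  "# Calling the optimizer with the gradients applies one update to the parameters\n",
  "optim(grads)"
 ] },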
 { "cell_type": "markdown", "id": "b90ab617", "metadata": {}, "source": [
  "Users can manually change the default value of the `requires_grad` attribute of a `Parameter` in the network weights to decide which parameters should be updated.\n",
  "\n",
  "As shown in the following example, use the `net.get_parameters()` method to obtain all parameters in the network and manually set the `requires_grad` attribute of the convolution parameters to False; during training, only the non-convolution parameters will then be updated."
 ] },
 { "cell_type": "code", "execution_count": 2, "id": "61282381", "metadata": {}, "outputs": [
  { "name": "stdout", "output_type": "stream", "text": [
   "[Parameter (name=param, shape=(1,), dtype=Float32, requires_grad=True)]\n"
  ] }
 ], "source": [
  "conv_params = [param for param in net.get_parameters() if 'conv' in param.name]\n",
  "for conv_param in conv_params:\n",
  "    conv_param.requires_grad = False\n",
  "print(net.trainable_params())\n",
  "optim = nn.Adam(params=net.trainable_params())"
 ] },
 { "cell_type": "markdown", "id": "b3261183", "metadata": {}, "source": [
  "### Learning Rate\n",
  "\n",
  "The learning rate is a common hyperparameter in machine learning and deep learning. It has a major influence on whether and when the objective function converges to a local minimum. A learning rate that is too large makes the objective function fluctuate strongly and hard to converge to the optimum, while one that is too small makes convergence take too long. Besides fixed learning rates, MindSpore also supports dynamic learning rates, which can noticeably improve convergence efficiency in deep neural networks.\n",
  "\n",
  "#### Fixed Learning Rate\n",
  "\n",
  "When a fixed learning rate is used, the `learning_rate` passed to the optimizer is a float or a scalar Tensor.\n",
  "\n",
  "Taking `nn.Momentum` as an example, with a fixed learning rate of 0.01:"
 ] },
 { "cell_type": "code", "execution_count": 3, "id": "b3906fa2", "metadata": {}, "outputs": [], "source": [
  "# Set the learning rate to 0.01\n",
  "optim = nn.Momentum(params=net.trainable_params(), learning_rate=0.01, momentum=0.9)"
 ] },
 { "cell_type": "markdown", "id": "7e458505", "metadata": {}, "source": [
  "#### Dynamic Learning Rate\n",
  "\n",
  "`mindspore.nn` provides dynamic learning rate modules of two kinds: Dynamic LR functions and LearningRateSchedule classes. A Dynamic LR function generates, in advance, a list of learning rates of length `total_step` and passes the list to the optimizer; during training, the i-th value in the list is used as the learning rate of step i, and `total_step` must not be smaller than the total number of training steps. A LearningRateSchedule class is instantiated and the instance is passed to the optimizer, which then computes the current learning rate from the current step.\n",
  "\n",
  "- Dynamic LR functions\n",
  "\n",
  "  The [Dynamic LR](https://www.mindspore.cn/docs/zh-CN/r2.0/api_python/mindspore.nn.html#dynamic-lr%E5%87%BD%E6%95%B0) functions currently include cosine decay (`nn.cosine_decay_lr`), exponential decay (`nn.exponential_decay_lr`), inverse time decay (`nn.inverse_decay_lr`), natural exponential decay (`nn.natural_exp_decay_lr`), piecewise constant learning rate (`nn.piecewise_constant_lr`), polynomial decay (`nn.polynomial_decay_lr`), and warm-up learning rate (`nn.warmup_lr`).\n",
  "\n",
  "The following example uses the piecewise constant learning rate `nn.piecewise_constant_lr`:"
 ] },
 { "cell_type": "code", "execution_count": 4, "id": "59c06ad2", "metadata": {}, "outputs": [
  { "name": "stdout", "output_type": "stream", "text": [
   "[0.1, 0.05, 0.05, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01]\n"
  ] }
 ], "source": [
  "milestone = [1, 3, 10]\n",
  "learning_rates = [0.1, 0.05, 0.01]\n",
  "lr = nn.piecewise_constant_lr(milestone, learning_rates)\n",
  "\n",
  "# Print the learning rates\n",
  "print(lr)\n",
  "\n",
  "net = Net()\n",
  "# Set the network parameters to be optimized and the piecewise constant learning rate\n",
  "optim = nn.SGD(net.trainable_params(), learning_rate=lr)"
 ] },
 { "cell_type": "markdown", "id": "fda06fa3", "metadata": {}, "source": [
  "- LearningRateSchedule classes\n",
  "\n",
  "  The [LearningRateSchedule classes](https://www.mindspore.cn/docs/zh-CN/r2.0/api_python/mindspore.nn.html#learningrateschedule%E7%B1%BB) currently include cosine decay (`nn.CosineDecayLR`), exponential decay (`nn.ExponentialDecayLR`), inverse time decay (`nn.InverseDecayLR`), natural exponential decay (`nn.NaturalExpDecayLR`), polynomial decay (`nn.PolynomialDecayLR`), and warm-up learning rate (`nn.WarmUpLR`).\n",
  "\n",
  "The following example uses the exponential decay schedule `nn.ExponentialDecayLR`:"
 ] },
 { "cell_type": "code", "execution_count": 5, "id": "5b440221", "metadata": {}, "outputs": [
  { "name": "stdout", "output_type": "stream", "text": [
   "step1, lr:0.1\n",
   "step2, lr:0.097400375\n",
   "step3, lr:0.094868325\n",
   "step4, lr:0.09240211\n"
  ] }
 ], "source": [
  "learning_rate = 0.1  # initial learning rate\n",
  "decay_rate = 0.9  # decay rate\n",
  "decay_steps = 4  # number of decay steps\n",
  "step_per_epoch = 2\n",
  "\n",
  "exponential_decay_lr = nn.ExponentialDecayLR(learning_rate, decay_rate, decay_steps)\n",
  "\n",
  "for i in range(decay_steps):\n",
  "    step = Tensor(i, mindspore.int32)\n",
  "    result = exponential_decay_lr(step)\n",
  "    print(f\"step{i+1}, lr:{result}\")\n",
  "\n",
  "net = Net()\n",
  "\n",
  "# Use the exponential decay schedule as the optimizer's learning rate\n",
  "optim = nn.Momentum(net.trainable_params(), learning_rate=exponential_decay_lr, momentum=0.9)"
 ] },
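 { "cell_type": "markdown", "id": "a7b8c9d0", "metadata": {}, "source": [
  "The values printed above follow the exponential decay rule `learning_rate * decay_rate ** (current_step / decay_steps)`, which `nn.ExponentialDecayLR` evaluates for the step it is given. As a quick check, the following sketch recomputes the same values in plain Python; it is for illustration only and is not part of the training flow."
 ] },
 { "cell_type": "code", "execution_count": null, "id": "1f2e3d4c", "metadata": {}, "outputs": [], "source": [
  "# Recompute the learning rates printed above with the exponential decay rule:\n",
  "# decayed_lr = learning_rate * decay_rate ** (current_step / decay_steps)\n",
  "for i in range(decay_steps):\n",
  "    decayed_lr = learning_rate * decay_rate ** (i / decay_steps)\n",
  "    print(f\"step{i+1}, lr:{decayed_lr:.6f}\")"
 ] },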
 { "cell_type": "markdown", "id": "1934aabe", "metadata": {}, "source": [
  "### Weight Decay\n",
  "\n",
  "Weight decay, usually also known as L2 regularization, is a method for mitigating overfitting in deep neural networks.\n",
  "\n",
  "In general, `weight_decay` takes values in $[0, 1)$, and its default value is 0.0, in which case no weight decay is applied."
 ] },
 { "cell_type": "code", "execution_count": 6, "id": "e0a8d738", "metadata": {}, "outputs": [], "source": [
  "net = Net()\n",
  "optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.01,\n",
  "                        momentum=0.9, weight_decay=0.9)"
 ] },
 { "cell_type": "markdown", "id": "4a7dbb89", "metadata": {}, "source": [
  "In addition, MindSpore supports dynamic weight decay. In this case `weight_decay` is a user-defined Cell. During training, the optimizer calls this Cell instance and passes in `global_step` to compute the weight decay value of the current step, where `global_step` is an internally maintained variable that is incremented by 1 after every training step. The following is an example in which the weight decay decays exponentially during training."
 ] },
 { "cell_type": "code", "execution_count": 7, "id": "639bef5b", "metadata": {}, "outputs": [], "source": [
  "from mindspore.nn import Cell\n",
  "from mindspore import ops, nn\n",
  "import mindspore as ms\n",
  "\n",
  "class ExponentialWeightDecay(Cell):\n",
  "\n",
  "    def __init__(self, weight_decay, decay_rate, decay_steps):\n",
  "        super(ExponentialWeightDecay, self).__init__()\n",
  "        self.weight_decay = weight_decay\n",
  "        self.decay_rate = decay_rate\n",
  "        self.decay_steps = decay_steps\n",
  "\n",
  "    def construct(self, global_step):\n",
  "        # construct can take only one input; during training, the global step is passed in automatically\n",
  "        p = global_step / self.decay_steps\n",
  "        return self.weight_decay * ops.pow(self.decay_rate, p)\n",
  "\n",
  "net = Net()\n",
  "\n",
  "weight_decay = ExponentialWeightDecay(weight_decay=0.0001, decay_rate=0.1, decay_steps=10000)\n",
  "optimizer = nn.Momentum(net.trainable_params(), learning_rate=0.01,\n",
  "                        momentum=0.9, weight_decay=weight_decay)"
 ] },
 { "cell_type": "markdown", "id": "2e3cd2e8", "metadata": {}, "source": [
  "### Hyperparameter Grouping\n",
  "\n",
  "The optimizer also supports setting options separately for different parameters. In this case, instead of passing the parameters directly, a list of dictionaries is passed in, where each dictionary corresponds to the settings of one group of parameters. The keys available in each dictionary are `params`, `lr`, `weight_decay`, and `grad_centralization`, and the values are the corresponding settings.\n",
  "\n",
  "`params` is mandatory; the other options are optional, and options that are not configured fall back to the values set when the optimizer is defined. When grouping, the learning rate can be either a fixed or a dynamic learning rate, and `weight_decay` can be a fixed value.\n",
  "\n",
  "The following example sets different learning rates and weight decay values for the convolution and non-convolution parameters."
 ] },
 { "cell_type": "code", "execution_count": 8, "id": "616fd8a2", "metadata": {}, "outputs": [], "source": [
  "net = Net()\n",
  "\n",
  "# Convolution parameters\n",
  "conv_params = list(filter(lambda x: 'conv' in x.name, net.trainable_params()))\n",
  "# Non-convolution parameters\n",
  "no_conv_params = list(filter(lambda x: 'conv' not in x.name, net.trainable_params()))\n",
  "\n",
  "# Fixed learning rate\n",
  "fix_lr = 0.01\n",
  "\n",
  "# Polynomial decay learning rate\n",
  "polynomial_decay_lr = nn.PolynomialDecayLR(learning_rate=0.1,      # initial learning rate\n",
  "                                           end_learning_rate=0.01, # final learning rate\n",
  "                                           decay_steps=4,          # number of decay steps\n",
  "                                           power=0.5)              # polynomial power\n",
  "\n",
  "# The convolution parameters use the fixed learning rate 0.01 and a weight decay of 0.01\n",
  "# The non-convolution parameters use the dynamic learning rate and a weight decay of 0.0\n",
  "group_params = [{'params': conv_params, 'weight_decay': 0.01, 'lr': fix_lr},\n",
  "                {'params': no_conv_params, 'lr': polynomial_decay_lr}]\n",
  "\n",
  "optim = nn.Momentum(group_params, learning_rate=0.1, momentum=0.9, weight_decay=0.0)"
 ] },
 { "cell_type": "markdown", "id": "193262b5", "metadata": {}, "source": [
  "> Currently, all optimizers in MindSpore except a few (for example, AdaFactor and FTRL) support grouping the learning rate. For details, see the [Optimizer API](https://www.mindspore.cn/docs/zh-CN/r2.0/api_python/mindspore.nn.html#优化器).\n",
  "\n",
  "## Customizing an Optimizer\n",
  "\n",
  "In addition to the optimizers provided by MindSpore, users can also define their own optimizers.\n",
  "\n",
  "To customize an optimizer, inherit from the optimizer base class [nn.Optimizer](https://www.mindspore.cn/docs/zh-CN/r2.0/api_python/nn/mindspore.nn.Optimizer.html#mindspore.nn.Optimizer) and override the `__init__` and `construct` methods to implement your own parameter update strategy.\n",
  "\n",
  "The following example implements the custom optimizer Momentum (SGD with momentum):\n",
  "\n",
  "$$ v_{t+1} = v_{t} \\times u + grad \\tag{1} $$\n",
  "\n",
  "$$ p_{t+1} = p_{t} - lr \\times v_{t+1} \\tag{2} $$\n",
  "\n",
  "where $grad$, $lr$, $p$, $v$, and $u$ denote the gradient, the learning rate, the weight parameter, the moment (velocity), and the momentum coefficient, respectively."
 ] },
 { "cell_type": "code", "execution_count": 9, "id": "c40a31dd", "metadata": {}, "outputs": [], "source": [
  "class Momentum(nn.Optimizer):\n",
  "    \"\"\"Define the optimizer\"\"\"\n",
  "    def __init__(self, params, learning_rate, momentum=0.9):\n",
  "        super(Momentum, self).__init__(learning_rate, params)\n",
  "        self.momentum = Parameter(Tensor(momentum, ms.float32), name=\"momentum\")\n",
  "        self.moments = self.parameters.clone(prefix=\"moments\", init=\"zeros\")\n",
  "\n",
  "    def construct(self, gradients):\n",
  "        \"\"\"The input of construct is the gradients, which are passed in automatically during training\"\"\"\n",
  "        lr = self.get_lr()\n",
  "        params = self.parameters  # weight parameters to be updated\n",
  "\n",
  "        for i in range(len(params)):\n",
  "            # Update the moments\n",
  "            ops.assign(self.moments[i], self.moments[i] * self.momentum + gradients[i])\n",
  "            update = params[i] - self.moments[i] * lr  # SGD with momentum\n",
  "            ops.assign(params[i], update)\n",
  "        return params\n",
  "\n",
  "net = Net()\n",
  "# Set the parameters to be optimized and a learning rate of 0.01\n",
  "opt = Momentum(net.trainable_params(), 0.01)"
 ] },
 { "cell_type": "markdown", "id": "cdf281c6", "metadata": {}, "source": [
  "`mindspore.ops` also provides optimizer operators that can be used to define optimizers, such as `ops.ApplyCenteredRMSProp`, `ops.ApplyMomentum`, and `ops.ApplyRMSProp`. The following example uses the `ApplyMomentum` operator to implement the custom optimizer Momentum:"
 ] },
 { "cell_type": "code", "execution_count": 10, "id": "82602027", "metadata": {}, "outputs": [], "source": [
  "class Momentum(nn.Optimizer):\n",
  "    \"\"\"Define the optimizer\"\"\"\n",
  "    def __init__(self, params, learning_rate, momentum=0.9):\n",
  "        super(Momentum, self).__init__(learning_rate, params)\n",
  "        self.moments = self.parameters.clone(prefix=\"moments\", init=\"zeros\")\n",
  "        self.momentum = momentum\n",
  "        self.opt = ops.ApplyMomentum()\n",
  "\n",
  "    def construct(self, gradients):\n",
  "        # Weight parameters to be updated\n",
  "        params = self.parameters\n",
  "        success = None\n",
  "        for param, mom, grad in zip(params, self.moments, gradients):\n",
  "            success = self.opt(param, mom, self.learning_rate, grad, self.momentum)\n",
  "        return success\n",
  "\n",
  "net = Net()\n",
  "# Set the parameters to be optimized and a learning rate of 0.01\n",
  "opt = Momentum(net.trainable_params(), 0.01)"
 ] }
 ], "metadata": { "kernelspec": { "display_name": "MindSpore", "language": "python", "name": "mindspore" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.5" } }, "nbformat": 4, "nbformat_minor": 5 }