{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 优化模型参数\n", "\n", "[![](https://gitee.com/mindspore/docs/raw/r1.2/docs/programming_guide/source_zh_cn/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/r1.2/tutorials/source_zh_cn/optimization.ipynb) [![](https://gitee.com/mindspore/docs/raw/r1.2/resource/_static/logo_notebook.png)](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/r1.2/quick_start/mindspore_optimization.ipynb) \n", "\n", "通过上面章节的学习,我们已经学会如何创建模型和构建数据集,现在开始学习如何设置超参和优化模型参数。\n", "\n", "## 超参\n", "\n", "超参是可以调整的参数,可以控制模型训练优化的过程,不同的超参数值可能会影响模型训练和收敛速度。\n", "\n", "一般会定义以下用于训练的超参:\n", "\n", "- 训练轮次(epoch):训练时遍历数据集的次数。\n", "- 批次大小(batch size):数据集进行分批读取训练,设定每个批次数据的大小。\n", "- 学习率(learning rate):如果学习率偏小,会导致收敛的速度变慢,如果学习率偏大则可能会导致训练不收敛等不可预测的结果。" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "epochs = 5\n", "batch_size = 64\n", "learning_rate = 1e-3" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 损失函数\n", "\n", "**损失函数**用来评价模型的**预测值**和**真实值**不一样的程度,在这里,使用绝对误差损失函数`L1Loss`。`mindspore.nn.loss`也提供了许多其他常用的损失函数,如`SoftmaxCrossEntropyWithLogits`、`MSELoss`、`SmoothL1Loss`等。\n", "\n", "我们给定输出值和目标值,计算损失值,使用方法如下所示:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "1.5\n" ] } ], "source": [ "import numpy as np\n", "import mindspore.nn as nn\n", "from mindspore import Tensor\n", "\n", "loss = nn.L1Loss()\n", "output_data = Tensor(np.array([[1, 2, 3], [2, 3, 4]]).astype(np.float32))\n", "target_data = Tensor(np.array([[0, 2, 5], [3, 1, 1]]).astype(np.float32))\n", "print(loss(output_data, target_data))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "\n", "## 优化器\n", "\n", "优化器用于计算和更新梯度,模型优化算法的选择直接关系到最终模型的性能,如果有时候效果不好,未必是特征或者模型设计的问题,很有可能是优化算法的问题。MindSpore所有优化逻辑都封装在`Optimizer`对象中,在这里,我们使用SGD优化器。`mindspore.nn.optim`也提供了许多其他常用的优化器,如`ADAM`、`Momentum`。\n", "\n", "使用`mindspore.nn.optim`,我们需要构建一个`Optimizer`对象,这个对象能够保持当前参数状态并基于计算得到的梯度进行参数更新。\n", "\n", "为了构建一个`Optimizer`,我们需要给它一个包含可需要优化的参数(必须是Variable对象)的迭代器,如网络中所有可以训练的`parameter`,将`params`设置为`net.trainable_params()`即可。然后,你可以设置Optimizer的参数选项,比如学习率、权重衰减等等。\n", "\n", "代码样例如下:\n", "\n", "```python\n", "from mindspore import nn\n", "\n", "optim = nn.SGD(params=net.trainable_params(), learning_rate=0.1, weight_decay=0.0)\n", "```\n", "\n", "## 训练\n", "\n", "在模型训练过程中,一般分为四个步骤。\n", "\n", "1. 定义神经网络。\n", "2. 构建数据集。\n", "3. 定义超参、损失函数及优化器。\n", "4. 输入训练轮次和数据集进行训练。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "模型训练示例代码如下所示:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [], "source": [ "import mindspore.dataset as ds\n", "import mindspore.dataset.transforms.c_transforms as C\n", "import mindspore.dataset.vision.c_transforms as CV\n", "from mindspore import nn, Tensor, Model\n", "from mindspore import dtype as mstype\n", "\n", "DATA_DIR = \"./datasets/cifar-10-batches-bin/train\"\n", "\n", "# 定义神经网络\n", "class Net(nn.Cell):\n", " def __init__(self, num_class=10, num_channel=3):\n", " super(Net, self).__init__()\n", " self.conv1 = nn.Conv2d(num_channel, 6, 5, pad_mode='valid')\n", " self.conv2 = nn.Conv2d(6, 16, 5, pad_mode='valid')\n", " self.fc1 = nn.Dense(16 * 5 * 5, 120)\n", " self.fc2 = nn.Dense(120, 84)\n", " self.fc3 = nn.Dense(84, num_class)\n", " self.relu = nn.ReLU()\n", " self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)\n", " self.flatten = nn.Flatten()\n", "\n", " def construct(self, x):\n", " x = self.conv1(x)\n", " x = self.relu(x)\n", " x = self.max_pool2d(x)\n", " x = self.conv2(x)\n", " x = self.relu(x)\n", " x = self.max_pool2d(x)\n", " x = self.flatten(x)\n", " x = self.fc1(x)\n", " x = self.relu(x)\n", " x = self.fc2(x)\n", " x = self.relu(x)\n", " x = self.fc3(x)\n", " return x\n", "\n", "net = Net()\n", "epochs = 5\n", "batch_size = 64\n", "learning_rate = 1e-3\n", "\n", "# 构建数据集\n", "sampler = ds.SequentialSampler(num_samples=128)\n", "dataset = ds.Cifar10Dataset(DATA_DIR, sampler=sampler)\n", "\n", "# 数据类型转换\n", "type_cast_op_image = C.TypeCast(mstype.float32)\n", "type_cast_op_label = C.TypeCast(mstype.int32)\n", "HWC2CHW = CV.HWC2CHW()\n", "dataset = dataset.map(operations=[type_cast_op_image, HWC2CHW], input_columns=\"image\")\n", "dataset = dataset.map(operations=type_cast_op_label, input_columns=\"label\")\n", "dataset = dataset.batch(batch_size)\n", "\n", "# 定义超参、损失函数及优化器\n", "optim = nn.SGD(params=net.trainable_params(), learning_rate=learning_rate)\n", "loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')\n", "\n", "# 输入训练轮次和数据集进行训练\n", "model = Model(net, loss_fn=loss, optimizer=optim)\n", "model.train(epoch=epochs, train_dataset=dataset)" ] } ], "metadata": { "kernelspec": { "display_name": "MindSpore-1.1.1", "language": "python", "name": "mindspore-1.1.1" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.5" } }, "nbformat": 4, "nbformat_minor": 4 }