{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.4.10/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.4.10/tutorials/zh_cn/beginner/mindspore_train.ipynb) [![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.4.10/resource/_static/logo_download_code.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.4.10/tutorials/zh_cn/beginner/mindspore_train.py) [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.4.10/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.4.10/tutorials/source_zh_cn/beginner/train.ipynb)\n",
    "\n",
    "[基本介绍](https://www.mindspore.cn/tutorials/zh-CN/r2.4.10/beginner/introduction.html) || [快速入门](https://www.mindspore.cn/tutorials/zh-CN/r2.4.10/beginner/quick_start.html) || [张量 Tensor](https://www.mindspore.cn/tutorials/zh-CN/r2.4.10/beginner/tensor.html) || [数据加载与处理](https://www.mindspore.cn/tutorials/zh-CN/r2.4.10/beginner/dataset.html) || [网络构建](https://www.mindspore.cn/tutorials/zh-CN/r2.4.10/beginner/model.html) || [函数式自动微分](https://www.mindspore.cn/tutorials/zh-CN/r2.4.10/beginner/autograd.html) || **模型训练** || [保存与加载](https://www.mindspore.cn/tutorials/zh-CN/r2.4.10/beginner/save_load.html) || [使用静态图加速](https://www.mindspore.cn/tutorials/zh-CN/r2.4.10/beginner/accelerate_with_static_graph.html) || [自动混合精度](https://www.mindspore.cn/tutorials/zh-CN/r2.4.10/beginner/mixed_precision.html) ||"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "# 模型训练\n",
    "\n",
    "模型训练一般分为四个步骤:\n",
    "\n",
    "1. 构建数据集。\n",
    "2. 定义神经网络模型。\n",
    "3. 定义超参、损失函数及优化器。\n",
    "4. 输入数据集进行训练与评估。\n",
    "\n",
    "现在我们有了数据集和模型后,可以进行模型的训练与评估。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 构建数据集\n",
    "\n",
    "首先从[数据加载与处理](https://www.mindspore.cn/tutorials/zh-CN/r2.4.10/beginner/dataset.html)加载代码,构建数据集。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading data from https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/MNIST_Data.zip (10.3 MB)\n",
      "\n",
      "file_sizes: 100%|██████████████████████████| 10.8M/10.8M [00:05<00:00, 2.07MB/s]\n",
      "Extracting zip file...\n",
      "Successfully downloaded / unzipped to ./\n"
     ]
    }
   ],
   "source": [
    "import mindspore\n",
    "from mindspore import nn\n",
    "from mindspore.dataset import vision, transforms\n",
    "from mindspore.dataset import MnistDataset\n",
    "\n",
    "# Download data from open datasets\n",
    "from download import download\n",
    "\n",
    "url = \"https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/\" \\\n",
    "      \"notebook/datasets/MNIST_Data.zip\"\n",
    "path = download(url, \"./\", kind=\"zip\", replace=True)\n",
    "\n",
    "\n",
    "def datapipe(path, batch_size):\n",
    "    image_transforms = [\n",
    "        vision.Rescale(1.0 / 255.0, 0),\n",
    "        vision.Normalize(mean=(0.1307,), std=(0.3081,)),\n",
    "        vision.HWC2CHW()\n",
    "    ]\n",
    "    label_transform = transforms.TypeCast(mindspore.int32)\n",
    "\n",
    "    dataset = MnistDataset(path)\n",
    "    dataset = dataset.map(image_transforms, 'image')\n",
    "    dataset = dataset.map(label_transform, 'label')\n",
    "    dataset = dataset.batch(batch_size)\n",
    "    return dataset\n",
    "\n",
    "train_dataset = datapipe('MNIST_Data/train', batch_size=64)\n",
    "test_dataset = datapipe('MNIST_Data/test', batch_size=64)"
   ]
  },
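  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "As a quick sanity check (an addition for illustration, not part of the original tutorial flow), we can peek at one batch to confirm the shapes and dtypes the pipeline produces:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "# Inspect a single batch: images should be (64, 1, 28, 28) float32\n",
    "# and labels (64,) int32, matching the transforms defined above.\n",
    "for image, label in train_dataset.create_tuple_iterator():\n",
    "    print(image.shape, image.dtype)\n",
    "    print(label.shape, label.dtype)\n",
    "    break"
   ]
  },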
  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 定义神经网络模型\n",
    "\n",
    "从[网络构建](https://www.mindspore.cn/tutorials/zh-CN/r2.4.10/beginner/model.html)中加载代码,构建一个神经网络模型。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "class Network(nn.Cell):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.flatten = nn.Flatten()\n",
    "        self.dense_relu_sequential = nn.SequentialCell(\n",
    "            nn.Dense(28*28, 512),\n",
    "            nn.ReLU(),\n",
    "            nn.Dense(512, 512),\n",
    "            nn.ReLU(),\n",
    "            nn.Dense(512, 10)\n",
    "        )\n",
    "\n",
    "    def construct(self, x):\n",
    "        x = self.flatten(x)\n",
    "        logits = self.dense_relu_sequential(x)\n",
    "        return logits\n",
    "\n",
    "model = Network()"
   ]
  },
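  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "A minimal sanity check of the model (the random dummy input below is an illustrative assumption): feeding one 28×28 image through the network should yield a vector of 10 logits."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Dummy batch of one 1x28x28 image; Flatten reshapes it to (1, 784)\n",
    "# before the Dense layers, so the output is (1, 10) logits.\n",
    "dummy = mindspore.Tensor(np.random.rand(1, 1, 28, 28).astype(np.float32))\n",
    "print(model(dummy).shape)"
   ]
  },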
  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 定义超参、损失函数和优化器\n",
    "\n",
    "### 超参\n",
    "\n",
    "超参(Hyperparameters)是可以调整的参数,可以控制模型训练优化的过程,不同的超参数值可能会影响模型训练和收敛速度。目前深度学习模型多采用批量随机梯度下降算法进行优化,随机梯度下降算法的原理如下:\n",
    "\n",
    "$$w_{t+1}=w_{t}-\\eta \\frac{1}{n} \\sum_{x \\in \\mathcal{B}} \\nabla l\\left(x, w_{t}\\right)$$\n",
    "\n",
    "公式中,$n$是批量大小(batch size),$η$是学习率(learning rate)。另外,$w_{t}$为训练轮次$t$中的权重参数,$\\nabla l$为损失函数的导数。除了梯度本身,这两个因子直接决定了模型的权重更新,从优化本身来看,它们是影响模型性能收敛最重要的参数。一般会定义以下超参用于训练:\n",
    "\n",
    "- **训练轮次(epoch)**:训练时遍历数据集的次数。\n",
    "\n",
    "- **批次大小(batch size)**:数据集进行分批读取训练,设定每个批次数据的大小。batch size过小,花费时间多,同时梯度震荡严重,不利于收敛;batch size过大,不同batch的梯度方向没有任何变化,容易陷入局部极小值,因此需要选择合适的batch size,可以有效提高模型精度、全局收敛。\n",
    "\n",
    "- **学习率(learning rate)**:如果学习率偏小,会导致收敛的速度变慢,如果学习率偏大,则可能会导致训练不收敛等不可预测的结果。梯度下降法被广泛应用在最小化模型误差的参数优化算法上。梯度下降法通过多次迭代,并在每一步中最小化损失函数来预估模型的参数。学习率就是在迭代过程中,会控制模型的学习进度。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "epochs = 3\n",
    "batch_size = 64\n",
    "learning_rate = 1e-2"
   ]
  },
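  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "To make the update rule concrete, here is a minimal NumPy sketch of a single mini-batch SGD step on a toy 1-D linear model. The model, data, and squared-error loss are illustrative assumptions, not part of the tutorial's network:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Toy model y = w * x with squared-error loss l(x, w) = (w*x - y)^2 / 2.\n",
    "rng = np.random.default_rng(0)\n",
    "x = rng.normal(size=4)   # one mini-batch B with n = 4 samples\n",
    "y = 3.0 * x              # targets generated by the true weight 3\n",
    "\n",
    "w, eta = 0.0, 1e-2       # current weight w_t and learning rate\n",
    "\n",
    "# Per-sample gradient dl/dw = (w*x - y) * x, averaged over the batch:\n",
    "# w_{t+1} = w_t - eta * (1/n) * sum over B of grad l(x, w_t)\n",
    "w_next = w - eta * ((w * x - y) * x).mean()\n",
    "print(f\"w_t = {w}, w_t+1 = {w_next:.6f}\")"
   ]
  },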
  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### 损失函数\n",
    "\n",
    "损失函数(loss function)用于评估模型的预测值(logits)和目标值(targets)之间的误差。训练模型时,随机初始化的神经网络模型开始时会预测出错误的结果。损失函数会评估预测结果与目标值的相异程度,模型训练的目标即为降低损失函数求得的误差。\n",
    "\n",
    "常见的损失函数包括用于回归任务的`nn.MSELoss`(均方误差)和用于分类的`nn.NLLLoss`(负对数似然)等。 `nn.CrossEntropyLoss` 结合了`nn.LogSoftmax`和`nn.NLLLoss`,可以对logits 进行归一化并计算预测误差。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "loss_fn = nn.CrossEntropyLoss()"
   ]
  },
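  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "The loss function can be called directly on a dummy batch of logits and labels to see what it computes; the shapes and values below are illustrative assumptions (3 samples, 10 classes):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Raw (unnormalized) logits for 3 samples over 10 classes, plus class indices.\n",
    "# CrossEntropyLoss applies LogSoftmax internally, so no softmax is needed here.\n",
    "dummy_logits = mindspore.Tensor(np.random.randn(3, 10).astype(np.float32))\n",
    "dummy_labels = mindspore.Tensor(np.array([1, 0, 4]), mindspore.int32)\n",
    "print(loss_fn(dummy_logits, dummy_labels))"
   ]
  },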
  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### 优化器\n",
    "\n",
    "模型优化(Optimization)是在每个训练步骤中调整模型参数以减少模型误差的过程。MindSpore提供多种优化算法的实现,称之为优化器(Optimizer)。优化器内部定义了模型的参数优化过程(即梯度如何更新至模型参数),所有优化逻辑都封装在优化器对象中。在这里,我们使用SGD(Stochastic Gradient Descent)优化器。\n",
    "\n",
    "我们通过`model.trainable_params()`方法获得模型的可训练参数,并传入学习率超参来初始化优化器。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "optimizer = nn.SGD(model.trainable_params(), learning_rate=learning_rate)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "> 在训练过程中,通过微分函数可计算获得参数对应的梯度,将其传入优化器中即可实现参数优化,具体形态如下:\n",
    ">\n",
    "> grads = grad_fn(inputs)\n",
    ">\n",
    "> optimizer(grads)\n"
   ]
  },
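  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "A minimal sketch of this pattern on a toy single-parameter function; the parameter `w_toy`, the function, and the learning rate here are illustrative assumptions, separate from the tutorial's model and optimizer:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from mindspore import Parameter, Tensor\n",
    "\n",
    "# A single trainable parameter and a scalar function of it.\n",
    "w_toy = Parameter(Tensor(2.0, mindspore.float32), name='w_toy')\n",
    "\n",
    "def toy_fn(x):\n",
    "    return (w_toy * x) ** 2\n",
    "\n",
    "# Differentiate w.r.t. the weight only (grad_position=None).\n",
    "toy_grad_fn = mindspore.value_and_grad(toy_fn, None, [w_toy])\n",
    "toy_opt = nn.SGD([w_toy], learning_rate=0.1)\n",
    "\n",
    "value, grads = toy_grad_fn(Tensor(1.0, mindspore.float32))\n",
    "toy_opt(grads)  # apply the gradients to w_toy in place\n",
    "print(value, w_toy.asnumpy())"
   ]
  },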
  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 训练与评估\n",
    "\n",
    "设置了超参、损失函数和优化器后,我们就可以循环输入数据来训练模型。一次数据集的完整迭代循环称为一轮(epoch)。每轮执行训练时包括两个步骤:\n",
    "\n",
    "1. 训练:迭代训练数据集,并尝试收敛到最佳参数。\n",
    "2. 验证/测试:迭代测试数据集,以检查模型性能是否提升。\n",
    "\n",
    "接下来我们定义用于训练的`train_loop`函数和用于测试的`test_loop`函数。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "使用函数式自动微分,需先定义正向函数`forward_fn`,使用[value_and_grad](https://www.mindspore.cn/docs/zh-CN/r2.4.10/api_python/mindspore/mindspore.value_and_grad.html)获得微分函数`grad_fn`。然后,我们将微分函数和优化器的执行封装为`train_step`函数,接下来循环迭代数据集进行训练即可。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "# Define forward function\n",
    "def forward_fn(data, label):\n",
    "    logits = model(data)\n",
    "    loss = loss_fn(logits, label)\n",
    "    return loss, logits\n",
    "\n",
    "# Get gradient function\n",
    "grad_fn = mindspore.value_and_grad(forward_fn, None, optimizer.parameters, has_aux=True)\n",
    "\n",
    "# Define function of one-step training\n",
    "def train_step(data, label):\n",
    "    (loss, _), grads = grad_fn(data, label)\n",
    "    optimizer(grads)\n",
    "    return loss\n",
    "\n",
    "def train_loop(model, dataset):\n",
    "    size = dataset.get_dataset_size()\n",
    "    model.set_train()\n",
    "    for batch, (data, label) in enumerate(dataset.create_tuple_iterator()):\n",
    "        loss = train_step(data, label)\n",
    "\n",
    "        if batch % 100 == 0:\n",
    "            loss, current = loss.asnumpy(), batch\n",
    "            print(f\"loss: {loss:>7f}  [{current:>3d}/{size:>3d}]\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "`test_loop`函数同样需循环遍历数据集,调用模型计算loss和Accuray并返回最终结果。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "def test_loop(model, dataset, loss_fn):\n",
    "    num_batches = dataset.get_dataset_size()\n",
    "    model.set_train(False)\n",
    "    total, test_loss, correct = 0, 0, 0\n",
    "    for data, label in dataset.create_tuple_iterator():\n",
    "        pred = model(data)\n",
    "        total += len(data)\n",
    "        test_loss += loss_fn(pred, label).asnumpy()\n",
    "        correct += (pred.argmax(1) == label).asnumpy().sum()\n",
    "    test_loss /= num_batches\n",
    "    correct /= total\n",
    "    print(f\"Test: \\n Accuracy: {(100*correct):>0.1f}%, Avg loss: {test_loss:>8f} \\n\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "我们将实例化的损失函数和优化器传入`train_loop`和`test_loop`中。训练3轮并输出loss和Accuracy,查看性能变化。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1\n",
      "-------------------------------\n",
      "loss: 2.302806  [  0/938]\n",
      "loss: 2.285086  [100/938]\n",
      "loss: 2.264712  [200/938]\n",
      "loss: 2.174010  [300/938]\n",
      "loss: 1.931853  [400/938]\n",
      "loss: 1.340721  [500/938]\n",
      "loss: 0.953515  [600/938]\n",
      "loss: 0.756860  [700/938]\n",
      "loss: 0.756263  [800/938]\n",
      "loss: 0.463846  [900/938]\n",
      "Test: \n",
      " Accuracy: 84.7%, Avg loss: 0.527155 \n",
      "\n",
      "Epoch 2\n",
      "-------------------------------\n",
      "loss: 0.479126  [  0/938]\n",
      "loss: 0.437443  [100/938]\n",
      "loss: 0.685504  [200/938]\n",
      "loss: 0.395121  [300/938]\n",
      "loss: 0.550566  [400/938]\n",
      "loss: 0.459457  [500/938]\n",
      "loss: 0.293049  [600/938]\n",
      "loss: 0.422102  [700/938]\n",
      "loss: 0.333153  [800/938]\n",
      "loss: 0.412182  [900/938]\n",
      "Test: \n",
      " Accuracy: 90.5%, Avg loss: 0.335083 \n",
      "\n",
      "Epoch 3\n",
      "-------------------------------\n",
      "loss: 0.207366  [  0/938]\n",
      "loss: 0.343559  [100/938]\n",
      "loss: 0.391145  [200/938]\n",
      "loss: 0.317566  [300/938]\n",
      "loss: 0.200746  [400/938]\n",
      "loss: 0.445798  [500/938]\n",
      "loss: 0.603720  [600/938]\n",
      "loss: 0.170811  [700/938]\n",
      "loss: 0.411954  [800/938]\n",
      "loss: 0.315902  [900/938]\n",
      "Test: \n",
      " Accuracy: 91.9%, Avg loss: 0.279034 \n",
      "\n",
      "Done!\n"
     ]
    }
   ],
   "source": [
    "loss_fn = nn.CrossEntropyLoss()\n",
    "optimizer = nn.SGD(model.trainable_params(), learning_rate=learning_rate)\n",
    "\n",
    "for t in range(epochs):\n",
    "    print(f\"Epoch {t+1}\\n-------------------------------\")\n",
    "    train_loop(model, train_dataset)\n",
    "    test_loop(model, test_dataset, loss_fn)\n",
    "print(\"Done!\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "MindSpore",
   "language": "python",
   "name": "mindspore"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.5 (default, Oct 25 2019, 15:51:11) \n[GCC 7.3.0]"
  },
  "vscode": {
   "interpreter": {
    "hash": "8c9da313289c39257cb28b126d2dadd33153d4da4d524f730c81a4aaccbd2ca7"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}