{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "# Vision Transformer图像分类\n",
    "\n",
    "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.3.q1/tutorials/application/zh_cn/cv/mindspore_vit.ipynb) [![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_download_code.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.3.q1/tutorials/application/zh_cn/cv/mindspore_vit.py) [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.3.q1/tutorials/application/source_zh_cn/cv/vit.ipynb)\n",
    "\n",
    "感谢[ZOMI酱](https://gitee.com/sanjaychan)对本文的贡献。\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## Vision Transformer(ViT)简介\n",
    "\n",
    "近些年,随着基于自注意(Self-Attention)结构的模型的发展,特别是Transformer模型的提出,极大地促进了自然语言处理模型的发展。由于Transformer的计算效率和可扩展性,它已经能够训练具有超过100B参数的空前规模的模型。\n",
    "\n",
    "ViT则是自然语言处理和计算机视觉两个领域的融合结晶。在不依赖卷积操作的情况下,依然可以在图像分类任务上达到很好的效果。\n",
    "\n",
    "### 模型结构\n",
    "\n",
    "ViT模型的主体结构是基于Transformer模型的Encoder部分(部分结构顺序有调整,如:Normalization的位置与标准Transformer不同),其结构图[1]如下:\n",
    "\n",
    "![vit-architecture](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/tutorials/application/source_zh_cn/cv/images/vit_architecture.png)\n",
    "\n",
    "### 模型特点\n",
    "\n",
    "ViT模型主要应用于图像分类领域。因此,其模型结构相较于传统的Transformer有以下几个特点:\n",
    "\n",
    "1. 数据集的原图像被划分为多个patch(图像块)后,将二维patch(不考虑channel)转换为一维向量,再加上类别向量与位置向量作为模型输入。\n",
    "2. 模型主体的Block结构是基于Transformer的Encoder结构,但是调整了Normalization的位置,其中,最主要的结构依然是Multi-head Attention结构。\n",
    "3. 模型在Blocks堆叠后接全连接层,接受类别向量的输出作为输入并用于分类。通常情况下,我们将最后的全连接层称为Head,Transformer Encoder部分为backbone。\n",
    "\n",
    "下面将通过代码实例来详细解释基于ViT实现ImageNet分类任务。\n",
    "\n",
    "> 注意,本教程在CPU上运行时间过长,不建议使用CPU运行。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 环境准备与数据读取\n",
    "\n",
    "开始实验之前,请确保本地已经安装了Python环境并安装了MindSpore。\n",
    "\n",
    "首先我们需要下载本案例的数据集,可通过<http://image-net.org>下载完整的ImageNet数据集,本案例应用的数据集是从ImageNet中筛选出来的子集。\n",
    "\n",
    "运行第一段代码时会自动下载并解压,请确保你的数据集路径如以下结构。\n",
    "\n",
    "```text\n",
    ".dataset/\n",
    "    ├── ILSVRC2012_devkit_t12.tar.gz\n",
    "    ├── train/\n",
    "    ├── infer/\n",
    "    └── val/\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading data from https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/vit_imagenet_dataset.zip (489.1 MB)\n",
      "\n",
      "file_sizes: 100%|████████████████████████████| 513M/513M [00:09<00:00, 52.3MB/s]\n",
      "Extracting zip file...\n",
      "Successfully downloaded / unzipped to ./\n"
     ]
    }
   ],
   "source": [
    "from download import download\n",
    "\n",
    "dataset_url = \"https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/datasets/vit_imagenet_dataset.zip\"\n",
    "path = \"./\"\n",
    "\n",
    "path = download(dataset_url, path, kind=\"zip\", replace=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "import os\n",
    "\n",
    "import mindspore as ms\n",
    "from mindspore.dataset import ImageFolderDataset\n",
    "import mindspore.dataset.vision as transforms\n",
    "\n",
    "\n",
    "data_path = './dataset/'\n",
    "mean = [0.485 * 255, 0.456 * 255, 0.406 * 255]\n",
    "std = [0.229 * 255, 0.224 * 255, 0.225 * 255]\n",
    "\n",
    "dataset_train = ImageFolderDataset(os.path.join(data_path, \"train\"), shuffle=True)\n",
    "\n",
    "trans_train = [\n",
    "    transforms.RandomCropDecodeResize(size=224,\n",
    "                                      scale=(0.08, 1.0),\n",
    "                                      ratio=(0.75, 1.333)),\n",
    "    transforms.RandomHorizontalFlip(prob=0.5),\n",
    "    transforms.Normalize(mean=mean, std=std),\n",
    "    transforms.HWC2CHW()\n",
    "]\n",
    "\n",
    "dataset_train = dataset_train.map(operations=trans_train, input_columns=[\"image\"])\n",
    "dataset_train = dataset_train.batch(batch_size=16, drop_remainder=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 模型解析\n",
    "\n",
    "下面将通过代码来细致剖析ViT模型的内部结构。\n",
    "\n",
    "### Transformer基本原理\n",
    "\n",
    "Transformer模型源于2017年的一篇文章[2]。在这篇文章中提出的基于Attention机制的编码器-解码器型结构在自然语言处理领域获得了巨大的成功。模型结构如下图所示:\n",
    "\n",
    "![transformer-architecture](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/tutorials/application/source_zh_cn/cv/images/transformer_architecture.png)\n",
    "\n",
    "其主要结构为多个Encoder和Decoder模块所组成,其中Encoder和Decoder的详细结构如下图[2]所示:\n",
    "\n",
    "![encoder-decoder](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/tutorials/application/source_zh_cn/cv/images/encoder_decoder.png)\n",
    "\n",
    "Encoder与Decoder由许多结构组成,如:多头注意力(Multi-Head Attention)层,Feed Forward层,Normaliztion层,甚至残差连接(Residual Connection,图中的“Add”)。不过,其中最重要的结构是多头注意力(Multi-Head Attention)结构,该结构基于自注意力(Self-Attention)机制,是多个Self-Attention的并行组成。\n",
    "\n",
    "所以,理解了Self-Attention就抓住了Transformer的核心。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "#### Attention模块\n",
    "\n",
    "以下是Self-Attention的解释,其核心内容是为输入向量的每个单词学习一个权重。通过给定一个任务相关的查询向量Query向量,计算Query和各个Key的相似性或者相关性得到注意力分布,即得到每个Key对应Value的权重系数,然后对Value进行加权求和得到最终的Attention数值。\n",
    "\n",
    "在Self-Attention中:\n",
    "\n",
    "1. 最初的输入向量首先会经过Embedding层映射成Q(Query),K(Key),V(Value)三个向量,由于是并行操作,所以代码中是映射成为dim x 3的向量然后进行分割,换言之,如果你的输入向量为一个向量序列($x_1$,$x_2$,$x_3$),其中的$x_1$,$x_2$,$x_3$都是一维向量,那么每一个一维向量都会经过Embedding层映射出Q,K,V三个向量,只是Embedding矩阵不同,矩阵参数也是通过学习得到的。**这里大家可以认为,Q,K,V三个矩阵是发现向量之间关联信息的一种手段,需要经过学习得到,至于为什么是Q,K,V三个,主要是因为需要两个向量点乘以获得权重,又需要另一个向量来承载权重向加的结果,所以,最少需要3个矩阵。**\n",
    "\n",
    "$$\n",
    "\\begin{cases}\n",
    "q_i = W_q \\cdot x_i & \\\\\n",
    "k_i = W_k \\cdot x_i,\\hspace{1em} &i = 1,2,3 \\ldots \\\\\n",
    "v_i = W_v \\cdot x_i &\n",
    "\\end{cases}\n",
    "\\tag{1}\n",
    "$$\n",
    "\n",
    "![self-attention1](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/tutorials/application/source_zh_cn/cv/images/self_attention_1.png)\n",
    "\n",
    "2. 自注意力机制的自注意主要体现在它的Q,K,V都来源于其自身,也就是该过程是在提取输入的不同顺序的向量的联系与特征,最终通过不同顺序向量之间的联系紧密性(Q与K乘积经过Softmax的结果)来表现出来。**Q,K,V得到后就需要获取向量间权重,需要对Q和K进行点乘并除以维度的平方根,对所有向量的结果进行Softmax处理,通过公式(2)的操作,我们获得了向量之间的关系权重。**\n",
    "\n",
    "$$\n",
    "\\begin{cases}\n",
    "a_{1,1} = q_1 \\cdot k_1 / \\sqrt d \\\\\n",
    "a_{1,2} = q_1 \\cdot k_2 / \\sqrt d \\\\\n",
    "a_{1,3} = q_1 \\cdot k_3 / \\sqrt d\n",
    "\\end{cases}\n",
    "\\tag{2}\n",
    "$$\n",
    "\n",
    "![self-attention3](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/tutorials/application/source_zh_cn/cv/images/self_attention_3.png)\n",
    "\n",
    "$$ Softmax: \\hat a_{1,i} = exp(a_{1,i}) / \\sum_j exp(a_{1,j}),\\hspace{1em} j = 1,2,3 \\ldots \\tag{3}$$\n",
    "\n",
    "![self-attention2](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/tutorials/application/source_zh_cn/cv/images/self_attention_2.png)\n",
    "\n",
    "3. 其最终输出则是通过V这个映射后的向量与Q,K经过Softmax结果进行weight sum获得,这个过程可以理解为在全局上进行自注意表示。**每一组Q,K,V最后都有一个V输出,这是Self-Attention得到的最终结果,是当前向量在结合了它与其他向量关联权重后得到的结果。**\n",
    "\n",
    "$$\n",
    "b_1 = \\sum_i \\hat a_{1,i}v_i,\\hspace{1em} i = 1,2,3...\n",
    "\\tag{4}\n",
    "$$\n",
    "\n",
    "通过下图可以整体把握Self-Attention的全部过程。\n",
    "\n",
    "![self-attention](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/tutorials/application/source_zh_cn/cv/images/self_attention_process.png)\n",
    "\n",
    "多头注意力机制就是将原本self-Attention处理的向量分割为多个Head进行处理,这一点也可以从代码中体现,这也是attention结构可以进行并行加速的一个方面。\n",
    "\n",
    "总结来说,多头注意力机制在保持参数总量不变的情况下,将同样的query, key和value映射到原来的高维空间(Q,K,V)的不同子空间(Q_0,K_0,V_0)中进行自注意力的计算,最后再合并不同子空间中的注意力信息。\n",
    "\n",
    "所以,对于同一个输入向量,多个注意力机制可以同时对其进行处理,即利用并行计算加速处理过程,又在处理的时候更充分的分析和利用了向量特征。下图展示了多头注意力机制,其并行能力的主要体现在下图中的$a_1$和$a_2$是同一个向量进行分割获得的。\n",
    "\n",
    "![multi-head-attention](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/tutorials/application/source_zh_cn/cv/images/multi_head_attention.png)\n",
    "\n",
    "以下是Multi-Head Attention代码,结合上文的解释,代码清晰的展现了这一过程。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from mindspore import nn, ops\n",
    "\n",
    "\n",
    "class Attention(nn.Cell):\n",
    "    def __init__(self,\n",
    "                 dim: int,\n",
    "                 num_heads: int = 8,\n",
    "                 keep_prob: float = 1.0,\n",
    "                 attention_keep_prob: float = 1.0):\n",
    "        super(Attention, self).__init__()\n",
    "\n",
    "        self.num_heads = num_heads\n",
    "        head_dim = dim // num_heads\n",
    "        self.scale = ms.Tensor(head_dim ** -0.5)\n",
    "\n",
    "        self.qkv = nn.Dense(dim, dim * 3)\n",
    "        self.attn_drop = nn.Dropout(p=1.0-attention_keep_prob)\n",
    "        self.out = nn.Dense(dim, dim)\n",
    "        self.out_drop = nn.Dropout(p=1.0-keep_prob)\n",
    "        self.attn_matmul_v = ops.BatchMatMul()\n",
    "        self.q_matmul_k = ops.BatchMatMul(transpose_b=True)\n",
    "        self.softmax = nn.Softmax(axis=-1)\n",
    "\n",
    "    def construct(self, x):\n",
    "        \"\"\"Attention construct.\"\"\"\n",
    "        b, n, c = x.shape\n",
    "        qkv = self.qkv(x)\n",
    "        qkv = ops.reshape(qkv, (b, n, 3, self.num_heads, c // self.num_heads))\n",
    "        qkv = ops.transpose(qkv, (2, 0, 3, 1, 4))\n",
    "        q, k, v = ops.unstack(qkv, axis=0)\n",
    "        attn = self.q_matmul_k(q, k)\n",
    "        attn = ops.mul(attn, self.scale)\n",
    "        attn = self.softmax(attn)\n",
    "        attn = self.attn_drop(attn)\n",
    "        out = self.attn_matmul_v(attn, v)\n",
    "        out = ops.transpose(out, (0, 2, 1, 3))\n",
    "        out = ops.reshape(out, (b, n, c))\n",
    "        out = self.out(out)\n",
    "        out = self.out_drop(out)\n",
    "\n",
    "        return out"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### Transformer Encoder\n",
    "\n",
    "在了解了Self-Attention结构之后,通过与Feed Forward,Residual Connection等结构的拼接就可以形成Transformer的基础结构,下面代码实现了Feed Forward,Residual Connection结构。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from typing import Optional, Dict\n",
    "\n",
    "\n",
    "class FeedForward(nn.Cell):\n",
    "    def __init__(self,\n",
    "                 in_features: int,\n",
    "                 hidden_features: Optional[int] = None,\n",
    "                 out_features: Optional[int] = None,\n",
    "                 activation: nn.Cell = nn.GELU,\n",
    "                 keep_prob: float = 1.0):\n",
    "        super(FeedForward, self).__init__()\n",
    "        out_features = out_features or in_features\n",
    "        hidden_features = hidden_features or in_features\n",
    "        self.dense1 = nn.Dense(in_features, hidden_features)\n",
    "        self.activation = activation()\n",
    "        self.dense2 = nn.Dense(hidden_features, out_features)\n",
    "        self.dropout = nn.Dropout(p=1.0-keep_prob)\n",
    "\n",
    "    def construct(self, x):\n",
    "        \"\"\"Feed Forward construct.\"\"\"\n",
    "        x = self.dense1(x)\n",
    "        x = self.activation(x)\n",
    "        x = self.dropout(x)\n",
    "        x = self.dense2(x)\n",
    "        x = self.dropout(x)\n",
    "\n",
    "        return x\n",
    "\n",
    "\n",
    "class ResidualCell(nn.Cell):\n",
    "    def __init__(self, cell):\n",
    "        super(ResidualCell, self).__init__()\n",
    "        self.cell = cell\n",
    "\n",
    "    def construct(self, x):\n",
    "        \"\"\"ResidualCell construct.\"\"\"\n",
    "        return self.cell(x) + x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "接下来就利用Self-Attention来构建ViT模型中的TransformerEncoder部分,类似于构建了一个Transformer的编码器部分,如下图[1]所示:\n",
    "\n",
    "![vit-encoder](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/tutorials/application/source_zh_cn/cv/images/vit_encoder.png)\n",
    "\n",
    "1. ViT模型中的基础结构与标准Transformer有所不同,主要在于Normalization的位置是放在Self-Attention和Feed Forward之前,其他结构如Residual Connection,Feed Forward,Normalization都如Transformer中所设计。\n",
    "\n",
    "2. 从Transformer结构的图片可以发现,多个子encoder的堆叠就完成了模型编码器的构建,在ViT模型中,依然沿用这个思路,通过配置超参数num_layers,就可以确定堆叠层数。\n",
    "\n",
    "3. Residual Connection,Normalization的结构可以保证模型有很强的扩展性(保证信息经过深层处理不会出现退化的现象,这是Residual Connection的作用),Normalization和dropout的应用可以增强模型泛化能力。\n",
    "\n",
    "从以下源码中就可以清晰看到Transformer的结构。将TransformerEncoder结构和一个多层感知器(MLP)结合,就构成了ViT模型的backbone部分。\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "class TransformerEncoder(nn.Cell):\n",
    "    def __init__(self,\n",
    "                 dim: int,\n",
    "                 num_layers: int,\n",
    "                 num_heads: int,\n",
    "                 mlp_dim: int,\n",
    "                 keep_prob: float = 1.,\n",
    "                 attention_keep_prob: float = 1.0,\n",
    "                 drop_path_keep_prob: float = 1.0,\n",
    "                 activation: nn.Cell = nn.GELU,\n",
    "                 norm: nn.Cell = nn.LayerNorm):\n",
    "        super(TransformerEncoder, self).__init__()\n",
    "        layers = []\n",
    "\n",
    "        for _ in range(num_layers):\n",
    "            normalization1 = norm((dim,))\n",
    "            normalization2 = norm((dim,))\n",
    "            attention = Attention(dim=dim,\n",
    "                                  num_heads=num_heads,\n",
    "                                  keep_prob=keep_prob,\n",
    "                                  attention_keep_prob=attention_keep_prob)\n",
    "\n",
    "            feedforward = FeedForward(in_features=dim,\n",
    "                                      hidden_features=mlp_dim,\n",
    "                                      activation=activation,\n",
    "                                      keep_prob=keep_prob)\n",
    "\n",
    "            layers.append(\n",
    "                nn.SequentialCell([\n",
    "                    ResidualCell(nn.SequentialCell([normalization1, attention])),\n",
    "                    ResidualCell(nn.SequentialCell([normalization2, feedforward]))\n",
    "                ])\n",
    "            )\n",
    "        self.layers = nn.SequentialCell(layers)\n",
    "\n",
    "    def construct(self, x):\n",
    "        \"\"\"Transformer construct.\"\"\"\n",
    "        return self.layers(x)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### ViT模型的输入\n",
    "\n",
    "传统的Transformer结构主要用于处理自然语言领域的词向量(Word Embedding or Word Vector),词向量与传统图像数据的主要区别在于,词向量通常是一维向量进行堆叠,而图片则是二维矩阵的堆叠,多头注意力机制在处理一维词向量的堆叠时会提取词向量之间的联系也就是上下文语义,这使得Transformer在自然语言处理领域非常好用,而二维图片矩阵如何与一维词向量进行转化就成为了Transformer进军图像处理领域的一个小门槛。\n",
    "\n",
    "在ViT模型中:\n",
    "\n",
    "1. 通过将输入图像在每个channel上划分为16 x 16个patch,这一步是通过卷积操作来完成的,当然也可以人工进行划分,但卷积操作也可以达到目的同时还可以进行一次额外的数据处理;**例如一幅输入224 x 224的图像,首先经过卷积处理得到16 x 16个patch,那么每一个patch的大小就是14 x 14。**\n",
    "\n",
    "2. 再将每一个patch的矩阵拉伸成为一个一维向量,从而获得了近似词向量堆叠的效果。**上一步得到的14 x 14的patch就转换为长度为196的向量。**\n",
    "\n",
    "这是图像输入网络经过的第一步处理。具体Patch Embedding的代码如下所示:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "class PatchEmbedding(nn.Cell):\n",
    "    MIN_NUM_PATCHES = 4\n",
    "\n",
    "    def __init__(self,\n",
    "                 image_size: int = 224,\n",
    "                 patch_size: int = 16,\n",
    "                 embed_dim: int = 768,\n",
    "                 input_channels: int = 3):\n",
    "        super(PatchEmbedding, self).__init__()\n",
    "\n",
    "        self.image_size = image_size\n",
    "        self.patch_size = patch_size\n",
    "        self.num_patches = (image_size // patch_size) ** 2\n",
    "        self.conv = nn.Conv2d(input_channels, embed_dim, kernel_size=patch_size, stride=patch_size, has_bias=True)\n",
    "\n",
    "    def construct(self, x):\n",
    "        \"\"\"Path Embedding construct.\"\"\"\n",
    "        x = self.conv(x)\n",
    "        b, c, h, w = x.shape\n",
    "        x = ops.reshape(x, (b, c, h * w))\n",
    "        x = ops.transpose(x, (0, 2, 1))\n",
    "\n",
    "        return x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "输入图像在划分为patch之后,会经过pos_embedding 和 class_embedding两个过程。\n",
    "\n",
    "1. class_embedding主要借鉴了BERT模型的用于文本分类时的思想,在每一个word vector之前增加一个类别值,通常是加在向量的第一位,**上一步得到的196维的向量加上class_embedding后变为197维。**\n",
    "\n",
    "2. 增加的class_embedding是一个可以学习的参数,经过网络的不断训练,最终以输出向量的第一个维度的输出来决定最后的输出类别;**由于输入是16 x 16个patch,所以输出进行分类时是取 16 x 16个class_embedding进行分类。**\n",
    "\n",
    "3. pos_embedding也是一组可以学习的参数,会被加入到经过处理的patch矩阵中。\n",
    "\n",
    "4. 由于pos_embedding也是可以学习的参数,所以它的加入类似于全链接网络和卷积的bias。**这一步就是创造一个长度维197的可训练向量加入到经过class_embedding的向量中。**\n",
    "\n",
    "实际上,pos_embedding总共有4种方案。但是经过作者的论证,只有加上pos_embedding和不加pos_embedding有明显影响,至于pos_embedding是一维还是二维对分类结果影响不大,所以,在我们的代码中,也是采用了一维的pos_embedding,由于class_embedding是加在pos_embedding之前,所以pos_embedding的维度会比patch拉伸后的维度加1。\n",
    "\n",
    "总的而言,ViT模型还是利用了Transformer模型在处理上下文语义时的优势,将图像转换为一种“变种词向量”然后进行处理,而这样转换的意义在于,多个patch之间本身具有空间联系,这类似于一种“空间语义”,从而获得了比较好的处理效果。\n",
    "\n",
    "### 整体构建ViT\n",
    "\n",
    "以下代码构建了一个完整的ViT模型。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "from mindspore.common.initializer import Normal\n",
    "from mindspore.common.initializer import initializer\n",
    "from mindspore import Parameter\n",
    "\n",
    "\n",
    "def init(init_type, shape, dtype, name, requires_grad):\n",
    "    \"\"\"Init.\"\"\"\n",
    "    initial = initializer(init_type, shape, dtype).init_data()\n",
    "    return Parameter(initial, name=name, requires_grad=requires_grad)\n",
    "\n",
    "\n",
    "class ViT(nn.Cell):\n",
    "    def __init__(self,\n",
    "                 image_size: int = 224,\n",
    "                 input_channels: int = 3,\n",
    "                 patch_size: int = 16,\n",
    "                 embed_dim: int = 768,\n",
    "                 num_layers: int = 12,\n",
    "                 num_heads: int = 12,\n",
    "                 mlp_dim: int = 3072,\n",
    "                 keep_prob: float = 1.0,\n",
    "                 attention_keep_prob: float = 1.0,\n",
    "                 drop_path_keep_prob: float = 1.0,\n",
    "                 activation: nn.Cell = nn.GELU,\n",
    "                 norm: Optional[nn.Cell] = nn.LayerNorm,\n",
    "                 pool: str = 'cls') -> None:\n",
    "        super(ViT, self).__init__()\n",
    "\n",
    "        self.patch_embedding = PatchEmbedding(image_size=image_size,\n",
    "                                              patch_size=patch_size,\n",
    "                                              embed_dim=embed_dim,\n",
    "                                              input_channels=input_channels)\n",
    "        num_patches = self.patch_embedding.num_patches\n",
    "\n",
    "        self.cls_token = init(init_type=Normal(sigma=1.0),\n",
    "                              shape=(1, 1, embed_dim),\n",
    "                              dtype=ms.float32,\n",
    "                              name='cls',\n",
    "                              requires_grad=True)\n",
    "\n",
    "        self.pos_embedding = init(init_type=Normal(sigma=1.0),\n",
    "                                  shape=(1, num_patches + 1, embed_dim),\n",
    "                                  dtype=ms.float32,\n",
    "                                  name='pos_embedding',\n",
    "                                  requires_grad=True)\n",
    "\n",
    "        self.pool = pool\n",
    "        self.pos_dropout = nn.Dropout(p=1.0-keep_prob)\n",
    "        self.norm = norm((embed_dim,))\n",
    "        self.transformer = TransformerEncoder(dim=embed_dim,\n",
    "                                              num_layers=num_layers,\n",
    "                                              num_heads=num_heads,\n",
    "                                              mlp_dim=mlp_dim,\n",
    "                                              keep_prob=keep_prob,\n",
    "                                              attention_keep_prob=attention_keep_prob,\n",
    "                                              drop_path_keep_prob=drop_path_keep_prob,\n",
    "                                              activation=activation,\n",
    "                                              norm=norm)\n",
    "        self.dropout = nn.Dropout(p=1.0-keep_prob)\n",
    "        self.dense = nn.Dense(embed_dim, num_classes)\n",
    "\n",
    "    def construct(self, x):\n",
    "        \"\"\"ViT construct.\"\"\"\n",
    "        x = self.patch_embedding(x)\n",
    "        cls_tokens = ops.tile(self.cls_token.astype(x.dtype), (x.shape[0], 1, 1))\n",
    "        x = ops.concat((cls_tokens, x), axis=1)\n",
    "        x += self.pos_embedding\n",
    "\n",
    "        x = self.pos_dropout(x)\n",
    "        x = self.transformer(x)\n",
    "        x = self.norm(x)\n",
    "        x = x[:, 0]\n",
    "        if self.training:\n",
    "            x = self.dropout(x)\n",
    "        x = self.dense(x)\n",
    "\n",
    "        return x"
   ]
  },
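  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "As a quick sanity check, the model defined above can be instantiated and run on a dummy input. Below is a minimal sketch (it builds the full ViT-B/16, so it needs a fair amount of memory and is not executed here):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import mindspore as ms\n",
    "\n",
    "net = ViT()                                        # defaults: 224 x 224 input, 1000 classes\n",
    "dummy = ms.Tensor(np.ones((1, 3, 224, 224)), ms.float32)\n",
    "logits = net(dummy)\n",
    "print(logits.shape)                                # (1, 1000)\n",
    "```"
   ]
  },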
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "整体流程图如下所示:\n",
    "\n",
    "![data-process](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/tutorials/application/source_zh_cn/cv/images/data_process.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 模型训练与推理\n",
    "\n",
    "### 模型训练\n",
    "\n",
    "模型开始训练前,需要设定损失函数,优化器,回调函数等。\n",
    "\n",
    "完整训练ViT模型需要很长的时间,实际应用时建议根据项目需要调整epoch_size,当正常输出每个Epoch的step信息时,意味着训练正在进行,通过模型输出可以查看当前训练的loss值和时间等指标。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading data from https://download.mindspore.cn/vision/classification/vit_b_16_224.ckpt (330.2 MB)\n",
      "\n",
      "file_sizes: 100%|████████████████████████████| 346M/346M [00:05<00:00, 59.5MB/s]\n",
      "Successfully downloaded file to ./ckpt/vit_b_16_224.ckpt\n",
      "epoch: 1 step: 125, loss is 1.903618335723877\n",
      "Train epoch time: 99857.517 ms, per step time: 798.860 ms\n",
      "epoch: 2 step: 125, loss is 1.448015570640564\n",
      "Train epoch time: 95555.111 ms, per step time: 764.441 ms\n",
      "epoch: 3 step: 125, loss is 1.2555729150772095\n",
      "Train epoch time: 95553.210 ms, per step time: 764.426 ms\n",
      "epoch: 4 step: 125, loss is 1.3787992000579834\n",
      "Train epoch time: 95569.835 ms, per step time: 764.559 ms\n",
      "epoch: 5 step: 125, loss is 1.7505667209625244\n",
      "Train epoch time: 95463.133 ms, per step time: 763.705 ms\n",
      "epoch: 6 step: 125, loss is 2.5462236404418945\n",
      "Train epoch time: 95428.906 ms, per step time: 763.431 ms\n",
      "epoch: 7 step: 125, loss is 1.5103509426116943\n",
      "Train epoch time: 95411.338 ms, per step time: 763.291 ms\n",
      "epoch: 8 step: 125, loss is 1.4719784259796143\n",
      "Train epoch time: 95644.054 ms, per step time: 765.152 ms\n",
      "epoch: 9 step: 125, loss is 1.2415032386779785\n",
      "Train epoch time: 95511.758 ms, per step time: 764.094 ms\n",
      "epoch: 10 step: 125, loss is 1.098097562789917\n",
      "Train epoch time: 95270.282 ms, per step time: 762.162 ms\n"
     ]
    }
   ],
   "source": [
    "from mindspore.nn import LossBase\n",
    "from mindspore.train import LossMonitor, TimeMonitor, CheckpointConfig, ModelCheckpoint\n",
    "from mindspore import train\n",
    "\n",
    "# define super parameter\n",
    "epoch_size = 10\n",
    "momentum = 0.9\n",
    "num_classes = 1000\n",
    "resize = 224\n",
    "step_size = dataset_train.get_dataset_size()\n",
    "\n",
    "# construct model\n",
    "network = ViT()\n",
    "\n",
    "# load ckpt\n",
    "vit_url = \"https://download.mindspore.cn/vision/classification/vit_b_16_224.ckpt\"\n",
    "path = \"./ckpt/vit_b_16_224.ckpt\"\n",
    "\n",
    "vit_path = download(vit_url, path, replace=True)\n",
    "param_dict = ms.load_checkpoint(vit_path)\n",
    "ms.load_param_into_net(network, param_dict)\n",
    "\n",
    "# define learning rate\n",
    "lr = nn.cosine_decay_lr(min_lr=float(0),\n",
    "                        max_lr=0.00005,\n",
    "                        total_step=epoch_size * step_size,\n",
    "                        step_per_epoch=step_size,\n",
    "                        decay_epoch=10)\n",
    "\n",
    "# define optimizer\n",
    "network_opt = nn.Adam(network.trainable_params(), lr, momentum)\n",
    "\n",
    "\n",
    "# define loss function\n",
    "class CrossEntropySmooth(LossBase):\n",
    "    \"\"\"CrossEntropy.\"\"\"\n",
    "\n",
    "    def __init__(self, sparse=True, reduction='mean', smooth_factor=0., num_classes=1000):\n",
    "        super(CrossEntropySmooth, self).__init__()\n",
    "        self.onehot = ops.OneHot()\n",
    "        self.sparse = sparse\n",
    "        self.on_value = ms.Tensor(1.0 - smooth_factor, ms.float32)\n",
    "        self.off_value = ms.Tensor(1.0 * smooth_factor / (num_classes - 1), ms.float32)\n",
    "        self.ce = nn.SoftmaxCrossEntropyWithLogits(reduction=reduction)\n",
    "\n",
    "    def construct(self, logit, label):\n",
    "        if self.sparse:\n",
    "            label = self.onehot(label, ops.shape(logit)[1], self.on_value, self.off_value)\n",
    "        loss = self.ce(logit, label)\n",
    "        return loss\n",
    "\n",
    "\n",
    "network_loss = CrossEntropySmooth(sparse=True,\n",
    "                                  reduction=\"mean\",\n",
    "                                  smooth_factor=0.1,\n",
    "                                  num_classes=num_classes)\n",
    "\n",
    "# set checkpoint\n",
    "ckpt_config = CheckpointConfig(save_checkpoint_steps=step_size, keep_checkpoint_max=100)\n",
    "ckpt_callback = ModelCheckpoint(prefix='vit_b_16', directory='./ViT', config=ckpt_config)\n",
    "\n",
    "# initialize model\n",
    "# \"Ascend + mixed precision\" can improve performance\n",
    "ascend_target = (ms.get_context(\"device_target\") == \"Ascend\")\n",
    "if ascend_target:\n",
    "    model = train.Model(network, loss_fn=network_loss, optimizer=network_opt, metrics={\"acc\"}, amp_level=\"O2\")\n",
    "else:\n",
    "    model = train.Model(network, loss_fn=network_loss, optimizer=network_opt, metrics={\"acc\"}, amp_level=\"O0\")\n",
    "\n",
    "# train model\n",
    "model.train(epoch_size,\n",
    "            dataset_train,\n",
    "            callbacks=[ckpt_callback, LossMonitor(125), TimeMonitor(125)],\n",
    "            dataset_sink_mode=False,)"
   ]
  },
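  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "A brief note on the CrossEntropySmooth loss defined above: label smoothing softens the one-hot target before the cross entropy is computed. With smooth_factor $\\varepsilon$ and $K$ classes, the target for true class $y$ is\n",
    "\n",
    "$$\n",
    "t_c =\n",
    "\\begin{cases}\n",
    "1 - \\varepsilon, & c = y \\\\\n",
    "\\varepsilon / (K - 1), & c \\neq y\n",
    "\\end{cases}\n",
    "$$\n",
    "\n",
    "which for the values used here ($\\varepsilon = 0.1$, $K = 1000$) gives an on_value of 0.9 and an off_value of about 0.0001, matching the on_value and off_value Tensors in the code."
   ]
  },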
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### 模型验证\n",
    "\n",
    "模型验证过程主要应用了ImageFolderDataset,CrossEntropySmooth和Model等接口。\n",
    "\n",
    "ImageFolderDataset主要用于读取数据集。\n",
    "\n",
    "CrossEntropySmooth是损失函数实例化接口。\n",
    "\n",
    "Model主要用于编译模型。\n",
    "\n",
    "与训练过程相似,首先进行数据增强,然后定义ViT网络结构,加载预训练模型参数。随后设置损失函数,评价指标等,编译模型后进行验证。本案例采用了业界通用的评价标准Top_1_Accuracy和Top_5_Accuracy评价指标来评价模型表现。\n",
    "\n",
    "在本案例中,这两个指标代表了在输出的1000维向量中,以最大值或前5的输出值所代表的类别为预测结果时,模型预测的准确率。这两个指标的值越大,代表模型准确率越高。"
   ]
  },
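  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "As a minimal illustration of what these two metrics measure (toy numbers only, unrelated to the actual evaluation below):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "np.random.seed(0)\n",
    "logits = np.random.randn(4, 1000)                  # 4 toy samples, 1000 classes\n",
    "labels = np.array([3, 7, 0, 42])\n",
    "top5 = np.argsort(logits, axis=1)[:, -5:]          # indices of the 5 largest logits\n",
    "top1_acc = (logits.argmax(axis=1) == labels).mean()\n",
    "top5_acc = np.array([y in row for y, row in zip(labels, top5)]).mean()\n",
    "print(top1_acc, top5_acc)\n",
    "```"
   ]
  },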
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'Top_1_Accuracy': 0.75, 'Top_5_Accuracy': 0.928}\n"
     ]
    }
    ],
   "source": [
    "dataset_val = ImageFolderDataset(os.path.join(data_path, \"val\"), shuffle=True)\n",
    "\n",
    "trans_val = [\n",
    "    transforms.Decode(),\n",
    "    transforms.Resize(224 + 32),\n",
    "    transforms.CenterCrop(224),\n",
    "    transforms.Normalize(mean=mean, std=std),\n",
    "    transforms.HWC2CHW()\n",
    "]\n",
    "\n",
    "dataset_val = dataset_val.map(operations=trans_val, input_columns=[\"image\"])\n",
    "dataset_val = dataset_val.batch(batch_size=16, drop_remainder=True)\n",
    "\n",
    "# construct model\n",
    "network = ViT()\n",
    "\n",
    "# load ckpt\n",
    "param_dict = ms.load_checkpoint(vit_path)\n",
    "ms.load_param_into_net(network, param_dict)\n",
    "\n",
    "network_loss = CrossEntropySmooth(sparse=True,\n",
    "                                  reduction=\"mean\",\n",
    "                                  smooth_factor=0.1,\n",
    "                                  num_classes=num_classes)\n",
    "\n",
    "# define metric\n",
    "eval_metrics = {'Top_1_Accuracy': train.Top1CategoricalAccuracy(),\n",
    "                'Top_5_Accuracy': train.Top5CategoricalAccuracy()}\n",
    "\n",
    "if ascend_target:\n",
    "    model = train.Model(network, loss_fn=network_loss, optimizer=network_opt, metrics=eval_metrics, amp_level=\"O2\")\n",
    "else:\n",
    "    model = train.Model(network, loss_fn=network_loss, optimizer=network_opt, metrics=eval_metrics, amp_level=\"O0\")\n",
    "\n",
    "# evaluate model\n",
    "result = model.eval(dataset_val)\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "从结果可以看出,由于我们加载了预训练模型参数,模型的Top_1_Accuracy和Top_5_Accuracy达到了很高的水平,实际项目中也可以以此准确率为标准。如果未使用预训练模型参数,则需要更多的epoch来训练。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "### 模型推理\n",
    "\n",
    "在进行模型推理之前,首先要定义一个对推理图片进行数据预处理的方法。该方法可以对我们的推理图片进行resize和normalize处理,这样才能与我们训练时的输入数据匹配。\n",
    "\n",
    "本案例采用了一张Doberman的图片作为推理图片来测试模型表现,期望模型可以给出正确的预测结果。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [],
   "source": [
    "dataset_infer = ImageFolderDataset(os.path.join(data_path, \"infer\"), shuffle=True)\n",
    "\n",
    "trans_infer = [\n",
    "    transforms.Decode(),\n",
    "    transforms.Resize([224, 224]),\n",
    "    transforms.Normalize(mean=mean, std=std),\n",
    "    transforms.HWC2CHW()\n",
    "]\n",
    "\n",
    "dataset_infer = dataset_infer.map(operations=trans_infer,\n",
    "                                  input_columns=[\"image\"],\n",
    "                                  num_parallel_workers=1)\n",
    "dataset_infer = dataset_infer.batch(1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "接下来,我们将调用模型的predict方法进行模型推理。\n",
    "\n",
    "在推理过程中,通过index2label就可以获取对应标签,再通过自定义的show_result接口将结果写在对应图片上。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{236: 'Doberman'}\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "import pathlib\n",
    "import cv2\n",
    "import numpy as np\n",
    "from PIL import Image\n",
    "from enum import Enum\n",
    "from scipy import io\n",
    "\n",
    "\n",
    "class Color(Enum):\n",
    "    \"\"\"dedine enum color.\"\"\"\n",
    "    red = (0, 0, 255)\n",
    "    green = (0, 255, 0)\n",
    "    blue = (255, 0, 0)\n",
    "    cyan = (255, 255, 0)\n",
    "    yellow = (0, 255, 255)\n",
    "    magenta = (255, 0, 255)\n",
    "    white = (255, 255, 255)\n",
    "    black = (0, 0, 0)\n",
    "\n",
    "\n",
    "def check_file_exist(file_name: str):\n",
    "    \"\"\"check_file_exist.\"\"\"\n",
    "    if not os.path.isfile(file_name):\n",
    "        raise FileNotFoundError(f\"File `{file_name}` does not exist.\")\n",
    "\n",
    "\n",
    "def color_val(color):\n",
    "    \"\"\"color_val.\"\"\"\n",
    "    if isinstance(color, str):\n",
    "        return Color[color].value\n",
    "    if isinstance(color, Color):\n",
    "        return color.value\n",
    "    if isinstance(color, tuple):\n",
    "        assert len(color) == 3\n",
    "        for channel in color:\n",
    "            assert 0 <= channel <= 255\n",
    "        return color\n",
    "    if isinstance(color, int):\n",
    "        assert 0 <= color <= 255\n",
    "        return color, color, color\n",
    "    if isinstance(color, np.ndarray):\n",
    "        assert color.ndim == 1 and color.size == 3\n",
    "        assert np.all((color >= 0) & (color <= 255))\n",
    "        color = color.astype(np.uint8)\n",
    "        return tuple(color)\n",
    "    raise TypeError(f'Invalid type for color: {type(color)}')\n",
    "\n",
    "\n",
    "def imread(image, mode=None):\n",
    "    \"\"\"imread.\"\"\"\n",
    "    if isinstance(image, pathlib.Path):\n",
    "        image = str(image)\n",
    "\n",
    "    if isinstance(image, np.ndarray):\n",
    "        pass\n",
    "    elif isinstance(image, str):\n",
    "        check_file_exist(image)\n",
    "        image = Image.open(image)\n",
    "        if mode:\n",
    "            image = np.array(image.convert(mode))\n",
    "    else:\n",
    "        raise TypeError(\"Image must be a `ndarray`, `str` or Path object.\")\n",
    "\n",
    "    return image\n",
    "\n",
    "\n",
    "def imwrite(image, image_path, auto_mkdir=True):\n",
    "    \"\"\"imwrite.\"\"\"\n",
    "    if auto_mkdir:\n",
    "        dir_name = os.path.abspath(os.path.dirname(image_path))\n",
    "        if dir_name != '':\n",
    "            dir_name = os.path.expanduser(dir_name)\n",
    "            os.makedirs(dir_name, mode=777, exist_ok=True)\n",
    "\n",
    "    image = Image.fromarray(image)\n",
    "    image.save(image_path)\n",
    "\n",
    "\n",
    "def imshow(img, win_name='', wait_time=0):\n",
    "    \"\"\"imshow\"\"\"\n",
    "    cv2.imshow(win_name, imread(img))\n",
    "    if wait_time == 0:  # prevent from hanging if windows was closed\n",
    "        while True:\n",
    "            ret = cv2.waitKey(1)\n",
    "\n",
    "            closed = cv2.getWindowProperty(win_name, cv2.WND_PROP_VISIBLE) < 1\n",
    "            # if user closed window or if some key pressed\n",
    "            if closed or ret != -1:\n",
    "                break\n",
    "    else:\n",
    "        ret = cv2.waitKey(wait_time)\n",
    "\n",
    "\n",
    "def show_result(img: str,\n",
    "                result: Dict[int, float],\n",
    "                text_color: str = 'green',\n",
    "                font_scale: float = 0.5,\n",
    "                row_width: int = 20,\n",
    "                show: bool = False,\n",
    "                win_name: str = '',\n",
    "                wait_time: int = 0,\n",
    "                out_file: Optional[str] = None) -> None:\n",
    "    \"\"\"Mark the prediction results on the picture.\"\"\"\n",
    "    img = imread(img, mode=\"RGB\")\n",
    "    img = img.copy()\n",
    "    x, y = 0, row_width\n",
    "    text_color = color_val(text_color)\n",
    "    for k, v in result.items():\n",
    "        if isinstance(v, float):\n",
    "            v = f'{v:.2f}'\n",
    "        label_text = f'{k}: {v}'\n",
    "        cv2.putText(img, label_text, (x, y), cv2.FONT_HERSHEY_COMPLEX,\n",
    "                    font_scale, text_color)\n",
    "        y += row_width\n",
    "    if out_file:\n",
    "        show = False\n",
    "        imwrite(img, out_file)\n",
    "\n",
    "    if show:\n",
    "        imshow(img, win_name, wait_time)\n",
    "\n",
    "\n",
    "def index2label():\n",
    "    \"\"\"Dictionary output for image numbers and categories of the ImageNet dataset.\"\"\"\n",
    "    metafile = os.path.join(data_path, \"ILSVRC2012_devkit_t12/data/meta.mat\")\n",
    "    meta = io.loadmat(metafile, squeeze_me=True)['synsets']\n",
    "\n",
    "    nums_children = list(zip(*meta))[4]\n",
    "    meta = [meta[idx] for idx, num_children in enumerate(nums_children) if num_children == 0]\n",
    "\n",
    "    _, wnids, classes = list(zip(*meta))[:3]\n",
    "    clssname = [tuple(clss.split(', ')) for clss in classes]\n",
    "    wnid2class = {wnid: clss for wnid, clss in zip(wnids, clssname)}\n",
    "    wind2class_name = sorted(wnid2class.items(), key=lambda x: x[0])\n",
    "\n",
    "    mapping = {}\n",
    "    for index, (_, class_name) in enumerate(wind2class_name):\n",
    "        mapping[index] = class_name[0]\n",
    "    return mapping\n",
    "\n",
    "\n",
    "# Read data for inference\n",
    "for i, image in enumerate(dataset_infer.create_dict_iterator(output_numpy=True)):\n",
    "    image = image[\"image\"]\n",
    "    image = ms.Tensor(image)\n",
    "    prob = model.predict(image)\n",
    "    label = np.argmax(prob.asnumpy(), axis=1)\n",
    "    mapping = index2label()\n",
    "    output = {int(label): mapping[int(label)]}\n",
    "    print(output)\n",
    "    show_result(img=\"./dataset/infer/n01440764/ILSVRC2012_test_00000279.JPEG\",\n",
    "                result=output,\n",
    "                out_file=\"./dataset/infer/ILSVRC2012_test_00000279.JPEG\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "推理过程完成后,在推理文件夹下可以找到图片的推理结果,可以看出预测结果是Doberman,与期望结果相同,验证了模型的准确性。\n",
    "\n",
    "![infer-result](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.3.q1/tutorials/application/source_zh_cn/cv/images/infer_result.jpg)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "pycharm": {
     "name": "#%% md\n"
    }
   },
   "source": [
    "## 总结\n",
    "\n",
    "本案例完成了一个ViT模型在ImageNet数据上进行训练,验证和推理的过程,其中,对关键的ViT模型结构和原理作了讲解。通过学习本案例,理解源码可以帮助用户掌握Multi-Head Attention,TransformerEncoder,pos_embedding等关键概念,如果要详细理解ViT的模型原理,建议基于源码更深层次的详细阅读。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "MindSpore",
   "language": "python",
   "name": "mindspore"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  },
  "vscode": {
   "interpreter": {
    "hash": "e5788d777ab87e4ad6c77f1932229021d078b398b8ba8e5df202525f8cdf984e"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}