{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.4.1/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.4.1/tutorials/zh_cn/beginner/mindspore_autograd.ipynb) [![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.4.1/resource/_static/logo_download_code.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.4.1/tutorials/zh_cn/beginner/mindspore_autograd.py) [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.4.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.4.1/tutorials/source_zh_cn/beginner/autograd.ipynb)\n",
    "\n",
    "[基本介绍](https://www.mindspore.cn/tutorials/zh-CN/r2.4.1/beginner/introduction.html) || [快速入门](https://www.mindspore.cn/tutorials/zh-CN/r2.4.1/beginner/quick_start.html) || [张量 Tensor](https://www.mindspore.cn/tutorials/zh-CN/r2.4.1/beginner/tensor.html) || [数据加载与处理](https://www.mindspore.cn/tutorials/zh-CN/r2.4.1/beginner/dataset.html) || [网络构建](https://www.mindspore.cn/tutorials/zh-CN/r2.4.1/beginner/model.html) || **函数式自动微分** || [模型训练](https://www.mindspore.cn/tutorials/zh-CN/r2.4.1/beginner/train.html) || [保存与加载](https://www.mindspore.cn/tutorials/zh-CN/r2.4.1/beginner/save_load.html) || [使用静态图加速](https://www.mindspore.cn/tutorials/zh-CN/r2.4.1/beginner/accelerate_with_static_graph.html) || [自动混合精度](https://www.mindspore.cn/tutorials/zh-CN/r2.4.1/beginner/mixed_precision.html) ||"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 函数式自动微分\n",
    "\n",
    "神经网络的训练主要使用反向传播算法,模型预测值(logits)与正确标签(label)送入损失函数(loss function)获得loss,然后进行反向传播计算,求得梯度(gradients),最终更新至模型参数(parameters)。自动微分能够计算可导函数在某点处的导数值,是反向传播算法的一般化。自动微分主要解决的问题是将一个复杂的数学运算分解为一系列简单的基本运算,该功能对用户屏蔽了大量的求导细节和过程,大大降低了框架的使用门槛。\n",
    "\n",
    "MindSpore使用函数式自动微分的设计理念,提供更接近于数学语义的自动微分接口`grad`和`value_and_grad`。下面我们使用一个简单的单层线性变换模型进行介绍。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import mindspore\n",
    "from mindspore import nn\n",
    "from mindspore import ops\n",
    "from mindspore import Tensor, Parameter"
   ]
  },
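  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before moving on to the computational graph below, here is a minimal, purely illustrative sketch of the functional style: `mindspore.grad` transforms a function into a new function that returns its derivative. The toy function $f(x)=x^2$ is our own example and not part of the tutorial's model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch (illustrative example only): differentiate f(x) = x^2 at x = 3.\n",
    "def square(x):\n",
    "    return x * x\n",
    "\n",
    "d_square = mindspore.grad(square)  # function transformation: df/dx as a new function\n",
    "print(d_square(Tensor(3.0, mindspore.float32)))  # expected to print a value close to 6.0"
   ]
  },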
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 函数与计算图\n",
    "\n",
    "计算图是用图论语言表示数学函数的一种方式,也是深度学习框架表达神经网络模型的统一方法。我们将根据下面的计算图构造计算函数和神经网络。\n",
    "\n",
    "![compute-graph](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.4.1/tutorials/source_zh_cn/beginner/images/comp-graph.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在这个模型中,$x$为输入,$y$为正确值,$w$和$b$是我们需要优化的参数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "x = ops.ones(5, mindspore.float32)  # input tensor\n",
    "y = ops.zeros(3, mindspore.float32)  # expected output\n",
    "w = Parameter(Tensor(np.random.randn(5, 3), mindspore.float32), name='w') # weight\n",
    "b = Parameter(Tensor(np.random.randn(3,), mindspore.float32), name='b') # bias"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "我们根据计算图描述的计算过程,构造计算函数。\n",
    "其中,[binary_cross_entropy_with_logits](https://www.mindspore.cn/docs/zh-CN/r2.4.1/api_python/ops/mindspore.ops.binary_cross_entropy_with_logits.html) 是一个损失函数,计算预测值和目标值之间的二值交叉熵损失。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "def function(x, y, w, b):\n",
    "    z = ops.matmul(x, w) + b\n",
    "    loss = ops.binary_cross_entropy_with_logits(z, y, ops.ones_like(z), ops.ones_like(z))\n",
    "    return loss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "执行计算函数,可以获得计算的loss值。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.914285"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "loss = function(x, y, w, b)\n",
    "print(loss)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 微分函数与梯度计算"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "为了优化模型参数,需要求参数对loss的导数:$\\frac{\\partial \\operatorname{loss}}{\\partial w}$和$\\frac{\\partial \\operatorname{loss}}{\\partial b}$,此时我们调用`mindspore.grad`函数,来获得`function`的微分函数。\n",
    "\n",
    "这里使用了`grad`函数的两个入参,分别为:\n",
    "\n",
    "- `fn`:待求导的函数。\n",
    "- `grad_position`:指定求导输入位置的索引。\n",
    "\n",
    "由于我们对$w$和$b$求导,因此配置其在`function`入参对应的位置`(2, 3)`。\n",
    "\n",
    "> 使用`grad`获得微分函数是一种函数变换,即输入为函数,输出也为函数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "grad_fn = mindspore.grad(function, (2, 3))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "执行微分函数,即可获得$w$、$b$对应的梯度。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(Tensor(shape=[5, 3], dtype=Float32, value=\n",
       " [[ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "  [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "  [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "  [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "  [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01]]),\n",
       " Tensor(shape=[3], dtype=Float32, value= [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01]))"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "grads = grad_fn(x, y, w, b)\n",
    "print(grads)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Stop Gradient"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "通常情况下,求导时会求loss对参数的导数,因此函数的输出只有loss一项。当我们希望函数输出多项时,微分函数会求所有输出项对参数的导数。此时如果想实现对某个输出项的梯度截断,或消除某个Tensor对梯度的影响,需要用到Stop Gradient操作。\n",
    "\n",
    "这里我们将`function`改为同时输出loss和z的`function_with_logits`,获得微分函数并执行。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "def function_with_logits(x, y, w, b):\n",
    "    z = ops.matmul(x, w) + b\n",
    "    loss = ops.binary_cross_entropy_with_logits(z, y, ops.ones_like(z), ops.ones_like(z))\n",
    "    return loss, z"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(Tensor(shape=[5, 3], dtype=Float32, value=\n",
       " [[ 1.06568694e+00,  1.05373347e+00,  1.30146706e+00],\n",
       "  [ 1.06568694e+00,  1.05373347e+00,  1.30146706e+00],\n",
       "  [ 1.06568694e+00,  1.05373347e+00,  1.30146706e+00],\n",
       "  [ 1.06568694e+00,  1.05373347e+00,  1.30146706e+00],\n",
       "  [ 1.06568694e+00,  1.05373347e+00,  1.30146706e+00]]),\n",
       " Tensor(shape=[3], dtype=Float32, value= [ 1.06568694e+00,  1.05373347e+00,  1.30146706e+00]))"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "grad_fn = mindspore.grad(function_with_logits, (2, 3))\n",
    "grads = grad_fn(x, y, w, b)\n",
    "print(grads)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "可以看到求得$w$、$b$对应的梯度值发生了变化。此时如果想要屏蔽掉z对梯度的影响,即仍只求参数对loss的导数,可以使用`ops.stop_gradient`接口,将梯度在此处截断。我们将`function`实现加入`stop_gradient`,并执行。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "def function_stop_gradient(x, y, w, b):\n",
    "    z = ops.matmul(x, w) + b\n",
    "    loss = ops.binary_cross_entropy_with_logits(z, y, ops.ones_like(z), ops.ones_like(z))\n",
    "    return loss, ops.stop_gradient(z)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(Tensor(shape=[5, 3], dtype=Float32, value=\n",
       " [[ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "  [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "  [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "  [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "  [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01]]),\n",
       " Tensor(shape=[3], dtype=Float32, value= [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01]))"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "grad_fn = mindspore.grad(function_stop_gradient, (2, 3))\n",
    "grads = grad_fn(x, y, w, b)\n",
    "print(grads)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "可以看到,求得$w$、$b$对应的梯度值与初始`function`求得的梯度值一致。"
   ]
  },
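  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`ops.stop_gradient` can also be applied to an intermediate Tensor inside the computation, not only to an extra output. The following is a hedged sketch (our own variation, not part of the original example): detaching the bias contribution leaves the forward value of z unchanged, but the gradient with respect to $b$ is expected to become zero, while the gradient with respect to $w$ is unaffected."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hedged sketch (illustrative variation): block the gradient flowing through b.\n",
    "def function_detached_bias(x, y, w, b):\n",
    "    z = ops.matmul(x, w) + ops.stop_gradient(b)  # b no longer contributes to the gradient\n",
    "    loss = ops.binary_cross_entropy_with_logits(z, y, ops.ones_like(z), ops.ones_like(z))\n",
    "    return loss\n",
    "\n",
    "grad_fn = mindspore.grad(function_detached_bias, (2, 3))\n",
    "grads = grad_fn(x, y, w, b)\n",
    "print(grads[1])  # expected: an all-zero Tensor of shape [3]"
   ]
  },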
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Auxiliary data\n",
    "\n",
    "Auxiliary data意为辅助数据,是函数除第一个输出项外的其他输出。通常我们会将函数的loss设置为函数的第一个输出,其他的输出即为辅助数据。\n",
    "\n",
    "`grad`和`value_and_grad`提供`has_aux`参数,当其设置为`True`时,可以自动实现前文手动添加`stop_gradient`的功能,满足返回辅助数据的同时不影响梯度计算的效果。\n",
    "\n",
    "下面仍使用`function_with_logits`,配置`has_aux=True`,并执行。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "grad_fn = mindspore.grad(function_with_logits, (2, 3), has_aux=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "((Tensor(shape=[5, 3], dtype=Float32, value=\n",
       "  [[ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "   [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "   [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "   [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "   [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01]]),\n",
       "  Tensor(shape=[3], dtype=Float32, value= [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01])),\n",
       " Tensor(shape=[3], dtype=Float32, value= [-1.40476596e+00, -1.64932394e+00,  2.24711204e+00]))"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "grads, (z,) = grad_fn(x, y, w, b)\n",
    "print(grads, z)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "可以看到,求得$w$、$b$对应的梯度值与初始`function`求得的梯度值一致,同时z能够作为微分函数的输出返回。"
   ]
  },
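  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `has_aux` parameter works the same way with `value_and_grad`, which additionally returns the forward outputs alongside the gradients. A short sketch (not executed here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: value_and_grad also accepts has_aux. The returned function yields the forward\n",
    "# outputs (the loss plus the auxiliary z) together with the gradients of the loss alone.\n",
    "value_and_grad_fn = mindspore.value_and_grad(function_with_logits, (2, 3), has_aux=True)\n",
    "outputs, grads = value_and_grad_fn(x, y, w, b)\n",
    "print(outputs)  # forward outputs: loss and the auxiliary z\n",
    "print(grads)    # gradients with respect to w and b, unaffected by z"
   ]
  },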
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 神经网络梯度计算"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "前述章节主要根据计算图对应的函数介绍了MindSpore的函数式自动微分,但我们的神经网络构造是继承自面向对象编程范式的`nn.Cell`。接下来我们通过`Cell`构造同样的神经网络,利用函数式自动微分来实现反向传播。\n",
    "\n",
    "首先我们继承`nn.Cell`构造单层线性变换神经网络。这里我们直接使用前文的$w$、$b$作为模型参数,使用`mindspore.Parameter`进行包装后,作为内部属性,并在`construct`内实现相同的Tensor操作。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define model\n",
    "class Network(nn.Cell):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.w = w\n",
    "        self.b = b\n",
    "\n",
    "    def construct(self, x):\n",
    "        z = ops.matmul(x, self.w) + self.b\n",
    "        return z"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "接下来我们实例化模型和损失函数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Instantiate model\n",
    "model = Network()\n",
    "# Instantiate loss function\n",
    "loss_fn = nn.BCEWithLogitsLoss()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "完成后,由于需要使用函数式自动微分,需要将神经网络和损失函数的调用封装为一个前向计算函数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Define forward function\n",
    "def forward_fn(x, y):\n",
    "    z = model(x)\n",
    "    loss = loss_fn(z, y)\n",
    "    return loss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "完成后,我们使用`value_and_grad`接口获得微分函数,用于计算梯度。\n",
    "\n",
    "由于使用Cell封装神经网络模型,模型参数为Cell的内部属性,此时我们不需要使用`grad_position`指定对函数输入求导,因此将其配置为`None`。对模型参数求导时,我们使用`weights`参数,使用`model.trainable_params()`方法从Cell中取出可以求导的参数。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "grad_fn = mindspore.value_and_grad(forward_fn, None, weights=model.trainable_params())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(Tensor(shape=[5, 3], dtype=Float32, value=\n",
       " [[ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "  [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "  [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "  [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01],\n",
       "  [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01]]),\n",
       " Tensor(shape=[3], dtype=Float32, value= [ 6.56869709e-02,  5.37334494e-02,  3.01467031e-01]))"
      ]
     },
     "execution_count": 17,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "loss, grads = grad_fn(x, y)\n",
    "print(grads)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "执行微分函数,可以看到梯度值和前文`function`求得的梯度值一致。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "MindSpore",
   "language": "python",
   "name": "mindspore"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.5"
  },
  "vscode": {
   "interpreter": {
    "hash": "8c9da313289c39257cb28b126d2dadd33153d4da4d524f730c81a4aaccbd2ca7"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}