{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "2579fa00-0dc6-4672-8a96-5a6bb315881f",
   "metadata": {},
   "source": [
    "# 函数式和对象式融合编程范式\n",
    "\n",
    "[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.1/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.1/docs/mindspore/source_zh_cn/design/programming_paradigm.ipynb)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "65f5d9f4-b38f-4c3a-bf28-5a4bc9510b6e",
   "metadata": {},
   "source": [
    "编程范式(Programming paradigm)是指编程语言的编程风格或编程方式。通常情况下,AI框架均依赖前端编程接口所使用的编程语言的编程范式进行神经网络的构造和训练。MindSpore作为AI+科学计算融合计算框架,分别面向AI、科学计算场景,提供了面向对象编程和函数式编程的支持。同时为提升框架使用的灵活性和易用性,提出了函数式+面向对象融合编程范式,有效地体现了函数式自动微分机制的优势。\n",
    "\n",
    "下面分别介绍MindSpore支持的三类编程范式及其简单示例。"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "62dc35ab-780a-484d-a6bb-113b8e5af1d4",
   "metadata": {},
   "source": [
    "## 面向对象编程\n",
    "\n",
    "面向对象编程(Object-oriented programming, OOP),是指一种将程序分解为封装数据及相关操作的模块(类)而进行的编程方式,对象为类(class)的实例。面向对象编程将对象作为程序的基本单元,将程序和数据封装其中,以提高软件的重用性、灵活性和扩展性,对象里的程序可以访问及经常修改对象相关联的数据。"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "id": "d2f9816d-fa7a-4524-accb-4fa3c30a2dc4",
   "metadata": {},
   "source": [
    "在一般的编程场景中,代码(code)和数据(data)是两个核心构成部分。面向对象编程是针对特定对象(Object)来设计数据结构,定义类(Class)。类通常由以下两部分构成,分别对应了code和data:\n",
    "\n",
    "- 方法(Methods)\n",
    "- 属性(Attributes)\n",
    "\n",
    "对于同一个Class实例化(instantiation)后得到的不同对象而言,方法和属性相同,不同的是属性的值。不同的属性值决定了对象的内部状态,因此OOP能够很好地进行状态管理。\n",
    "\n",
    "下面为Python构造简单类的示例:\n",
    "\n",
    "```python\n",
    "class Sample: #class declaration\n",
    "    def __init__(self, name): # class constructor (code)\n",
    "        self.name = name # attribute (data)\n",
    "\n",
    "    def set_name(self, name): # method declaration (code)\n",
    "        self.name = name # method implementation (code)\n",
    "```"
   ]
  },
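  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal usage sketch of the `Sample` class above (the name strings are arbitrary):\n",
    "\n",
    "```python\n",
    "a = Sample('a') # instantiate; each object holds its own attribute value\n",
    "b = Sample('b')\n",
    "a.set_name('a2') # calling a method mutates only this object's state\n",
    "print(a.name, b.name) # a2 b\n",
    "```"
   ]
  },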
  {
   "cell_type": "markdown",
   "id": "b5584f1a-2bf6-4ea1-a393-99aa015926d8",
   "metadata": {},
   "source": [
    "对于构造神经网络来说,首要的组件就是网络层(Layer),一个神经网络层包含以下部分:\n",
    "\n",
    "- Tensor操作 (Operation)\n",
    "- 权重 (Weights)\n",
    "\n",
    "此二者恰好与类的Methods和Attributes一一对应,同时权重本身就是神经网络层的内部状态,因此使用类来构造Layer天然符合其定义。此外,我们在编程时希望使用神经网络层进行堆叠,构造深度神经网络,使用OOP编程可以很容易地通过Layer对象组合构造新的Layer类。\n",
    "\n",
    "下面为使用MindSpore构造神经网络类的示例:\n",
    "\n",
    "```python\n",
    "from mindspore import nn, Parameter\n",
    "from mindspore.common.initializer import initializer\n",
    "\n",
    "class Linear(nn.Cell):\n",
    "    def __init__(self, in_features, out_features, has_bias): # class constructor (code)\n",
    "        super().__init__()\n",
    "        self.weight = Parameter(initializer('normal', [out_features, in_features], mindspore.float32), 'weight') # layer weight (data)\n",
    "        self.bias = Parameter(initializer('zeros', [out_features], mindspore.float32), 'bias') # layer weight (data)\n",
    "\n",
    "    def construct(self, inputs): # method declaration (code)\n",
    "        output = ops.matmul(inputs, self.weight.transpose(0, 1)) # tensor transformation (code)\n",
    "        output = output + self.bias # tensor transformation (code)\n",
    "        return output\n",
    "```"
   ]
  },
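  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal usage sketch of the `Linear` layer above (the shapes are illustrative, assuming the class definition has been run):\n",
    "\n",
    "```python\n",
    "import mindspore\n",
    "from mindspore import ops\n",
    "\n",
    "layer = Linear(4, 3) # in_features=4, out_features=3\n",
    "x = ops.ones((2, 4), mindspore.float32) # a batch of 2 samples\n",
    "print(layer(x).shape) # (2, 3)\n",
    "```"
   ]
  },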
  {
   "cell_type": "markdown",
   "id": "72d6cbde",
   "metadata": {},
   "source": [
    "除神经网络层的构造使用面向对象编程范式外,MindSpore支持纯面向对象编程方式构造神经网络训练逻辑,此时神经网络的正向计算、反向传播、梯度优化等操作均使用类进行构造。下面是纯面向对象编程的示例:\n",
    "\n",
    "```python\n",
    "import mindspore\n",
    "import mindspore.nn as nn\n",
    "from mindspore import value_and_grad\n",
    "\n",
    "class TrainOneStepCell(nn.Cell):\n",
    "    def __init__(self, network, optimizer):\n",
    "        super().__init__()\n",
    "        self.network = network\n",
    "        self.optimizer = optimizer\n",
    "        self.grad_fn = value_and_grad(self.network, None, self.optimizer.parameters)\n",
    "\n",
    "    def construct(self, *inputs):\n",
    "        loss, grads = self.grad_fn(*inputs)\n",
    "        self.optimizer(grads)\n",
    "        return loss\n",
    "\n",
    "network = nn.Dense(5, 3)\n",
    "loss_fn = nn.BCEWithLogitsLoss()\n",
    "network_with_loss = nn.WithLossCell(network, loss_fn)\n",
    "optimizer = nn.SGD(network.trainable_params(), 0.001)\n",
    "trainer = TrainOneStepCell(network_with_loss, optimizer)\n",
    "```\n",
    "\n",
    "此时,不论是神经网络,还是其训练过程,均使用继承`nn.Cell`的类进行管理,可以方便地作为计算图进行编译加速。"
   ]
  },
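  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of one training step with synthetic data (the batch size is arbitrary; shapes follow the `nn.Dense(5, 3)` network above):\n",
    "\n",
    "```python\n",
    "import mindspore\n",
    "from mindspore import ops\n",
    "\n",
    "inputs = ops.ones((16, 5), mindspore.float32) # synthetic batch of 16 samples\n",
    "labels = ops.zeros((16, 3), mindspore.float32) # BCEWithLogitsLoss expects float labels\n",
    "loss = trainer(inputs, labels) # one forward/backward/update step\n",
    "print(loss)\n",
    "```"
   ]
  },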
  {
   "cell_type": "markdown",
   "id": "eea3afc9-7172-4175-a836-374f97589b71",
   "metadata": {},
   "source": [
    "## 函数式编程\n",
    "\n",
    "函数式编程(Functional programming)是一种将计算机运算视为函数运算,并且避免使用程序状态以及可变对象的编程范式。\n",
    "\n",
    "在函数式编程中,函数被视为一等公民,这意味着它们可以绑定到名称(包括本地标识符),作为参数传递,并从其他函数返回,就像任何其他数据类型一样。这允许以声明性和可组合的风格编写程序,其中小功能以模块化方式组合。函数式编程有时被视为纯函数式编程的同义词,是将所有函数视为确定性数学函数或纯函数的函数式编程的一个子集。当使用一些给定参数调用纯函数时,它将始终返回相同的结果,并且不受任何可变状态或其他副作用的影响。\n",
    "\n",
    "函数式编程有两个核心特点,使其十分符合科学计算的需要:\n",
    "\n",
    "1. 编程函数语义与数学函数语义完全对等。\n",
    "2. 确定性,给定相同输入必然返回相同输出。无副作用。\n",
    "\n",
    "由于确定性这一特点,通过限制副作用,程序可以有更少的错误,更容易调试和测试,更适合形式验证。\n",
    "\n",
    "MindSpore提供纯函数式编程的支持,配合`mindspore.numpy`及`mindspore.scipy`提供的数值计算接口,可以便捷地进行科学计算编程。下面是使用函数式编程的示例:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "edf6610b-aa3f-4073-880c-fc88f416f26d",
   "metadata": {},
   "source": [
    "```python\n",
    "import mindspore.numpy as mnp\n",
    "from mindspore import grad\n",
    "\n",
    "grad_tanh = grad(mnp.tanh)\n",
    "print(grad_tanh(2.0))\n",
    "# 0.070650816\n",
    "\n",
    "print(grad(grad(mnp.tanh))(2.0))\n",
    "print(grad(grad(grad(mnp.tanh)))(2.0))\n",
    "# -0.13621868\n",
    "# 0.25265405\n",
    "```\n",
    "\n",
    "配合函数式编程范式的需要,MindSpore提供了多种函数变换接口,涵盖包括自动微分、自动向量化、自动并行、即时编译、数据下沉等功能模块,下面简单进行介绍:\n",
    "\n",
    "- 自动微分:`grad`、`value_and_grad`,提供微分函数变换功能;\n",
    "- 自动向量化:`vmap`,用于沿参数轴映射函数 fn 的高阶函数;\n",
    "- 自动并行:`shard`,函数式算子切分,指定函数输入/输出Tensor的分布策略;\n",
    "- 即时编译:`jit`,将Python函数编译为一张可调用的MindSpore图;\n",
    "- 数据下沉:`data_sink`,对输入的函数进行变换,获得可使用数据下沉模式的函数。\n",
    "\n",
    "基于上述函数变换接口,在使用函数式编程范式时可以快速高效地使用函数变换实现复杂的功能。"
   ]
  },
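  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A hedged sketch of composing function transforms (the function and batch shape here are illustrative, not from the original text): it combines `vmap` and `jit` to vectorize and then compile a simple function:\n",
    "\n",
    "```python\n",
    "import mindspore\n",
    "import mindspore.numpy as mnp\n",
    "from mindspore import Tensor, jit, vmap\n",
    "\n",
    "def row_fn(x):\n",
    "    return mnp.tanh(x) * x # an arbitrary function applied to one row\n",
    "\n",
    "batched_fn = jit(vmap(row_fn)) # map over axis 0, then compile the result\n",
    "x = Tensor([[1.0, 2.0], [3.0, 4.0]], mindspore.float32)\n",
    "print(batched_fn(x)) # same shape as x, computed row by row\n",
    "```"
   ]
  },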
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 函数式微分编程\n",
    "\n",
    "### 自动微分\n",
    "\n",
    "深度学习等现代AI算法,通过使用大量的数据来学习拟合出一个优化后带参模型,其中使用的学习算法,多是基于现实数据自模型中的经验误差来反向传播以更新模型的参数,**自动微分技术(Automatic Differentiation,AD)**正是其中的关键技术。\n",
    "\n",
    "自动微分是一种介于数值微分与符号微分之间的一种求导方法。自动微分的核心思想是将计算机程序中的运算操作分解为一个有限的基本操作合集,且合集中基本操作的求导规则均为已知的。在完成每一个基本操作的求导后,使用链式求导法则将结果组合得到整体程序的求导结果。\n",
    "\n",
    "链式求导法则:\n",
    "\n",
    "$$\n",
    "(f\\circ g)^{'}(x)=f^{'}(g(x))g^{'}(x) \\tag{1}\n",
    "$$\n",
    "\n",
    "根据对分解后的基本操作求导和链式规则的组合不同,自动微分可以分为**前向模式**和**反向模式**。\n",
    "\n",
    "- 前向自动微分(Forward Automatic Differentiation,也叫做 tangent linear mode AD)或者前向累积梯度(前向模式)。\n",
    "\n",
    "- 后向自动微分(Reverse Automatic Differentiation,也叫做 adjoint mode AD)或者说反向累计梯度(反向模式)。\n",
    "\n",
    "我们以公式 (2) 为例介绍前向微分与反向微分的具体计算方式:\n",
    "\n",
    "$$\n",
    "y=f(x_{1},x_{2})=ln(x_{1})+x_{1}x_{2}-sin(x_{2}) \\tag{2}\n",
    "$$\n",
    "\n",
    "当我们使用前向自动微分求公式 (2) 在$x_{1}=2,x_{2}=5$处的导数 $\\frac{\\partial y}{\\partial x_{1}}$ 时,前向自动微分的求导方向与原函数的求值方向一致,原函数结果与微分结果可以被同时获得。\n",
    "\n",
    "![forward](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.1/docs/mindspore/source_zh_cn/design/images/auto_gradient_forward.png)\n",
    "\n",
    "当使用反向自动微分时,反向自动微分的求导方向与原函数的求值方向相反,微分结果需要依赖原函数的运行结果。\n",
    "\n",
    "![backward](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.1/docs/mindspore/source_zh_cn/design/images/auto_gradient_backward.png)\n",
    "\n",
    "MindSpore先构建的是基于反向模式的自动微分,并在该方法的基础上实现了正向微分。\n",
    "\n",
    "为了进一步说明前向微分与反向微分的区别,我们将被求导的原函数,泛化为具有N输入与M输出的函数F:\n",
    "\n",
    "$$\n",
    "(Y_{1},Y_{2},...,Y_{M})=F(X_{1},X_{2},...,X_{N}) \\tag{3}\n",
    "$$\n",
    "\n",
    "函数 $F()$ 的导数本身为一个雅可比矩阵(Jacobian matrix)。\n",
    "\n",
    "$$\n",
    "\\left[\n",
    " \\begin{matrix}\n",
    "   \\frac{\\partial Y_{1}}{\\partial X_{1}}& ... & \\frac{\\partial Y_{1}}{\\partial X_{N}} \\\\\n",
    "   ... & ... & ... \\\\\n",
    "   \\frac{\\partial Y_{M}}{\\partial X_{1}} & ... & \\frac{\\partial Y_{M}}{\\partial X_{N}}\n",
    "  \\end{matrix}\n",
    "  \\right]\n",
    "\\tag{4}\n",
    "$$\n",
    "\n",
    "#### 前向自动微分\n",
    "\n",
    "在前向自动微分当中,我们是从输入开始向输出的方向计算的,因此每一次计算我们可以求得输出对某一输入的导数,即雅可比矩阵中的一列。\n",
    "\n",
    "$$\n",
    "\\left[\n",
    " \\begin{matrix}\n",
    "   \\frac{\\partial Y_{1}}{\\partial X_{1}}\\\\\n",
    "   ...  \\\\\n",
    "   \\frac{\\partial Y_{M}}{\\partial X_{1}}\n",
    "  \\end{matrix}\n",
    "  \\right]\n",
    "\\tag{5}\n",
    "$$\n",
    "\n",
    "为了求取该列的值,自动微分将程序分解为一系列求导规则已知的基本操作,这些基本操作也可以被泛化表达为具有$n$输入和$m$输出的函数$f$:\n",
    "\n",
    "$$\n",
    "(y_{1},y_{2},...,y_{m})=f(x_{1},x_{2},...,x_{n}) \\tag{6}\n",
    "$$\n",
    "\n",
    "由于我们的已知基础函数 $f$ 的求导规则,即 $f$ 的雅可比矩阵是已知的。于是我们可以对$f$计算雅可比向量积(Jvp, Jacobian-vector-product),并应用链式求导法则获得导数结果。\n",
    "\n",
    "$$\n",
    "\\left[\n",
    " \\begin{matrix}\n",
    "   \\frac{\\partial y_{1}}{\\partial X_{i}}\\\\\n",
    "   ...  \\\\\n",
    "   \\frac{\\partial y_{m}}{\\partial X_{i}}\n",
    "  \\end{matrix}\n",
    "  \\right]=\\left[\n",
    " \\begin{matrix}\n",
    "   \\frac{\\partial y_{1}}{\\partial x_{1}}& ... & \\frac{\\partial y_{1}}{\\partial x_{n}} \\\\\n",
    "   ... & ... & ... \\\\\n",
    "   \\frac{\\partial y_{m}}{\\partial x_{1}} & ... & \\frac{\\partial y_{M}}{\\partial x_{n}}\n",
    "  \\end{matrix}\n",
    "  \\right]\\left[\n",
    " \\begin{matrix}\n",
    "   \\frac{\\partial x_{1}}{\\partial X_{i}}\\\\\n",
    "   ...  \\\\\n",
    "   \\frac{\\partial x_{n}}{\\partial X_{i}}\n",
    "  \\end{matrix}\n",
    "  \\right]\n",
    "\\tag{7}\n",
    "$$\n",
    "\n",
    "#### 反向自动微分\n",
    "\n",
    "在反向自动微分当中,我们是从输出开始向输入的方向计算的,因此每一次计算我们可以求得某一输出对输入的导数,即雅可比矩阵中的一行。\n",
    "\n",
    "$$\n",
    "\\left[\n",
    " \\begin{matrix}\n",
    "   \\frac{\\partial Y_{1}}{\\partial X_{1}}& ... & \\frac{\\partial Y_{1}}{\\partial X_{N}} \\\\\n",
    "  \\end{matrix}\n",
    "  \\right]\n",
    "\\tag{8}\n",
    "$$\n",
    "\n",
    "为了求取该列的值,自动微分将程序分解为一系列求导规则已知的基本操作,这些基本操作也可以被泛化表达为具有n输入和m输出的函数$f$:\n",
    "\n",
    "$$\n",
    "(y_{1},y_{2},...,y_{m})=f(x_{1},x_{2},...,x_{n}) \\tag{9}\n",
    "$$\n",
    "\n",
    "由于我们的已知基础函数$f$的求导规则,即f的雅可比矩阵是已知的。于是我们可以对$f$计算向量雅可比积(Vjp, Vector-jacobian-product),并应用链式求导法则获得导数结果。\n",
    "\n",
    "$$\n",
    "\\left[\n",
    " \\begin{matrix}\n",
    "   \\frac{\\partial Y_{j}}{\\partial x_{1}}& ... & \\frac{\\partial Y_{j}}{\\partial x_{N}} \\\\\n",
    "  \\end{matrix}\n",
    "  \\right]=\\left[\n",
    " \\begin{matrix}\n",
    "   \\frac{\\partial Y_{j}}{\\partial y_{1}}& ... & \\frac{\\partial Y_{j}}{\\partial y_{m}} \\\\\n",
    "  \\end{matrix}\n",
    "  \\right]\\left[\n",
    " \\begin{matrix}\n",
    "   \\frac{\\partial y_{1}}{\\partial x_{1}}& ... & \\frac{\\partial y_{1}}{\\partial x_{n}} \\\\\n",
    "   ... & ... & ... \\\\\n",
    "   \\frac{\\partial y_{m}}{\\partial x_{1}} & ... & \\frac{\\partial y_{m}}{\\partial x_{n}}\n",
    "  \\end{matrix}\n",
    "  \\right]\n",
    "\\tag{10}\n",
    "$$"
   ]
  },
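  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a numeric check of the worked example above, the following sketch differentiates formula (2) at $x_{1}=2, x_{2}=5$ with the reverse-mode interface `grad` (`grad_position=0` selects the derivative with respect to the first input):\n",
    "\n",
    "```python\n",
    "import mindspore\n",
    "import mindspore.numpy as mnp\n",
    "from mindspore import Tensor, grad\n",
    "\n",
    "def f(x1, x2):\n",
    "    return mnp.log(x1) + x1 * x2 - mnp.sin(x2)\n",
    "\n",
    "x1 = Tensor(2.0, mindspore.float32)\n",
    "x2 = Tensor(5.0, mindspore.float32)\n",
    "print(grad(f, grad_position=0)(x1, x2)) # 1/x1 + x2 = 5.5\n",
    "```"
   ]
  },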
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### `grad`实现\n",
    "\n",
    "MindSpore中`grad`使用的是反向自动微分模式,即从正向网络的输出开始计算梯度。\n",
    "\n",
    "#### `grad`算法设计\n",
    "\n",
    "设模型定义的原函数为:\n",
    "\n",
    "$$\n",
    "f(g(x, y, z)) \\tag{11}\n",
    "$$\n",
    "\n",
    "则$f()$对$x$的梯度为:\n",
    "\n",
    "$$\n",
    "\\frac{df}{dx}=\\frac{df}{dg}\\frac{dg}{dx}\\frac{dx}{dx}+\\frac{df}{dg}\\frac{dg}{dy}\\frac{dy}{dx}+\\frac{df}{dg}\\frac{dg}{dz}\\frac{dz}{dx}\\tag{12}\n",
    "$$\n",
    "\n",
    "$\\frac{df}{dy}$和$\\frac{df}{dz}$与$\\frac{df}{dx}$类似。\n",
    "\n",
    "应用链式求导法则,对每个函数(包括算子和图)定义梯度函数`bprop: dout->(df, dinputs)`,这里`df`表示函数对自由变量(函数外定义的变量)的梯度,`dinputs`是对函数输入的梯度。在此基础上,应用全微分法则,将`(df, dinputs)`累加到对应的变量。\n",
    "\n",
    "MindIR实现了分支,循环,闭包的函数表达式,所以对相应的算子实现正确的反向规则即可求得输入函数的梯度函数。\n",
    "\n",
    "定义运算符K,反向自动微分算法可以简单表示如下:\n",
    "\n",
    "```text\n",
    "v = (func, inputs)\n",
    "F(v): {\n",
    "    (result, bprop) = K(func)(K(inputs))\n",
    "    df, dinputs = bprop(dout)\n",
    "    v.df += df\n",
    "    v.dinputs += dinputs\n",
    "}\n",
    "```\n",
    "\n",
    "#### `grad`算法实现\n",
    "\n",
    "在自动微分流程中,需要进行自动微分的函数会被取出。并作为自动微分模块的输入,并输出对应的梯度图。\n",
    "\n",
    "MindSpore的自动微分模块实现了从原函数对象到梯度函数对象的转换。转换后的对象为`fprop`形式的梯度函数对象。\n",
    "\n",
    "`fprop = (forward_result, bprop)`、`forward_result`是前向计算图的输出节点, `bprop`是以`fprop`的闭包对象形式生成的梯度函数,它只有`dout`一个入参, `inputs`和`outputs`是引用的`fprop`的输入和输出。\n",
    "\n",
    "```c++\n",
    "  MapObject();    // 实现ValueNode/Parameter/FuncGraph/Primitive对象的映射\n",
    "  MapMorphism();  // 实现CNode的态射\n",
    "  res = k_graph(); // res就是梯度函数的fprop对象\n",
    "```\n",
    "\n",
    "在生成梯度函数对象的过程中,需要完成从原函数到梯度函数的一系列的映射, 即为每个原函数中的节点生成其所对应的梯度函数的节点,再按照反向自动微分的规则将这些节点连接在一起,生成梯度函数图。\n",
    "\n",
    "每张原函数对象的子图都会都会生成一个`Dfunctor`对象,负责将该原函数对象映射为梯度函数对象。`DFunctor`主要需要经过 `MapObject`, `MapMorphism`两步来实现这种映射关系。\n",
    "\n",
    "`MapObject`实现了原函数节点到梯度函数节点的映射,具体包括对自由变量,参数节点以及ValueNode的映射。\n",
    "\n",
    "```c++\n",
    "MapFvObject();    // 自由变量的映射\n",
    "MapParamObject(); // 参数节点的映射\n",
    "MapValueObject(); // ValueNode的映射\n",
    "```\n",
    "\n",
    "- `MapFvObject`是对自由变量的映射;\n",
    "\n",
    "- `MapParamObject`是对参数节点的映射;\n",
    "\n",
    "- `MapValueObject`中主要对`Primitive`以及`FuncGraph`对象进行映射。\n",
    "\n",
    "其中,对`FuncGraph`进行的映射同样需要为该子图创造相应的`DFunctor`,是一个递归的过程。 `Primitive`表明了算子的种类,为了支持自动微分,需要为每一种`Primitive`定义其对应的反向微分函数。\n",
    "\n",
    "MindSpore将这些定义放在了Python侧,以`sin`算子为例:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import mindspore.ops as ops\n",
    "from mindspore.ops._grad.grad_base import bprop_getters\n",
    "\n",
    "@bprop_getters.register(ops.Sin)\n",
    "def get_bprop_sin(self):\n",
    "    \"\"\"Grad definition for `Sin` operation.\"\"\"\n",
    "    cos = ops.Cos()\n",
    "\n",
    "    def bprop(x, out, dout):\n",
    "        dx = dout * cos(x)\n",
    "        return (dx,)\n",
    "\n",
    "    return bprop"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`x`为原函数对象`sin`的输入,`out`为原函数对象`sin`的输出,`dout`为当前累加的梯度输入。\n",
    "\n",
    "当`MapObject`完成对以上节点的映射后,`MapMorphism`从原函数的输出节点开始以递归的方式实现对`CNode`的态射,建立起节点间的反向传播链接,实现梯度累加。\n",
    "\n",
    "#### `grad`示例\n",
    "\n",
    "我们构建一个简单的网络来表示公式:\n",
    "\n",
    "$$\n",
    "f(x) = cos(sin(x)) \\tag{13}\n",
    "$$\n",
    "\n",
    "并对公式(13)的输入`x`进行求导:\n",
    "\n",
    "$$\n",
    "f'(x) = -sin(sin(x)) * cos(x) \\tag{14}\n",
    "$$\n",
    "\n",
    "在MindSpore中公式(13)的网络的结构实现为:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "import mindspore.nn as nn\n",
    "\n",
    "class Net(nn.Cell):\n",
    "    def __init__(self):\n",
    "        super(Net, self).__init__()\n",
    "        self.sin = ops.Sin()\n",
    "        self.cos = ops.Cos()\n",
    "\n",
    "    def construct(self, x):\n",
    "        a = self.sin(x)\n",
    "        out = self.cos(a)\n",
    "        return out"
   ]
  },
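  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick check (assuming the cells above have been run), the gradient computed by `mindspore.grad` can be compared against the analytic derivative in formula (14):\n",
    "\n",
    "```python\n",
    "import mindspore\n",
    "from mindspore import Tensor, grad\n",
    "\n",
    "net = Net()\n",
    "x = Tensor(1.0, mindspore.float32)\n",
    "print(grad(net)(x)) # automatic differentiation of cos(sin(x))\n",
    "print(-ops.sin(ops.sin(x)) * ops.cos(x)) # analytic result, formula (14)\n",
    "```"
   ]
  },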
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "正向网络的结构为:\n",
    "\n",
    "![auto-gradient-foward](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.1/docs/mindspore/source_zh_cn/design/images/auto_gradient_foward.png)\n",
    "\n",
    "对该网络进行反向微分后,所得微分网络结构为:\n",
    "\n",
    "![auto-gradient-forward2](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.1/docs/mindspore/source_zh_cn/design/images/auto_gradient_forward2.png)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 前向自动微分实现\n",
    "\n",
    "除了支持反向自动微分的`grad`之外,MindSpore还扩展实现了前向自动微分`jvp`(Jacobian-Vector-Product)。\n",
    "\n",
    "相比于反向自动微分,前向自动微分更适合于求取输入维度小于输出维度的网络的梯度。MindSpore的前向自动微分是基于反向自动微分接口`grad`开发的。\n",
    "\n",
    "![auto-gradient-jvp](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.1/docs/mindspore/source_zh_cn/design/images/auto_gradient_jvp.png)\n",
    "\n",
    "黑色为网络的正向流程,第一次求导为针对$x$的求导,得到的是蓝色的图。第二次的为蓝色图针对$v$的求导,得到的是黄色的图。\n",
    "\n",
    "黄色的图就是我们所需要的前向模式自动微分的结果图。由于蓝色图可以视为关于$v$的线性函数,蓝色节点与黄色节点之间不会存在连边。蓝色节点全部为悬空节点,会被消除,真正运行的就只有原函数节点以及前向微分的节点。因此,该方法不会有额外的运行开销。\n",
    "\n",
    "#### 参考文献\n",
    "\n",
    "[1] Baydin, A.G. et al., 2018. [Automatic differentiation in machine learning: A survey](https://arxiv.org/abs/1502.05767). arXiv.org. \\[Accessed September 1, 2021\\].\n"
   ]
  },
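  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the forward-mode interface, assuming the `mindspore.jvp(fn, inputs, v)` signature, which returns the function value together with the Jacobian-vector product:\n",
    "\n",
    "```python\n",
    "import mindspore\n",
    "import mindspore.ops as ops\n",
    "from mindspore import Tensor, jvp\n",
    "\n",
    "def f(x):\n",
    "    return ops.sin(x)\n",
    "\n",
    "x = Tensor(2.0, mindspore.float32)\n",
    "v = Tensor(1.0, mindspore.float32) # tangent vector\n",
    "out, jvp_out = jvp(f, (x,), (v,)) # out = sin(2.0), jvp_out = cos(2.0) * v\n",
    "print(out, jvp_out)\n",
    "```"
   ]
  },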
  {
   "cell_type": "markdown",
   "id": "680306ff-ecba-4b1f-be44-df6ac1935f91",
   "metadata": {},
   "source": [
    "## 函数式+面向对象融合编程"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d74658ba-41e7-4178-932b-ae8cd888902a",
   "metadata": {},
   "source": [
    "考虑到神经网络模型构建和训练流程的灵活性和易用性需求,结合MindSpore自身的函数式自动微分机制,MindSpore针对AI模型训练设计了函数式+面向对象融合编程范式,可以兼顾面向对象编程和函数式编程的优势,同时使用同一套自动微分机制实现深度学习反向传播和科学计算自动微分的兼容,从底层支持AI和科学计算建模的兼容。下面是函数式+面向对象融合编程的典型过程:\n",
    "\n",
    "1. 用类构建神经网络;\n",
    "2. 实例化神经网络对象;\n",
    "3. 构造正向函数,连接神经网络和损失函数;\n",
    "4. 使用函数变换,获得梯度计算(反向传播)函数;\n",
    "5. 构造训练过程函数;\n",
    "6. 调用函数进行训练。\n",
    "\n",
    "下面是函数式+面向对象融合编程的简单示例:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "673cec96-9965-46da-b5a4-2d705ee869e3",
   "metadata": {},
   "source": [
    "```python\n",
    "# Class definition\n",
    "class Net(nn.Cell):\n",
    "    def __init__(self):\n",
    "        ......\n",
    "    def construct(self, inputs):\n",
    "        ......\n",
    "\n",
    "# Object instantiation\n",
    "net = Net() # network\n",
    "loss_fn = nn.CrossEntropyLoss() # loss function\n",
    "optimizer = nn.Adam(net.trainable_params(), lr) # optimizer\n",
    "\n",
    "# define forward function\n",
    "def forword_fn(inputs, targets):\n",
    "    logits = net(inputs)\n",
    "    loss = loss_fn(logits, targets)\n",
    "    return loss, logits\n",
    "\n",
    "# get grad function\n",
    "grad_fn = value_and_grad(forward_fn, None, optim.parameters, has_aux=True)\n",
    "\n",
    "# define train step function\n",
    "def train_step(inputs, targets):\n",
    "    (loss, logits), grads = grad_fn(inputs, targets) # get values and gradients\n",
    "    optimizer(grads) # update gradient\n",
    "    return loss, logits\n",
    "\n",
    "for i in range(epochs):\n",
    "    for inputs, targets in dataset():\n",
    "        loss = train_step(inputs, targets)\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "49aadacc-d515-47a9-945b-f255e02508ae",
   "metadata": {},
   "source": [
    "如上述示例,在神经网络构造时,使用面向对象编程,神经网络层的构造方式符合AI编程的习惯。在进行前向计算和反向传播时,MindSpore使用函数式编程,将前向计算构造为函数,然后通过函数变换,获得`grad_fn`,最后通过执行`grad_fn`获得权重对应的梯度。\n",
    "\n",
    "通过函数式+面向对象融合编程,即保证了神经网络构建的易用性,同时提高了前向计算和反向传播等训练过程的灵活性,是MindSpore推荐的默认编程范式。"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "MindSpore",
   "language": "python",
   "name": "mindspore"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.5"
  },
  "vscode": {
   "interpreter": {
    "hash": "8c9da313289c39257cb28b126d2dadd33153d4da4d524f730c81a4aaccbd2ca7"
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}