{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "0ec30174",
   "metadata": {},
   "source": [
    "# 使能图算融合\n",
    "\n",
    "[![在线运行](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/resource/_static/logo_modelarts.svg)](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9taW5kc3BvcmUtd2Vic2l0ZS5vYnMuY24tbm9ydGgtNC5teWh1YXdlaWNsb3VkLmNvbS9ub3RlYm9vay9yMi4yL3R1dG9yaWFscy9leHBlcnRzL3poX2NuL29wdGltaXplL21pbmRzcG9yZV9ncmFwaF9mdXNpb25fZW5naW5lLmlweW5i&imageid=4c43b3ad-9df7-4b83-a096-c775dc4ba243) [![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/resource/_static/logo_notebook.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.2/tutorials/experts/zh_cn/optimize/mindspore_graph_fusion_engine.ipynb) [![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/resource/_static/logo_download_code.svg)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r2.2/tutorials/experts/zh_cn/optimize/mindspore_graph_fusion_engine.py) [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/resource/_static/logo_source.svg)](https://gitee.com/mindspore/docs/blob/r2.2/tutorials/experts/source_zh_cn/optimize/graph_fusion_engine.ipynb)\n",
    "\n",
    "## 概述\n",
    "\n",
    "图算融合是MindSpore特有的网络性能优化技术。它可以通过自动分析和优化现有网络计算图逻辑,并结合目标硬件能力,对计算图进行计算化简和替代、算子拆分和融合、算子特例化编译等优化,以提升设备计算资源利用率,实现对网络性能的整体优化。相比传统优化技术,图算融合具有多算子跨边界联合优化、与MindSpore AKG(基于Polyhedral的算子编译器)跨层协同、即时编译等独特优势。另外,图算融合只需要用户打开对应配置后,整个优化过程即可自动完成,不需要网络开发人员进行其它额外感知,使得用户可以聚焦网络算法实现。\n",
    "\n",
 MindSpore默认">
     "> MindSpore AKG is installed automatically with MindSpore by default. For the CPU backend with MindSpore installed from source, make sure [llvm 12.0.1](https://github.com/llvm/llvm-project/archive/refs/tags/llvmorg-12.0.1.tar.gz) is installed correctly.\n",
    "\n",
    "图算融合的适用场景包括:\n",
    "\n",
    "- 对网络执行时间具有较高性能要求的场景;\n",
    "- 通过拼接基本算子实现自定义组合算子,并希望对这些基本算子进行自动融合,以提升自定义组合算子性能的场景。\n",
    "\n",
    "## 使用方法\n",
    "\n",
    "当前图算融合优化默认关闭状态,我们只需在训练脚本中为`context`指定参数`enable_graph_kernel=True`即可启用图算融合:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "f90e449f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import mindspore as ms\n",
    "ms.set_context(enable_graph_kernel=True)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9a01be76",
   "metadata": {},
   "source": [
 - 图算融合优化可以">
     "> - Graph kernel fusion can be enabled in both Graph and PyNative modes. Once enabled, fusion is applied selectively according to the characteristics of the computation graph and the current graph kernel optimization capabilities, which may also change and evolve across versions. For example, in PyNative mode fusion is currently applied selectively to jit subgraphs or backward subgraphs, and some dynamic-shape operators may be skipped (see the sketch after these notes).\n",
     "> - In most scenarios, graph kernel fusion brings a positive performance gain with the same or very similar computational accuracy. In rare cases, however, performance may degrade, and differences in operator implementations may introduce slight accuracy changes. Users are advised to enable it selectively based on their own scenarios.\n",
     "> - On the CPU platform, graph kernel fusion uses [OpenMP](https://www.openmp.org/) parallel computing to accelerate operators. For better execution performance, set the OMP_NUM_THREADS environment variable to specify the number of OpenMP threads; a positive integer no greater than the number of CPU cores is recommended, for example: `export OMP_NUM_THREADS=10`\n",
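 - 对于CPU平台">
     "\n",
     "For illustration, here is a minimal sketch (not part of the original tutorial) of how a computation can be exposed to graph kernel fusion in PyNative mode by wrapping it in a `ms.jit` subgraph. Whether the operators are actually fused depends on the MindSpore version and backend:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "import mindspore as ms\n",
     "import mindspore.ops as ops\n",
     "\n",
     "# PyNative mode with graph kernel fusion enabled; fusion is applied\n",
     "# selectively to jit-compiled subgraphs, as described in the notes above.\n",
     "ms.set_context(mode=ms.PYNATIVE_MODE, device_target=\"GPU\")\n",
     "ms.set_context(enable_graph_kernel=True)\n",
     "\n",
     "@ms.jit\n",
     "def fused_fn(x):\n",
     "    # The mul and add inside this jit subgraph are fusion candidates.\n",
     "    return ops.add(ops.mul(x, 2.0), 1.0)\n",
     "\n",
     "x = ms.Tensor(np.ones((4, 4)).astype(np.float32) * 0.5)\n",
     "print(fused_fn(x))\n",
     "```\n",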
    "\n",
    "### 样例脚本\n",
    "\n",
    "为了说明图算融合优化场景,我们构造了一个简单网络`MyNet`, 包含一个乘法和加法计算。在打开图算融合进行优化之后,这两个计算便会自动合成一个融合算子:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "b11ed122",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "result: [[2. 2. 2. 2.]\n",
      " [2. 2. 2. 2.]\n",
      " [2. 2. 2. 2.]\n",
      " [2. 2. 2. 2.]]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import mindspore as ms\n",
    "from mindspore.nn import Cell\n",
    "import mindspore.ops as ops\n",
    "\n",
    "ms.set_context(mode=ms.GRAPH_MODE, device_target=\"GPU\")\n",
    "# save graph ir to view fusion detail.\n",
    "ms.set_context(save_graphs=2)\n",
    "# enable graph kernel optimization.\n",
    "ms.set_context(enable_graph_kernel=True)\n",
    "\n",
    "class MyNet(Cell):\n",
    "    def construct(self, x):\n",
    "        a = ops.mul(x, 2.0)\n",
    "        res = ops.add(a, 1.0)\n",
    "        return res\n",
    "\n",
    "x = np.ones((4, 4)).astype(np.float32) * 0.5\n",
    "net = MyNet()\n",
    "result = net(ms.Tensor(x))\n",
    "print(\"result: {}\".format(result))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5938d4ca",
   "metadata": {},
   "source": [
    "该计算图的融合结果如图1所示,其中左图为未使能图算融合时的对应计算图,右图为使能图算融合后的对应计算图。可以看到该网络中的加法和乘法被融合成一个算子。该融合过程可以通过查看中间IR,或者通过Profiling等工具跟踪算子执行过程进行验证。\n",
    "\n",
    "![基本算子融合示例](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/tutorials/experts/source_zh_cn/optimize/images/graph_kernel_example_fuse_basic.png)\n",
    "\n",
    "图1:*图算融合优化计算图*\n",
    "\n",
    "## 自定义组合算子\n",
    "\n",
    "基于图算融合技术,用户可以很方便地实现高性能的自定义组合算子。其主要流程为:  \n",
    "\n",
    "1. 在脚本中用基本算子组合的方式实现自定义算子定义和使用;\n",
    "2. 打开图算融合配置;\n",
    "3. 图算融合对自定义组合算子中的基本算子自动进行算子融合,并生成高性能融合算子。\n",
    "\n",
    "相比其它自定义算子方式,这种方式具有对框架无侵入、简单易用等优点。\n",
    "\n",
    "### 样例脚本\n",
    "\n",
    "我们构造一个简单网络`MyNet`,并在其中使用了自定义算子`MyOp`。代码样例如下:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "94ed8610",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "result: [[-0.015104 -0.015104 -0.015104 -0.015104]\n",
      " [-0.015104 -0.015104 -0.015104 -0.015104]\n",
      " [-0.015104 -0.015104 -0.015104 -0.015104]\n",
      " [-0.015104 -0.015104 -0.015104 -0.015104]]\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "import mindspore as ms\n",
    "from mindspore.nn import Cell\n",
    "import mindspore.ops as ops\n",
    "\n",
    "ms.set_context(mode=ms.GRAPH_MODE, device_target=\"GPU\")\n",
    "# enable graph kernel optimization.\n",
    "ms.set_context(enable_graph_kernel=True)\n",
    "\n",
    "class MyOp(Cell):\n",
    "    \"\"\" my first custom OP composited by basic OPs \"\"\"\n",
    "    def construct(self, x, y):\n",
    "        a = ops.sub(x, y)\n",
    "        return ops.mul(a, x)\n",
    "\n",
    "class MyNet(Cell):\n",
    "    def __init__(self):\n",
    "        super(MyNet, self).__init__()\n",
    "        self.my_op = MyOp()\n",
    "\n",
    "    def construct(self, x, y):\n",
    "        a = ops.mul(x, 2.0)\n",
    "        b = ops.pow(a, 3.0)\n",
    "        res = self.my_op(b, y)\n",
    "        return res\n",
    "\n",
    "x = np.ones((4, 4)).astype(np.float32) * 0.2\n",
    "y = np.ones((4, 4)).astype(np.float32) * 0.3\n",
    "net = MyNet()\n",
    "result = net(ms.Tensor(x), ms.Tensor(y))\n",
    "print(\"result: {}\".format(result))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d00e734d",
   "metadata": {},
   "source": [
    "该计算图的融合结果如图2所示,其中左图为未使能图算融合时的对应计算图,右图为使能图算融合后的对应计算图。可以看到不仅自定义算子`MyOp`中的基本算子进行了融合,并且与主图中的其他算子也进行了更大范围融合。该融合过程可以通过查看中间IR,或者通过Profiling等工具跟踪算子执行过程进行验证。\n",
    "\n",
    "![自定义组合算子融合示例](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r2.2/tutorials/experts/source_zh_cn/optimize/images/graph_kernel_example_custom_op.png)\n",
    "\n",
    "图2:*自定义组合算子优化计算图*\n",
    "\n",
    "## FAQs\n",
    "\n",
    "### Cuda头文件缺失\n",
    "\n",
    "Akg依赖cuda相关头文件用于生成cuda kernel,若自动搜索头文件失败(提示 **error: cuda_runtime.h: No such file or directory**),请尝试设置相关环境变量:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d00e734d",
   "metadata": {},
   "source": [
    "```text\n",
    "# Linux-X86_64系统示例\n",
    "\n",
    "export CPATH=/usr/local/cuda/targets/x86_64-linux/include:$CPATH\n",
    "export LD_LIBRARY_PATH=/usr/local/cuda/targets/x86_64-linux/lib:$LD_LIBRARY_PATH\n",
    "export PATH=/usr/local/cuda/bin:$PATH\n",
    "```"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "MindSpore",
   "language": "python",
   "name": "mindspore"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}