{ "cells": [ { "cell_type": "markdown", "id": "2ad04333", "metadata": {}, "source": [ "# 迁移脚本\n", "\n", "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/resource/_static/logo_notebook.png)](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/r1.8/zh_cn/migration_guide/mindspore_migration_script.ipynb) [![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/resource/_static/logo_download_code.png)](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/r1.8/zh_cn/migration_guide/mindspore_migration_script.py) [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/resource/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/r1.8/docs/mindspore/source_zh_cn/migration_guide/migration_script.ipynb)\n", "\n", "## 概述\n", "\n", "本文档主要介绍,怎样将网络脚本从TensorFlow或PyTorch框架迁移至MindSpore。\n", "\n", "## TensorFlow脚本迁移MindSpore\n", "\n", "通过读TensorBoard图,进行脚本迁移。\n", "\n", "1. 以TensorFlow实现的[PoseNet](https://arxiv.org/pdf/1505.07427v4.pdf)为例,演示如何利用TensorBoard读图,编写MindSpore代码,将[TensorFlow模型](https://github.com/kentsommer/tensorflow-posenet)迁移到MindSpore上。\n", "\n", " > 此处提到的PoseNet代码为基于Python2的代码,需要对Python3做一些语法更改才能在Python3上运行,具体修改内容不予赘述。\n", "\n", "2. 改写代码,利用`tf.summary`接口,保存TensorBoard需要的log,并启动TensorBoard。\n", "\n", "3. 打开的TensorBoard如图所示,图例仅供参考,可能因log生成方式的差异,TensorBoard展示的图也有所差异。\n", "\n", " ![PoseNet TensorBoard](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/docs/mindspore/source_zh_cn/migration_guide/images/pic1.png)\n", "\n", "4. 找到3个输入的Placeholder,通过看图并阅读代码得知,第二、第三个输入都只在计算loss时使用。\n", "\n", " ![PoseNet Placeholder](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/docs/mindspore/source_zh_cn/migration_guide/images/pic3.png)\n", "\n", " ![PoseNet Placeholder-1 Placeholder-2](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/docs/mindspore/source_zh_cn/migration_guide/images/pic2.png)\n", "\n", " ![PoseNet script input1 2 3](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/docs/mindspore/source_zh_cn/migration_guide/images/pic4.png)\n", "\n", " 至此,我们可以初步划分出,构造网络模型三步:\n", "\n", " 第一步,在网络的三个输入中,第一个输入将在backbone中计算出六个输出;\n", "\n", " 第二步,上一步结果与第二、第三个输入在loss子网中计算loss;\n", "\n", " 第三步,利用`TrainOneStepCell`自动微分构造反向网络;利用TensorFlow工程中提供的Adam优化器及属性,写出对应的MindSpore优化器来更新参数,网络脚本骨干可写作:" ] }, { "cell_type": "code", "execution_count": 1, "id": "c5237f5d", "metadata": { "ExecuteTime": { "end_time": "2022-05-31T07:38:11.783041Z", "start_time": "2022-05-31T07:38:09.174410Z" } }, "outputs": [], "source": [ "from mindspore import nn\n", "from mindspore.nn import TrainOneStepCell\n", "from mindspore.nn import Adam\n", "from mindspore.common.initializer import Normal\n", "\n", "# combine backbone and loss\n", "class PoseNetLossCell(nn.Cell):\n", " def __init__(self, backbone, loss):\n", " super(PoseNetLossCell, self).__init__()\n", " self.pose_net = backbone\n", " self.loss = loss\n", " def construct(self, input_1, input_2, input_3):\n", " p1_x, p1_q, p2_x, p2_q, p3_x, p3_q = self.poss_net(input_1)\n", " loss = self.loss(p1_x, p1_q, p2_x, p2_q, p3_x, p3_q, input_2, input_3)\n", " return loss\n", "\n", "# define backbone\n", "class PoseNet(nn.Cell):\n", " def __init__(self):\n", " super(PoseNet, self).__init__()\n", " self.fc = nn.Dense(1, 6, Normal(0.02), Normal(0.02))\n", "\n", " def construct(self, input_1):\n", " \"\"\"do something with 
{ "cell_type": "markdown", "id": "15f3df33", "metadata": {}, "source": [
"5. Next, let's implement the computation logic of the backbone in detail.\n",
"\n",
"The first input first passes through a subgraph named conv1. Reading the graph shows its computation logic:\n",
"\n",
"![PoseNet conv1 subgraph](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/docs/mindspore/source_zh_cn/migration_guide/images/pic5.png)\n",
"\n",
"Input -> Conv2D -> BiasAdd -> ReLU. Although the operator following BiasAdd appears in the graph under the name conv1, what it actually executes is ReLU.\n",
"\n",
"![PoseNet Conv1 conv1 relu](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/docs/mindspore/source_zh_cn/migration_guide/images/pic6.png)\n",
"\n",
"The first subgraph, conv1, can therefore be defined as follows, with the concrete parameters aligned with those of the original project:\n",
"\n",
"```python\n",
"class Conv1(nn.Cell):\n",
"    def __init__(self):\n",
"        super(Conv1, self).__init__()\n",
"        self.conv = Conv2d()  # fill in the parameters from the original project\n",
"        self.relu = ReLU()\n",
"    def construct(self, x):\n",
"        x = self.conv(x)\n",
"        x = self.relu(x)\n",
"        return x\n",
"```\n",
"\n",
"Looking at the TensorBoard graph and the code together, it is easy to see that the conv-type subnet defined in the original TensorFlow project can be rewritten as a single reusable MindSpore subnet, reducing duplicated code.\n",
"\n",
"The conv subnet definition in the TensorFlow project:\n",
"\n",
"```python\n",
"def conv(self, input, k_h, k_w, c_o, s_h, s_w, name, relu=True, padding=DEFAULT_PADDING, group=1, biased=True):\n",
"    # Verify that the padding is acceptable\n",
"    self.validate_padding(padding)\n",
"    # Get the number of channels in the input\n",
"    c_i = input.get_shape()[-1]\n",
"    # Verify that the grouping parameter is valid\n",
"    assert c_i % group == 0\n",
"    assert c_o % group == 0\n",
"    # Convolution for a given input and kernel\n",
"    convolve = lambda i, k: tf.nn.conv2d(i, k, [1, s_h, s_w, 1], padding=padding)\n",
"    with tf.variable_scope(name) as scope:\n",
"        kernel = self.make_var('weights', shape=[k_h, k_w, c_i / group, c_o])\n",
"        if group == 1:\n",
"            # This is the common-case. Convolve the input without any further complications.\n",
"            output = convolve(input, kernel)\n",
"        else:\n",
"            # Split the input into groups and then convolve each of them independently\n",
"            input_groups = tf.split(3, group, input)\n",
"            kernel_groups = tf.split(3, group, kernel)\n",
"            output_groups = [convolve(i, k) for i, k in zip(input_groups, kernel_groups)]\n",
"            # Concatenate the groups\n",
"            output = tf.concat(3, output_groups)\n",
"        # Add the biases\n",
"        if biased:\n",
"            biases = self.make_var('biases', [c_o])\n",
"            output = tf.nn.bias_add(output, biases)\n",
"        if relu:\n",
"            # ReLU non-linearity\n",
"            output = tf.nn.relu(output, name=scope.name)\n",
"        return output\n",
"```"
] }, { "cell_type": "markdown", "id": "f93e70c5", "metadata": {}, "source": [ "The corresponding MindSpore subnet is then defined as follows:" ] }, { "cell_type": "code", "execution_count": 2, "id": "7e58b680", "metadata": { "ExecuteTime": { "end_time": "2022-05-31T07:38:11.789720Z", "start_time": "2022-05-31T07:38:11.785071Z" } }, "outputs": [], "source": [
"from mindspore import nn\n",
"from mindspore.nn import Conv2d, ReLU\n",
"class ConvReLU(nn.Cell):\n",
"    def __init__(self, channel_in, kernel_size, channel_out, strides):\n",
"        super(ConvReLU, self).__init__()\n",
"        self.conv = Conv2d(channel_in, channel_out, kernel_size, strides, has_bias=True)\n",
"        self.relu = ReLU()\n",
"\n",
"    def construct(self, x):\n",
"        x = self.conv(x)\n",
"        x = self.relu(x)\n",
"        return x" ] },
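{ "cell_type": "markdown", "id": "b2c3d4e5", "metadata": {}, "source": [
"To check that the rewritten subnet behaves as expected, here is a minimal sanity check; the input shape is an illustrative assumption, not taken from the original project:\n",
"\n",
"```python\n",
"import numpy as np\n",
"import mindspore as ms\n",
"\n",
"conv_relu = ConvReLU(channel_in=3, kernel_size=7, channel_out=64, strides=2)\n",
"x = ms.Tensor(np.random.rand(1, 3, 224, 224), ms.float32)  # dummy NCHW input\n",
"y = conv_relu(x)\n",
"print(y.shape)  # (1, 64, 112, 112) under nn.Conv2d's default 'same' pad mode\n",
"```"
] },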
{ "cell_type": "markdown", "id": "8f4e234d", "metadata": {}, "source": [ "Then, following the data flow and operator attributes shown in TensorBoard, the backbone computation logic can be written as follows:" ] }, { "cell_type": "markdown", "id": "f5af3e76", "metadata": { "ExecuteTime": { "end_time": "2022-05-31T03:14:40.689648Z", "start_time": "2022-05-31T03:14:40.670941Z" } }, "source": [
"```python\n",
"from mindspore.nn import MaxPool2d\n",
"import mindspore.ops as ops\n",
"\n",
"\n",
"class LRN(nn.Cell):\n",
"    def __init__(self, radius, alpha, beta, bias=1.0):\n",
"        super(LRN, self).__init__()\n",
"        self.lrn = ops.LRN(radius, bias, alpha, beta)\n",
"    def construct(self, x):\n",
"        return self.lrn(x)\n",
"\n",
"\n",
"class PoseNet(nn.Cell):\n",
"    def __init__(self):\n",
"        super(PoseNet, self).__init__()\n",
"        self.conv1 = ConvReLU(3, 7, 64, 2)\n",
"        self.pool1 = MaxPool2d(3, 2, pad_mode=\"SAME\")\n",
"        self.norm1 = LRN(2, 2e-05, 0.75)\n",
"        self.reduction2 = ConvReLU(64, 1, 64, 1)\n",
"        self.conv2 = ConvReLU(64, 3, 192, 1)\n",
"        self.norm2 = LRN(2, 2e-05, 0.75)\n",
"        self.pool2 = MaxPool2d(3, 2, pad_mode=\"SAME\")\n",
"        self.icp1_reduction1 = ConvReLU(192, 1, 96, 1)\n",
"        self.icp1_out1 = ConvReLU(96, 3, 128, 1)\n",
"        self.icp1_reduction2 = ConvReLU(192, 1, 16, 1)\n",
"        self.icp1_out2 = ConvReLU(16, 5, 32, 1)\n",
"        self.icp1_pool = MaxPool2d(3, 1, pad_mode=\"SAME\")\n",
"        self.icp1_out3 = ConvReLU(192, 5, 32, 1)\n",
"        self.icp1_out0 = ConvReLU(192, 1, 64, 1)\n",
"        self.concat = ops.Concat(axis=1)\n",
"        self.icp2_reduction1 = ConvReLU(256, 1, 128, 1)\n",
"        self.icp2_out1 = ConvReLU(128, 3, 192, 1)\n",
"        self.icp2_reduction2 = ConvReLU(256, 1, 32, 1)\n",
"        self.icp2_out2 = ConvReLU(32, 5, 96, 1)\n",
"        self.icp2_pool = MaxPool2d(3, 1, pad_mode=\"SAME\")\n",
"        self.icp2_out3 = ConvReLU(256, 1, 64, 1)\n",
"        self.icp2_out0 = ConvReLU(256, 1, 128, 1)\n",
"        self.icp3_in = MaxPool2d(3, 2, pad_mode=\"SAME\")\n",
"        self.icp3_reduction1 = ConvReLU(480, 1, 96, 1)\n",
"        self.icp3_out1 = ConvReLU(96, 3, 208, 1)\n",
"        self.icp3_reduction2 = ConvReLU(480, 1, 16, 1)\n",
"        self.icp3_out2 = ConvReLU(16, 5, 48, 1)\n",
"        self.icp3_pool = MaxPool2d(3, 1, pad_mode=\"SAME\")\n",
"        self.icp3_out3 = ConvReLU(480, 1, 64, 1)\n",
"        self.icp3_out0 = ConvReLU(480, 1, 192, 1)\n",
"        \"\"\"etc\"\"\"\n",
"        \"\"\"...\"\"\"\n",
"\n",
"    def construct(self, input_1):\n",
"        \"\"\"do something with input_1, output num 6\"\"\"\n",
"        x = self.conv1(input_1)\n",
"        x = self.pool1(x)\n",
"        x = self.norm1(x)\n",
"        x = self.reduction2(x)\n",
"        x = self.conv2(x)\n",
"        x = self.norm2(x)\n",
"        x = self.pool2(x)\n",
"        pool2 = x\n",
"        x = self.icp1_reduction1(x)\n",
"        x = self.icp1_out1(x)\n",
"        icp1_out1 = x\n",
"\n",
"        icp1_reduction2 = self.icp1_reduction2(pool2)\n",
"        icp1_out2 = self.icp1_out2(icp1_reduction2)\n",
"\n",
"        icp1_pool = self.icp1_pool(pool2)\n",
"        icp1_out3 = self.icp1_out3(icp1_pool)\n",
"\n",
"        icp1_out0 = self.icp1_out0(pool2)\n",
"\n",
"        icp2_in = self.concat((icp1_out0, icp1_out1, icp1_out2, icp1_out3))\n",
"        \"\"\"etc\"\"\"\n",
"        \"\"\"...\"\"\"\n",
"\n",
"        return p1_x, p1_q, p2_x, p2_q, p3_x, p3_q\n",
"```"
] },
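{ "cell_type": "markdown", "id": "c3d4e5f6", "metadata": {}, "source": [
"One detail worth noting: the original TensorFlow project concatenates branch outputs with `tf.concat(3, ...)` because TensorFlow convolutions default to the NHWC layout, whereas MindSpore's `nn.Conv2d` works on NCHW, so the backbone above uses `ops.Concat(axis=1)` for the channel dimension. A minimal illustration with dummy shapes:\n",
"\n",
"```python\n",
"import numpy as np\n",
"import mindspore as ms\n",
"import mindspore.ops as ops\n",
"\n",
"concat = ops.Concat(axis=1)  # axis 1 is the channel axis in NCHW\n",
"a = ms.Tensor(np.zeros((1, 64, 28, 28)), ms.float32)\n",
"b = ms.Tensor(np.zeros((1, 128, 28, 28)), ms.float32)\n",
"print(concat((a, b)).shape)  # (1, 192, 28, 28)\n",
"```"
] },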
{ "cell_type": "markdown", "id": "c3b38158", "metadata": {}, "source": [ "Correspondingly, the loss computation logic can be written as follows:" ] }, { "cell_type": "code", "execution_count": 3, "id": "e16d433b", "metadata": { "ExecuteTime": { "end_time": "2022-05-31T07:38:11.804523Z", "start_time": "2022-05-31T07:38:11.793817Z" } }, "outputs": [], "source": [
"from mindspore import ops\n",
"\n",
"class PoseNetLoss(nn.Cell):\n",
"    def __init__(self):\n",
"        super(PoseNetLoss, self).__init__()\n",
"        self.sub = ops.Sub()\n",
"        self.square = ops.Square()\n",
"        self.reduce_sum = ops.ReduceSum()\n",
"        self.sqrt = ops.Sqrt()\n",
"\n",
"    def construct(self, p1_x, p1_q, p2_x, p2_q, p3_x, p3_q, poses_x, poses_q):\n",
"        \"\"\"do something to calc loss\"\"\"\n",
"        l1_x = self.sqrt(self.reduce_sum(self.square(self.sub(p1_x, poses_x)))) * 0.3\n",
"        l1_q = self.sqrt(self.reduce_sum(self.square(self.sub(p1_q, poses_q)))) * 150\n",
"        l2_x = self.sqrt(self.reduce_sum(self.square(self.sub(p2_x, poses_x)))) * 0.3\n",
"        l2_q = self.sqrt(self.reduce_sum(self.square(self.sub(p2_q, poses_q)))) * 150\n",
"        l3_x = self.sqrt(self.reduce_sum(self.square(self.sub(p3_x, poses_x)))) * 1\n",
"        l3_q = self.sqrt(self.reduce_sum(self.square(self.sub(p3_q, poses_q)))) * 500\n",
"        return l1_x + l1_q + l2_x + l2_q + l3_x + l3_q" ] },
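{ "cell_type": "markdown", "id": "d4e5f6a7", "metadata": {}, "source": [
"Note that `ops.ReduceSum()` called without an axis argument reduces over all dimensions, so each of the six weighted terms above is a scalar, mirroring the scalar loss terms of the TensorFlow project. A quick illustration:\n",
"\n",
"```python\n",
"import numpy as np\n",
"import mindspore as ms\n",
"import mindspore.ops as ops\n",
"\n",
"reduce_sum = ops.ReduceSum()  # default: reduce over all axes\n",
"t = ms.Tensor(np.ones((2, 3)), ms.float32)\n",
"print(reduce_sum(t))  # 6.0, a scalar, like each loss term above\n",
"```"
] },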
{ "cell_type": "markdown", "id": "915a28ac", "metadata": {}, "source": [
"Finally, your training script should look something like this:\n",
"\n",
"```python\n",
"import mindspore as ms\n",
"from mindspore import dataset as ds\n",
"import numpy as np\n",
"\n",
"if __name__ == \"__main__\":\n",
"    epoch_size = 5\n",
"    backbone = PoseNet()\n",
"    loss = PoseNetLoss()\n",
"    net_with_loss = PoseNetLossCell(backbone, loss)\n",
"    opt = Adam(net_with_loss.trainable_params(), learning_rate=0.001, beta1=0.9, beta2=0.999, eps=1e-08, use_locking=False)\n",
"    net_with_grad = TrainOneStepCell(net_with_loss, opt)\n",
"    \"\"\"dataset define\"\"\"\n",
"    model = ms.Model(net_with_grad)\n",
"    model.train(epoch_size, dataset)\n",
"```"
] }, { "cell_type": "markdown", "id": "8eae7c27", "metadata": {}, "source": [
"This essentially completes the migration of the model script from TensorFlow to MindSpore. What remains is to tune the accuracy using MindSpore's rich set of tools and computation strategies, which is not covered in detail here.\n",
"\n",
"## Migrating PyTorch Scripts to MindSpore\n",
"\n",
"The migration is done directly by reading the PyTorch script.\n",
"\n",
"1. A PyTorch subnet module usually inherits from `torch.nn.Module`, while its MindSpore counterpart usually inherits from `mindspore.nn.Cell`; the forward computation logic of a PyTorch subnet is implemented by overriding the forward method, whereas in MindSpore it is implemented by overriding the construct method.\n",
"\n",
"2. Take the migration of the common Bottleneck class to MindSpore as an example.\n",
"\n",
"PyTorch project code:\n",
"\n",
"```python\n",
"# defined in PyTorch\n",
"class Bottleneck(nn.Module):\n",
"    def __init__(self, inplanes, planes, stride=1, mode='NORM', k=1, dilation=1):\n",
"        super(Bottleneck, self).__init__()\n",
"        self.mode = mode\n",
"        self.relu = nn.ReLU(inplace=True)\n",
"        self.k = k\n",
"\n",
"        btnk_ch = planes // 4\n",
"        self.bn1 = nn.BatchNorm2d(inplanes)\n",
"        self.conv1 = nn.Conv2d(inplanes, btnk_ch, kernel_size=1, bias=False)\n",
"\n",
"        self.bn2 = nn.BatchNorm2d(btnk_ch)\n",
"        self.conv2 = nn.Conv2d(btnk_ch, btnk_ch, kernel_size=3, stride=stride, padding=dilation,\n",
"                               dilation=dilation, bias=False)\n",
"\n",
"        self.bn3 = nn.BatchNorm2d(btnk_ch)\n",
"        self.conv3 = nn.Conv2d(btnk_ch, planes, kernel_size=1, bias=False)\n",
"\n",
"        if mode == 'UP':\n",
"            self.shortcut = None\n",
"        elif inplanes != planes or stride > 1:\n",
"            self.shortcut = nn.Sequential(\n",
"                nn.BatchNorm2d(inplanes),\n",
"                self.relu,\n",
"                nn.Conv2d(inplanes, planes, kernel_size=1, stride=stride, bias=False)\n",
"            )\n",
"        else:\n",
"            self.shortcut = None\n",
"\n",
"    def _pre_act_forward(self, x):\n",
"        residual = x\n",
"\n",
"        out = self.bn1(x)\n",
"        out = self.relu(out)\n",
"        out = self.conv1(out)\n",
"\n",
"        out = self.bn2(out)\n",
"        out = self.relu(out)\n",
"        out = self.conv2(out)\n",
"\n",
"        out = self.bn3(out)\n",
"        out = self.relu(out)\n",
"        out = self.conv3(out)\n",
"\n",
"        if self.mode == 'UP':\n",
"            residual = self.squeeze_idt(x)\n",
"        elif self.shortcut is not None:\n",
"            residual = self.shortcut(residual)\n",
"\n",
"        out += residual\n",
"\n",
"        return out\n",
"\n",
"    def squeeze_idt(self, idt):\n",
"        n, c, h, w = idt.size()\n",
"        return idt.view(n, c // self.k, self.k, h, w).sum(2)\n",
"\n",
"    def forward(self, x):\n",
"        out = self._pre_act_forward(x)\n",
"        return out\n",
"```\n",
"\n",
"Given the differences between how PyTorch and MindSpore define convolution parameters, it can be translated into the following definition (the key argument correspondences are summarized after the code):\n",
"\n",
"```python\n",
"from mindspore import nn\n",
"import mindspore.ops as ops\n",
"\n",
"# defined in MindSpore\n",
"class Bottleneck(nn.Cell):\n",
"    def __init__(self, inplanes, planes, stride=1, mode='NORM', k=1, dilation=1):\n",
"        super(Bottleneck, self).__init__()\n",
"        self.mode = mode\n",
"        self.relu = nn.ReLU()\n",
"        self.k = k\n",
"\n",
"        btnk_ch = planes // 4\n",
"        self.bn1 = nn.BatchNorm2d(num_features=inplanes, momentum=0.9)\n",
"        self.conv1 = nn.Conv2d(in_channels=inplanes, out_channels=btnk_ch, kernel_size=1, pad_mode='pad', has_bias=False)\n",
"\n",
"        self.bn2 = nn.BatchNorm2d(num_features=btnk_ch, momentum=0.9)\n",
"        self.conv2 = nn.Conv2d(in_channels=btnk_ch, out_channels=btnk_ch, kernel_size=3, stride=stride, pad_mode='pad', padding=dilation, dilation=dilation, has_bias=False)\n",
"\n",
"        self.bn3 = nn.BatchNorm2d(num_features=btnk_ch, momentum=0.9)\n",
"        self.conv3 = nn.Conv2d(in_channels=btnk_ch, out_channels=planes, kernel_size=1, pad_mode='pad', has_bias=False)\n",
"\n",
"        self.shape = ops.Shape()\n",
"        self.reshape = ops.Reshape()\n",
"        self.reduce_sum = ops.ReduceSum()\n",
"\n",
"        if mode == 'UP':\n",
"            self.shortcut = None\n",
"        elif inplanes != planes or stride > 1:\n",
"            self.shortcut = nn.SequentialCell([\n",
"                nn.BatchNorm2d(num_features=inplanes, momentum=0.9),\n",
"                nn.ReLU(),\n",
"                nn.Conv2d(in_channels=inplanes, out_channels=planes, kernel_size=1, stride=stride, pad_mode='pad', has_bias=False)])\n",
"        else:\n",
"            self.shortcut = None\n",
"\n",
"    def _pre_act_forward(self, x):\n",
"        residual = x\n",
"\n",
"        out = self.bn1(x)\n",
"        out = self.relu(out)\n",
"        out = self.conv1(out)\n",
"\n",
"        out = self.bn2(out)\n",
"        out = self.relu(out)\n",
"        out = self.conv2(out)\n",
"\n",
"        out = self.bn3(out)\n",
"        out = self.relu(out)\n",
"        out = self.conv3(out)\n",
"\n",
"        if self.mode == 'UP':\n",
"            residual = self.squeeze_idt(x)\n",
"        elif self.shortcut is not None:\n",
"            residual = self.shortcut(residual)\n",
"\n",
"        out += residual\n",
"        return out\n",
"\n",
"    def squeeze_idt(self, idt):\n",
"        # equivalent of idt.view(n, c // k, k, h, w).sum(2) in PyTorch\n",
"        n, c, h, w = self.shape(idt)\n",
"        out = self.reshape(idt, (n, c // self.k, self.k, h, w))\n",
"        return self.reduce_sum(out, 2)\n",
"\n",
"    def construct(self, x):\n",
"        out = self._pre_act_forward(x)\n",
"        return out\n",
"```\n",
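"\n",
"These are the main argument correspondences assumed in the translation above (a sketch, not an exhaustive mapping):\n",
"\n",
"```python\n",
"# PyTorch: zero-padding via `padding`; convolutions carry a bias unless bias=False\n",
"# nn.Conv2d(inplanes, btnk_ch, kernel_size=3, stride=stride, padding=dilation,\n",
"#           dilation=dilation, bias=False)\n",
"\n",
"# MindSpore: explicit padding requires pad_mode='pad'; bias is off by default\n",
"# nn.Conv2d(in_channels=inplanes, out_channels=btnk_ch, kernel_size=3, stride=stride,\n",
"#           pad_mode='pad', padding=dilation, dilation=dilation, has_bias=False)\n",
"\n",
"# BatchNorm momentum conventions are complementary:\n",
"# torch.nn.BatchNorm2d(num_features, momentum=0.1)  <=>  nn.BatchNorm2d(num_features, momentum=0.9)\n",
"```\n",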
"\n",
"3. Backpropagation in PyTorch is usually triggered with `loss.backward()`, and parameter updates with `optimizer.step()`. In MindSpore, these do not need to be invoked explicitly by the user; backpropagation and gradient updates can be delegated to the `TrainOneStepCell` class. Finally, the training script should be structured as follows:\n",
"\n",
"```python\n",
"# define dataset\n",
"dataset = ...\n",
"\n",
"# define backbone and loss\n",
"backbone = Net()\n",
"loss = NetLoss()\n",
"\n",
"# combine backbone and loss\n",
"net_with_loss = WithLossCell(backbone, loss)\n",
"\n",
"# define optimizer\n",
"opt = ...\n",
"\n",
"# combine forward and backward\n",
"net_with_grad = TrainOneStepCell(net_with_loss, opt)\n",
"\n",
"# define model and train\n",
"model = ms.Model(net_with_grad)\n",
"model.train(epoch_size, dataset)\n",
"```\n",
"\n",
"PyTorch and MindSpore are fairly similar in how they define some basic APIs, for example [mindspore.nn.SequentialCell](https://www.mindspore.cn/docs/zh-CN/r1.8/api_python/nn/mindspore.nn.SequentialCell.html#mindspore.nn.SequentialCell) and [torch.nn.Sequential](https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html#torch.nn.Sequential). Some operator APIs, however, are not quite the same; a few common correspondences are listed here, and more can be found in the [MindSpore-PyTorch API mapping table](https://www.mindspore.cn/docs/zh-CN/r1.8/note/api_mapping/pytorch_api_mapping.html) on the MindSpore website.\n",
"\n",
"| PyTorch | MindSpore |\n",
"| :-------------------------------: | :------------------------------------------------: |\n",
"| tensor.view() | mindspore.ops.operations.Reshape()(tensor) |\n",
"| tensor.size() | mindspore.ops.operations.Shape()(tensor) |\n",
"| tensor.sum(axis) | mindspore.ops.operations.ReduceSum()(tensor, axis) |\n",
"| torch.nn.Upsample[mode: nearest] | mindspore.ops.operations.ResizeNearestNeighbor |\n",
"| torch.nn.Upsample[mode: bilinear] | mindspore.ops.operations.ResizeBilinear |\n",
"| torch.nn.Linear | mindspore.nn.Dense |\n",
"| torch.nn.PixelShuffle | mindspore.ops.operations.DepthToSpace |\n",
"\n",
"It is worth noting that although `torch.nn.MaxPool2d` and `mindspore.nn.MaxPool2d` are similar in their interface definitions, during training on Ascend MindSpore actually invokes the `MaxPoolWithArgMax` operator, which is functionally identical to the TensorFlow operator of the same name. It is therefore normal for the MindSpore output after a MaxPool layer to differ from PyTorch's during migration; in theory, this does not affect the final training result."
] } ], "metadata": { "kernelspec": { "display_name": "MindSpore", "language": "python", "name": "mindspore" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.7" } }, "nbformat": 4, "nbformat_minor": 5 }