{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 使用贝叶斯神经网络实现图片分类应用\n", "\n", "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.7/resource/_static/logo_notebook.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r1.7/probability/zh_cn/mindspore_using_bnn.ipynb) [![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.7/resource/_static/logo_download_code.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r1.7/probability/zh_cn/mindspore_using_bnn.py) [![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.7/resource/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/r1.7/docs/probability/docs/source_zh_cn/using_bnn.ipynb)\n", "\n", "深度学习模型具有强大的拟合能力,而贝叶斯理论具有很好的可解释能力。MindSpore深度概率编程(MindSpore Probability)将深度学习和贝叶斯学习结合,通过设置网络权重为分布、引入隐空间分布等,可以对分布进行采样前向传播,由此引入了不确定性,从而增强了模型的鲁棒性和可解释性。\n", "\n", "本章将详细介绍深度概率编程中的贝叶斯神经网络在MindSpore上的应用。在动手进行实践之前,确保,你已经正确安装了MindSpore 0.7.0-beta及其以上版本。\n", "\n", "> 本例面向GPU或Ascend 910 AI处理器平台,你可以在这里下载完整的样例代码:。\n", ">\n", "> 贝叶斯神经网络目前只支持图模式,需要在代码中设置`context.set_context(mode=context.GRAPH_MODE)`。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 使用贝叶斯神经网络\n", "\n", "贝叶斯神经网络是由概率模型和神经网络组成的基本模型,它的权重不再是一个确定的值,而是一个分布。本例介绍了如何使用MDP中的`bnn_layers`模块实现贝叶斯神经网络,并利用贝叶斯神经网络实现一个简单的图片分类功能,整体流程如下:\n", "\n", "1. 处理MNIST数据集;\n", "\n", "2. 定义贝叶斯LeNet网络;\n", "\n", "3. 定义损失函数和优化器;\n", "\n", "4. 加载数据集并进行训练。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 环境准备\n", "\n", "设置训练模式为图模式,计算平台为GPU。" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "from mindspore import context\n", "\n", "context.set_context(mode=context.GRAPH_MODE, save_graphs=False, device_target=\"GPU\")" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 数据准备\n", "\n", "### 下载数据集\n", "\n", "以下示例代码将MNIST数据集下载并解压到指定位置。" ] }, { "cell_type": "code", "execution_count": null, "metadata": {}, "outputs": [], "source": [ "import os\n", "import requests\n", "\n", "requests.packages.urllib3.disable_warnings()\n", "\n", "def download_dataset(dataset_url, path):\n", " filename = dataset_url.split(\"/\")[-1]\n", " save_path = os.path.join(path, filename)\n", " if os.path.exists(save_path):\n", " return\n", " if not os.path.exists(path):\n", " os.makedirs(path)\n", " res = requests.get(dataset_url, stream=True, verify=False)\n", " with open(save_path, \"wb\") as f:\n", " for chunk in res.iter_content(chunk_size=512):\n", " if chunk:\n", " f.write(chunk)\n", " print(\"The {} file is downloaded and saved in the path {} after processing\".format(os.path.basename(dataset_url), path))\n", "\n", "train_path = \"datasets/MNIST_Data/train\"\n", "test_path = \"datasets/MNIST_Data/test\"\n", "\n", "download_dataset(\"https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-labels-idx1-ubyte\", train_path)\n", "download_dataset(\"https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/train-images-idx3-ubyte\", train_path)\n", "download_dataset(\"https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-labels-idx1-ubyte\", test_path)\n", "download_dataset(\"https://mindspore-website.obs.myhuaweicloud.com/notebook/datasets/mnist/t10k-images-idx3-ubyte\", test_path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "下载的数据集文件的目录结构如下:\n", "\n", "```text\n", "./datasets/MNIST_Data\n", "├── test\n", "│ ├── t10k-images-idx3-ubyte\n", "│ └── t10k-labels-idx1-ubyte\n", "└── train\n", " ├── train-images-idx3-ubyte\n", " └── train-labels-idx1-ubyte\n", "```" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### 定义数据集增强方法\n", "\n", "MNIST数据集的原始训练数据集是60000张$28\\times28$像素的单通道数字图片,本次训练用到的含贝叶斯层的LeNet5网络接收到训练数据的张量为`(32,1,32,32)`,通过自定义create_dataset函数将原始数据集增强为适应训练要求的数据,具体的增强操作解释可参考[初学入门](https://www.mindspore.cn/tutorials/zh-CN/r1.7/beginner/quick_start.html)。" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "import mindspore.dataset.vision.c_transforms as CV\n", "import mindspore.dataset.transforms.c_transforms as C\n", "from mindspore.dataset.vision import Inter\n", "from mindspore import dataset as ds\n", "\n", "def create_dataset(data_path, batch_size=32, repeat_size=1,\n", " num_parallel_workers=1):\n", " # define dataset\n", " mnist_ds = ds.MnistDataset(data_path)\n", "\n", " # define some parameters needed for data enhancement and rough justification\n", " resize_height, resize_width = 32, 32\n", " rescale = 1.0 / 255.0\n", " shift = 0.0\n", " rescale_nml = 1 / 0.3081\n", " shift_nml = -1 * 0.1307 / 0.3081\n", "\n", " # according to the parameters, generate the corresponding data enhancement method\n", " c_trans = [\n", " CV.Resize((resize_height, resize_width), interpolation=Inter.LINEAR),\n", " CV.Rescale(rescale_nml, shift_nml),\n", " CV.Rescale(rescale, shift),\n", " CV.HWC2CHW()\n", " ]\n", " type_cast_op = C.TypeCast(mstype.int32)\n", "\n", " # using map to apply operations to a dataset\n", " mnist_ds = mnist_ds.map(operations=type_cast_op, input_columns=\"label\", num_parallel_workers=num_parallel_workers)\n", " mnist_ds = mnist_ds.map(operations=c_trans, input_columns=\"image\", num_parallel_workers=num_parallel_workers)\n", "\n", " # process the generated dataset\n", " buffer_size = 10000\n", " mnist_ds = mnist_ds.shuffle(buffer_size=buffer_size)\n", " mnist_ds = mnist_ds.batch(batch_size, drop_remainder=True)\n", " mnist_ds = mnist_ds.repeat(repeat_size)\n", "\n", " return mnist_ds" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 定义贝叶斯神经网络\n", "\n", "在经典LeNet5网络中,数据经过如下计算过程:卷积1->激活->池化->卷积2->激活->池化->降维->全连接1->全连接2->全连接3。 \n", "本例中将引入概率编程方法,利用`bnn_layers`模块将卷层和全连接层改造成贝叶斯层" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "conv1.weight_posterior.mean\n", "conv1.weight_posterior.untransformed_std\n", "conv2.weight_posterior.mean\n", "conv2.weight_posterior.untransformed_std\n", "fc1.weight_posterior.mean\n", "fc1.weight_posterior.untransformed_std\n", "fc1.bias_posterior.mean\n", "fc1.bias_posterior.untransformed_std\n", "fc2.weight_posterior.mean\n", "fc2.weight_posterior.untransformed_std\n", "fc2.bias_posterior.mean\n", "fc2.bias_posterior.untransformed_std\n", "fc3.weight_posterior.mean\n", "fc3.weight_posterior.untransformed_std\n", "fc3.bias_posterior.mean\n", "fc3.bias_posterior.untransformed_std\n" ] } ], "source": [ "import mindspore.nn as nn\n", "from mindspore.nn.probability import bnn_layers\n", "import mindspore.ops as ops\n", "from mindspore import dtype as mstype\n", "\n", "\n", "class BNNLeNet5(nn.Cell):\n", " def __init__(self, num_class=10):\n", " super(BNNLeNet5, self).__init__()\n", " self.num_class = num_class\n", " self.conv1 = bnn_layers.ConvReparam(1, 6, 5, stride=1, padding=0, has_bias=False, pad_mode=\"valid\")\n", " self.conv2 = bnn_layers.ConvReparam(6, 16, 5, stride=1, padding=0, has_bias=False, pad_mode=\"valid\")\n", " self.fc1 = bnn_layers.DenseReparam(16 * 5 * 5, 120)\n", " self.fc2 = bnn_layers.DenseReparam(120, 84)\n", " self.fc3 = bnn_layers.DenseReparam(84, self.num_class)\n", " self.relu = nn.ReLU()\n", " self.max_pool2d = nn.MaxPool2d(kernel_size=2, stride=2)\n", " self.flatten = nn.Flatten()\n", "\n", " def construct(self, x):\n", " x = self.max_pool2d(self.relu(self.conv1(x)))\n", " x = self.max_pool2d(self.relu(self.conv2(x)))\n", " x = self.flatten(x)\n", " x = self.relu(self.fc1(x))\n", " x = self.relu(self.fc2(x))\n", " x = self.fc3(x)\n", " return x\n", "\n", "network = BNNLeNet5(num_class=10)\n", "for layer in network.trainable_params():\n", " print(layer.name)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "打印信息表明,使用`bnn_layers`模块构建的LeNet网络,其卷积层和全连接层均为贝叶斯层。" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 定义损失函数和优化器\n", "\n", "接下来需要定义损失函数(Loss)和优化器(Optimizer)。损失函数是深度学习的训练目标,也叫目标函数,可以理解为神经网络的输出(Logits)和标签(Labels)之间的距离,是一个标量数据。\n", "\n", "常见的损失函数包括均方误差、L2损失、Hinge损失、交叉熵等等。图像分类应用通常采用交叉熵损失(CrossEntropy)。\n", "\n", "优化器用于神经网络求解(训练)。由于神经网络参数规模庞大,无法直接求解,因而深度学习中采用随机梯度下降算法(SGD)及其改进算法进行求解。MindSpore封装了常见的优化器,如`SGD`、`Adam`、`Momemtum`等等。本例采用`Adam`优化器,通常需要设定两个参数,学习率(`learning_rate`)和权重衰减项(`weight_decay`)。\n", "\n", "MindSpore中定义损失函数和优化器的代码样例如下:" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, "outputs": [], "source": [ "import mindspore.nn as nn\n", "\n", "# loss function definition\n", "criterion = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction=\"mean\")\n", "\n", "# optimization definition\n", "optimizer = nn.AdamWeightDecay(params=network.trainable_params(), learning_rate=0.0001)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 训练网络\n", "\n", "贝叶斯神经网络的训练过程与DNN基本相同,唯一不同的是将`WithLossCell`替换为适用于BNN的`WithBNNLossCell`。除了`backbone`和`loss_fn`两个参数之外,`WithBNNLossCell`增加了`dnn_factor`和`bnn_factor`两个参数。这两个参数是用来平衡网络整体损失和贝叶斯层的KL散度的,防止KL散度的值过大掩盖了网络整体损失。\n", "\n", "- `dnn_factor`是由损失函数计算得到的网络整体损失的系数。\n", "- `bnn_factor`是每个贝叶斯层的KL散度的系数。\n", "\n", "构建模型训练函数`train_model`和模型验证函数`validate_model`。" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [], "source": [ "def train_model(train_net, net, dataset):\n", " accs = []\n", " loss_sum = 0\n", " for _, data in enumerate(dataset.create_dict_iterator()):\n", " train_x = Tensor(data['image'].asnumpy().astype(np.float32))\n", " label = Tensor(data['label'].asnumpy().astype(np.int32))\n", " loss = train_net(train_x, label)\n", " output = net(train_x)\n", " log_output = ops.LogSoftmax(axis=1)(output)\n", " acc = np.mean(log_output.asnumpy().argmax(axis=1) == label.asnumpy())\n", " accs.append(acc)\n", " loss_sum += loss.asnumpy()\n", "\n", " loss_sum = loss_sum / len(accs)\n", " acc_mean = np.mean(accs)\n", " return loss_sum, acc_mean\n", "\n", "\n", "def validate_model(net, dataset):\n", " accs = []\n", " for _, data in enumerate(dataset.create_dict_iterator()):\n", " train_x = Tensor(data['image'].asnumpy().astype(np.float32))\n", " label = Tensor(data['label'].asnumpy().astype(np.int32))\n", " output = net(train_x)\n", " log_output = ops.LogSoftmax(axis=1)(output)\n", " acc = np.mean(log_output.asnumpy().argmax(axis=1) == label.asnumpy())\n", " accs.append(acc)\n", "\n", " acc_mean = np.mean(accs)\n", " return acc_mean" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "执行训练。" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Epoch: 1 \tTraining Loss: 21444.8605 \tTraining Accuracy: 0.8928 \tvalidation Accuracy: 0.9513\n", "Epoch: 2 \tTraining Loss: 9396.3887 \tTraining Accuracy: 0.9536 \tvalidation Accuracy: 0.9635\n", "Epoch: 3 \tTraining Loss: 7320.2412 \tTraining Accuracy: 0.9641 \tvalidation Accuracy: 0.9674\n", "Epoch: 4 \tTraining Loss: 6221.6970 \tTraining Accuracy: 0.9685 \tvalidation Accuracy: 0.9731\n", "Epoch: 5 \tTraining Loss: 5450.9543 \tTraining Accuracy: 0.9725 \tvalidation Accuracy: 0.9733\n", "Epoch: 6 \tTraining Loss: 4898.9741 \tTraining Accuracy: 0.9754 \tvalidation Accuracy: 0.9767\n", "Epoch: 7 \tTraining Loss: 4505.7502 \tTraining Accuracy: 0.9775 \tvalidation Accuracy: 0.9784\n", "Epoch: 8 \tTraining Loss: 4099.8783 \tTraining Accuracy: 0.9797 \tvalidation Accuracy: 0.9791\n", "Epoch: 9 \tTraining Loss: 3795.2288 \tTraining Accuracy: 0.9810 \tvalidation Accuracy: 0.9796\n", "Epoch: 10 \tTraining Loss: 3581.4254 \tTraining Accuracy: 0.9823 \tvalidation Accuracy: 0.9773\n" ] } ], "source": [ "from mindspore.nn import TrainOneStepCell\n", "from mindspore import Tensor\n", "import numpy as np\n", "\n", "net_with_loss = bnn_layers.WithBNNLossCell(network, criterion, dnn_factor=60000, bnn_factor=0.000001)\n", "train_bnn_network = TrainOneStepCell(net_with_loss, optimizer)\n", "train_bnn_network.set_train()\n", "\n", "train_set = create_dataset('./datasets/MNIST_Data/train', 64, 1)\n", "test_set = create_dataset('./datasets/MNIST_Data/test', 64, 1)\n", "\n", "epoch = 10\n", "\n", "for i in range(epoch):\n", " train_loss, train_acc = train_model(train_bnn_network, network, train_set)\n", "\n", " valid_acc = validate_model(network, test_set)\n", "\n", " print('Epoch: {} \\tTraining Loss: {:.4f} \\tTraining Accuracy: {:.4f} \\tvalidation Accuracy: {:.4f}'.\n", " format(i+1, train_loss, train_acc, valid_acc))" ] } ], "metadata": { "kernelspec": { "display_name": "MindSpore", "language": "python", "name": "mindspore" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.6" } }, "nbformat": 4, "nbformat_minor": 4 }