{ "cells": [ { "cell_type": "markdown", "source": [ "# NumPy Interfaces in MindSpore\r\n", "\r\n", "`Ascend` `GPU` `CPU` `Model Development`" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "[![](https://gitee.com/mindspore/docs/raw/r1.5/resource/_static/logo_source_en.png)](https://gitee.com/mindspore/docs/blob/r1.5/docs/mindspore/programming_guide/source_en/numpy.ipynb) [![](https://gitee.com/mindspore/docs/raw/r1.5/resource/_static/logo_notebook_en.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r1.5/programming_guide/en/mindspore_numpy.ipynb) [![](https://gitee.com/mindspore/docs/raw/r1.5/resource/_static/logo_modelarts_en.png)](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9taW5kc3BvcmUtd2Vic2l0ZS5vYnMuY24tbm9ydGgtNC5teWh1YXdlaWNsb3VkLmNvbS9ub3RlYm9vay9tYXN0ZXIvcHJvZ3JhbW1pbmdfZ3VpZGUvZW4vbWluZHNwb3JlX251bXB5LmlweW5i&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c)" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Overview\n", "The MindSpore Numpy package contains a set of Numpy-like interfaces, which allow developers to build models on MindSpore with syntax similar to that of Numpy." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "> This sample is suitable for CPU, GPU and Ascend environments." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Operator Functions\n", "\n", "MindSpore Numpy operators can be classified into four functional modules: `array generation`, `array operation`, `logic operation` and `math operation`. 
For details about the supported operators on the Ascend AI processors, GPU, and CPU, see [Numpy Interface List](https://www.mindspore.cn/docs/api/en/r1.5/api_python/mindspore.numpy.html).\n" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Array Generations\n", "\n", "Array generation operators are used to generate tensors.\n", "\n", "Here is an example to generate an array:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 1, "source": [ "import mindspore.numpy as np\r\n", "import mindspore.ops as ops\r\n", "input_x = np.array([1, 2, 3], np.float32)\r\n", "print(\"input_x =\", input_x)\r\n", "print(\"type of input_x =\", ops.typeof(input_x))" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "input_x = [1. 2. 3.]\n", "type of input_x = Tensor[Float32]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Here we have more examples:\n", "\n", "#### Generate a tensor filled with the same element\n", "\n", "`np.full` can be used to generate a tensor with user-specified values:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 2, "source": [ "input_x = np.full((2, 3), 6, np.float32)\r\n", "print(input_x)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[[6. 6. 6.]\n", " [6. 6. 6.]]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Here is another example to generate an array with the specified shape and filled with the value of 1:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 3, "source": [ "input_x = np.ones((2, 3), np.float32)\n", "print(input_x)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[[1. 1. 1.]\n", " [1. 1. 
1.]]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### Generate tensors in a specified range\n", "\n", "Generate an evenly spaced array within the specified range:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 4, "source": [ "input_x = np.arange(0, 5, 1)\n", "print(input_x)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[0 1 2 3 4]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### Generate tensors with specific requirements\n", "\n", "Generate a matrix whose elements on and below the given diagonal are 1 and whose remaining elements are 0:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 5, "source": [ "input_x = np.tri(3, 3, 1)\n", "print(input_x)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[[1. 1. 0.]\n", " [1. 1. 1.]\n", " [1. 1. 1.]]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Another example: generate a 2-D matrix with a diagonal of 1 and other elements of 0:\n" ], "metadata": {} }, { "cell_type": "code", "execution_count": 6, "source": [ "input_x = np.eye(2, 2)\n", "print(input_x)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[[1. 0.]\n", " [0. 
1.]]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Array Operations\n", "\n", "Array operations focus on tensor manipulation.\n", "\n", "#### Manipulate the shape of the tensor\n", "\n", "For example, transpose a matrix:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 7, "source": [ "input_x = np.arange(10).reshape(5, 2)\n", "output = np.transpose(input_x)\n", "print(output)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[[0 2 4 6 8]\n", " [1 3 5 7 9]]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "Another example: swap two axes:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 8, "source": [ "input_x = np.ones((1, 2, 3))\n", "output = np.swapaxes(input_x, 0, 1)\n", "print(output.shape)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "(2, 1, 3)\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### Tensor splitting\n", "\n", "Divide the input tensor into multiple tensors of equal size, for example:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 9, "source": [ "input_x = np.arange(9)\n", "output = np.split(input_x, 3)\n", "print(output)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "(Tensor(shape=[3], dtype=Int32, value= [0, 1, 2]), Tensor(shape=[3], dtype=Int32, value= [3, 4, 5]), Tensor(shape=[3], dtype=Int32, value= [6, 7, 8]))\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### Tensor combination\n", "\n", "Concatenate two tensors along the specified axis, for example:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 10, "source": [ "input_x = np.arange(0, 5)\n", "input_y = np.arange(10, 15)\n", "output = np.concatenate((input_x, input_y), axis=0)\n", "print(output)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[ 0 1 2 3 4 10 11 12 13 14]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "### 
Logic Operations\n", "\n", "Logic operations define computations related to boolean types.\n", "Examples of `equal` and `less` operations are as follows:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 11, "source": [ "input_x = np.arange(0, 5)\n", "input_y = np.arange(0, 10, 2)\n", "output = np.equal(input_x, input_y)\n", "print(\"output of equal:\", output)\n", "output = np.less(input_x, input_y)\n", "print(\"output of less:\", output)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "output of equal: [ True False False False False]\n", "output of less: [False True True True True]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Math Operations\n", "\n", "Math operations include basic and advanced math operations on tensors, and they fully support the Numpy broadcasting rules. Here are some examples:\n", "\n", "#### Add two tensors\n", "\n", "The following code implements the operation of adding the two tensors `input_x` and `input_y`:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 12, "source": [ "input_x = np.full((3, 2), [1, 2])\n", "input_y = np.full((3, 2), [3, 4])\n", "output = np.add(input_x, input_y)\n", "print(output)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[[4 6]\n", " [4 6]\n", " [4 6]]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### Matrix multiplication\n", "\n", "The following code implements the operation of multiplying the two matrices `input_x` and `input_y`:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 13, "source": [ "input_x = np.arange(2*3).reshape(2, 3).astype('float32')\n", "input_y = np.arange(3*4).reshape(3, 4).astype('float32')\n", "output = np.matmul(input_x, input_y)\n", "print(output)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[[20. 23. 26. 29.]\n", " [56. 68. 80. 
92.]]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### Take the average along a given axis\n", "\n", "The following code averages all the elements of `input_x` (pass the `axis` argument to average along a given axis instead):" ], "metadata": {} }, { "cell_type": "code", "execution_count": 14, "source": [ "input_x = np.arange(6).astype('float32')\n", "output = np.mean(input_x)\n", "print(output)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "2.5\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "#### Exponential arithmetic\n", "\n", "The following code raises the natural constant `e` to the power of each element of `input_x`:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 15, "source": [ "input_x = np.arange(5).astype('float32')\n", "output = np.exp(input_x)\n", "print(output)" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[ 1. 2.7182817 7.389056 20.085537 54.59815 ]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Interact With MindSpore Functions\n", "\n", "Since `mindspore.numpy` directly wraps MindSpore tensors and operators, it has all the advantages and properties of MindSpore. In this section, we will briefly introduce how to employ MindSpore execution management and automatic differentiation in `mindspore.numpy` coding scenarios. 
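The broadcasting support claimed in the math operations section above can be illustrated directly. The sketch below uses standard NumPy, whose broadcasting semantics `mindspore.numpy` mirrors; swapping the import for `import mindspore.numpy as np` is expected to give the same values:

```python
import numpy as np  # standard NumPy; mindspore.numpy follows the same broadcasting rules

# A (3, 2) matrix plus a (2,) vector: the vector is broadcast across every row.
a = np.arange(6).reshape(3, 2).astype('float32')
b = np.array([10.0, 20.0], np.float32)
print(a + b)
# [[10. 21.]
#  [12. 23.]
#  [14. 25.]]
```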
These include:\n", "\n", "- `ms_function`: for running code in static graph mode for better efficiency.\n", "- `GradOperation`: for automatic gradient computation.\n", "- `mindspore.context`: for `mindspore.numpy` execution management.\n", "- `mindspore.nn.Cell`: for using `mindspore.numpy` interfaces in MindSpore Deep Learning Models.\n", "\n", "### Use ms_function to run code in static graph mode\n", "\n", "Let's first see an example consisting of matrix multiplication and bias add, which is a typical computation in neural networks:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 16, "source": [ "import mindspore.numpy as np\n", "\n", "x = np.arange(8).reshape(2, 4).astype('float32')\n", "w1 = np.ones((4, 8))\n", "b1 = np.zeros((8,))\n", "w2 = np.ones((8, 16))\n", "b2 = np.zeros((16,))\n", "w3 = np.ones((16, 4))\n", "b3 = np.zeros((4,))\n", "\n", "def forward(x, w1, b1, w2, b2, w3, b3):\n", " x = np.dot(x, w1) + b1\n", " x = np.dot(x, w2) + b2\n", " x = np.dot(x, w3) + b3\n", " return x\n", "\n", "print(forward(x, w1, b1, w2, b2, w3, b3))" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[[ 768. 768. 768. 768.]\n", " [2816. 2816. 2816. 2816.]]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "In this function, MindSpore dispatches each computing kernel to the device separately. However, with the help of `ms_function`, we can compile all operations into a single static computing graph." ], "metadata": {} }, { "cell_type": "code", "execution_count": 17, "source": [ "from mindspore import ms_function\n", "\n", "forward_compiled = ms_function(forward)\n", "print(forward_compiled(x, w1, b1, w2, b2, w3, b3))" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[[ 768. 768. 768. 768.]\n", " [2816. 2816. 2816. 
2816.]]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "> Currently, static graph mode cannot run in Python interactive mode, and not all Python types can be passed into functions decorated with `ms_function`. For details about how to use `ms_function`, see [API: ms_function](https://www.mindspore.cn/docs/api/en/r1.5/api_python/mindspore/mindspore.ms_function.html)." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Use GradOperation to compute derivatives\n", "\n", "`GradOperation` can be used to take derivatives of normal functions and of functions decorated with `ms_function`. Take the previous example:" ], "metadata": {} }, { "cell_type": "code", "execution_count": 18, "source": [ "from mindspore import ops\n", "\n", "grad_all = ops.composite.GradOperation(get_all=True)\n", "grad_all(forward)(x, w1, b1, w2, b2, w3, b3)" ], "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "(Tensor(shape=[2, 4], dtype=Float32, value=\n", " [[ 5.12000000e+02, 5.12000000e+02, 5.12000000e+02, 5.12000000e+02],\n", " [ 5.12000000e+02, 5.12000000e+02, 5.12000000e+02, 5.12000000e+02]]),\n", " Tensor(shape=[4, 8], dtype=Float32, value=\n", " [[ 2.56000000e+02, 2.56000000e+02, 2.56000000e+02 ... 2.56000000e+02, 2.56000000e+02, 2.56000000e+02],\n", " [ 3.84000000e+02, 3.84000000e+02, 3.84000000e+02 ... 3.84000000e+02, 3.84000000e+02, 3.84000000e+02],\n", " [ 5.12000000e+02, 5.12000000e+02, 5.12000000e+02 ... 5.12000000e+02, 5.12000000e+02, 5.12000000e+02],\n", " [ 6.40000000e+02, 6.40000000e+02, 6.40000000e+02 ... 6.40000000e+02, 6.40000000e+02, 6.40000000e+02]]),\n", " ...\n", " Tensor(shape=[4], dtype=Float32, value= [ 2.00000000e+00, 2.00000000e+00, 2.00000000e+00, 2.00000000e+00]))" ] }, "metadata": {}, "execution_count": 18 } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "To take the gradient of `ms_function`-compiled functions, we first need to set the execution mode to static graph mode."
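The gradient values returned above can be cross-checked by hand with standard NumPy: with `get_all=True` and no explicit sensitivity, `GradOperation` in effect backpropagates an all-ones sensitivity through the three chained matmuls. This is an illustrative sketch for verification only, not a MindSpore API:

```python
import numpy as np  # plain NumPy, used only to cross-check the gradient values

x = np.arange(8).reshape(2, 4).astype('float32')
w1, w2, w3 = np.ones((4, 8)), np.ones((8, 16)), np.ones((16, 4))

g_out = np.ones((2, 4))   # all-ones sensitivity on the network output
g2 = g_out @ w3.T         # gradient entering the third matmul's input
g1 = g2 @ w2.T            # gradient entering the second matmul's input
dx = g1 @ w1.T            # gradient with respect to x
print(dx)                 # every entry is 512, matching the first tensor above
```

The gradient with respect to `w1` follows the same pattern (`x.T @ g1` gives the 256/384/512/640 rows shown above).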
], "metadata": {} }, { "cell_type": "code", "execution_count": 19, "source": [ "from mindspore import ms_function, context\n", "\n", "context.set_context(mode=context.GRAPH_MODE)\n", "grad_all = ops.composite.GradOperation(get_all=True)\n", "grad_all(ms_function(forward))(x, w1, b1, w2, b2, w3, b3)" ], "outputs": [ { "output_type": "execute_result", "data": { "text/plain": [ "(Tensor(shape=[2, 4], dtype=Float32, value=\n", " [[ 5.12000000e+02, 5.12000000e+02, 5.12000000e+02, 5.12000000e+02],\n", " [ 5.12000000e+02, 5.12000000e+02, 5.12000000e+02, 5.12000000e+02]]),\n", " Tensor(shape=[4, 8], dtype=Float32, value=\n", " [[ 2.56000000e+02, 2.56000000e+02, 2.56000000e+02 ... 2.56000000e+02, 2.56000000e+02, 2.56000000e+02],\n", " [ 3.84000000e+02, 3.84000000e+02, 3.84000000e+02 ... 3.84000000e+02, 3.84000000e+02, 3.84000000e+02],\n", " [ 5.12000000e+02, 5.12000000e+02, 5.12000000e+02 ... 5.12000000e+02, 5.12000000e+02, 5.12000000e+02]\n", " [ 6.40000000e+02, 6.40000000e+02, 6.40000000e+02 ... 6.40000000e+02, 6.40000000e+02, 6.40000000e+02]]),\n", " ...\n", " Tensor(shape=[4], dtype=Float32, value= [ 2.00000000e+00, 2.00000000e+00, 2.00000000e+00, 2.00000000e+00]))" ] }, "metadata": {}, "execution_count": 19 } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "For more details, see [API: GradOperation](https://www.mindspore.cn/docs/api/en/r1.5/api_python/ops/mindspore.ops.GradOperation.html)." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Use mindspore.context to control execution mode\n", "\n", "Most functions in `mindspore.numpy` can run in Graph Mode and PyNative Mode, and can run on `CPU`,`GPU` and `Ascend`. 
As with the rest of MindSpore, users can manage the execution mode using `mindspore.context`:" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "```python\n", "from mindspore import context\n", "\n", "# Execution in static graph mode\n", "context.set_context(mode=context.GRAPH_MODE)\n", "\n", "# Execution in PyNative mode\n", "context.set_context(mode=context.PYNATIVE_MODE)\n", "\n", "# Execution on CPU backend\n", "context.set_context(device_target=\"CPU\")\n", "\n", "# Execution on GPU backend\n", "context.set_context(device_target=\"GPU\")\n", "\n", "# Execution on Ascend backend\n", "context.set_context(device_target=\"Ascend\")\n", "...\n", "```" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "For more details, see [API: mindspore.context](https://www.mindspore.cn/docs/api/en/r1.5/api_python/mindspore.context.html)." ], "metadata": {} }, { "cell_type": "markdown", "source": [ "### Use mindspore.numpy in MindSpore Deep Learning Models\n", "\n", "`mindspore.numpy` interfaces can be used inside `nn.Cell` blocks as well. For example, the above code can be modified to:\n" ], "metadata": {} }, { "cell_type": "code", "execution_count": 20, "source": [ "import mindspore.numpy as np\n", "from mindspore import context\n", "from mindspore.nn import Cell\n", "\n", "context.set_context(mode=context.GRAPH_MODE)\n", "\n", "x = np.arange(8).reshape(2, 4).astype('float32')\n", "w1 = np.ones((4, 8))\n", "b1 = np.zeros((8,))\n", "w2 = np.ones((8, 16))\n", "b2 = np.zeros((16,))\n", "w3 = np.ones((16, 4))\n", "b3 = np.zeros((4,))\n", "\n", "class NeuralNetwork(Cell):\n", " def __init__(self):\n", " super(NeuralNetwork, self).__init__()\n", "\n", " def construct(self, x, w1, b1, w2, b2, w3, b3):\n", " x = np.dot(x, w1) + b1\n", " x = np.dot(x, w2) + b2\n", " x = np.dot(x, w3) + b3\n", " return x\n", "\n", "net = NeuralNetwork()\n", "\n", "print(net(x, w1, b1, w2, b2, w3, b3))\n" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "[[ 768. 768. 
768. 768.]\n", " [2816. 2816. 2816. 2816.]]\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "For more details on building neural networks with MindSpore, see [MindSpore Training Guide](https://www.mindspore.cn/docs/programming_guide/en/r1.5/index.html)." ], "metadata": {} } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.5" } }, "nbformat": 4, "nbformat_minor": 5 }