{ "cells": [ { "cell_type": "markdown", "source": [ "# Loading Text Dataset\r\n", "\r\n", "`Ascend` `GPU` `CPU` `Data Preparation`\r\n", "\r\n", "[![](https://gitee.com/mindspore/docs/raw/r1.5/resource/_static/logo_source_en.png)](https://gitee.com/mindspore/docs/blob/r1.5/docs/mindspore/programming_guide/source_en/load_dataset_text.ipynb) [![](https://gitee.com/mindspore/docs/raw/r1.5/resource/_static/logo_notebook_en.png)](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/notebook/r1.5/programming_guide/en/mindspore_load_dataset_text.ipynb) [![](https://gitee.com/mindspore/docs/raw/r1.5/resource/_static/logo_modelarts_en.png)](https://authoring-modelarts-cnnorth4.huaweicloud.com/console/lab?share-url-b64=aHR0cHM6Ly9taW5kc3BvcmUtd2Vic2l0ZS5vYnMuY24tbm9ydGgtNC5teWh1YXdlaWNsb3VkLmNvbS9ub3RlYm9vay9tYXN0ZXIvcHJvZ3JhbW1pbmdfZ3VpZGUvZW4vbWluZHNwb3JlX2xvYWRfZGF0YXNldF90ZXh0LmlweW5i&imageid=65f636a0-56cf-49df-b941-7d2a07ba8c8c)" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Overview\n", "\n", "The `mindspore.dataset` module provided by MindSpore enables users to customize their data fetching strategy from disk. At the same time, data processing and tokenization operators are applied to the data. Pipelined data processing produces a continuous flow of data to the training network, improving overall performance.\n", "\n", "In addition, MindSpore supports data loading in distributed scenarios. Users can define the number of shards while loading. For more details, see [Loading the Dataset in Data Parallel Mode](https://www.mindspore.cn/docs/programming_guide/en/r1.5/distributed_training_ascend.html#loading-the-dataset-in-data-parallel-mode).\n", "\n", "This tutorial briefly demonstrates how to load and process text data using MindSpore.\n", "\n", "## Preparations\n", "\n", "1. Prepare the following text data." ], "metadata": {} }, { "cell_type": "markdown", "source": [ " Welcome to Beijing!\n", "\n", " 北京欢迎您!\n", "\n", " 我喜欢English!" ], "metadata": {} }, { "cell_type": "markdown", "source": [ "2. Create the `tokenizer.txt` file, copy the text data to the file, and save the file under `./datasets` directory. The directory structure is as follow." ], "metadata": {} }, { "cell_type": "code", "execution_count": 5, "source": [ " import os\r\n", "\r\n", " if not os.path.exists('./datasets'):\r\n", " os.mkdir('./datasets')\r\n", " file_handle=open('./datasets/tokenizer.txt',mode='w')\r\n", " file_handle.write('Welcome to Beijing \\n北京欢迎您! \\n我喜欢English! \\n')\r\n", " file_handle.close()" ], "outputs": [], "metadata": {} }, { "cell_type": "code", "execution_count": 6, "source": [ " ! tree ./datasets" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "./datasets\n", "├── MNIST_Data\n", "│   ├── test\n", "│   │   ├── t10k-images-idx3-ubyte\n", "│   │   └── t10k-labels-idx1-ubyte\n", "│   └── train\n", "│   ├── train-images-idx3-ubyte\n", "│   └── train-labels-idx1-ubyte\n", "└── tokenizer.txt\n", "\n", "3 directories, 5 files\n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "3. Import the `mindspore.dataset` and `mindspore.dataset.text` modules." ], "metadata": {} }, { "cell_type": "code", "execution_count": 7, "source": [ " import mindspore.dataset as ds\n", " import mindspore.dataset.text as text" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Loading Dataset\n", "\n", "MindSpore supports loading common datasets in the field of text processing that come in a variety of on-disk formats. Users can also implement custom dataset class to load customized data. For detailed loading methods of various datasets, please refer to the [Loading Dataset](https://www.mindspore.cn/docs/programming_guide/en/r1.5/dataset_loading.html) chapter in the Programming Guide.\n", "\n", "The following tutorial demonstrates loading datasets using the `TextFileDataset` in the `mindspore.dataset` module.\n", "\n", "1. Configure the dataset directory as follows and create a dataset object." ], "metadata": {} }, { "cell_type": "code", "execution_count": 8, "source": [ " DATA_FILE = \"./datasets/tokenizer.txt\"\n", " dataset = ds.TextFileDataset(DATA_FILE, shuffle=False)" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "2. Create an iterator then obtain data through the iterator." ], "metadata": {} }, { "cell_type": "code", "execution_count": 9, "source": [ " for data in dataset.create_dict_iterator(output_numpy=True):\n", " print(text.to_str(data['text']))" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "Welcome to Beijing \n", "北京欢迎您! \n", "我喜欢English! \n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Processing Data\n", "\n", "For the data processing operators currently supported by MindSpore and their detailed usage methods, please refer to the [Processing Data](https://www.mindspore.cn/docs/programming_guide/en/r1.5/pipeline.html) chapter in the Programming Guide\n", "\n", "The following tutorial demonstrates how to construct a pipeline and perform operations such as `shuffle` and `RegexReplace` on the text dataset.\n", "\n", "1. Shuffle the dataset." ], "metadata": {} }, { "cell_type": "code", "execution_count": 10, "source": [ " ds.config.set_seed(58)\n", " dataset = dataset.shuffle(buffer_size=3)\n", "\n", " for data in dataset.create_dict_iterator(output_numpy=True):\n", " print(text.to_str(data['text']))" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "我喜欢English! \n", "Welcome to Beijing \n", "北京欢迎您! \n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "2. Perform `RegexReplace` on the dataset." ], "metadata": {} }, { "cell_type": "code", "execution_count": 11, "source": [ " replace_op1 = text.RegexReplace(\"Beijing\", \"Shanghai\")\n", " replace_op2 = text.RegexReplace(\"北京\", \"上海\")\n", " dataset = dataset.map(operations=replace_op1)\n", " dataset = dataset.map(operations=replace_op2)\n", "\n", " for data in dataset.create_dict_iterator(output_numpy=True):\n", " print(text.to_str(data['text']))" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "我喜欢English! \n", "Welcome to Shanghai \n", "上海欢迎您! \n" ] } ], "metadata": {} }, { "cell_type": "markdown", "source": [ "## Tokenization\n", "\n", "For the data tokenization operators currently supported by MindSpore and their detailed usage methods, please refer to the [Tokenizer](https://www.mindspore.cn/docs/programming_guide/en/r1.5/tokenizer.html) chapter in the Programming Guide.\n", "\n", "The following tutorial demonstrates how to use the `WhitespaceTokenizer` to tokenize words with space.\n", "\n", "1. Create a `tokenizer`." ], "metadata": {} }, { "cell_type": "code", "execution_count": 12, "source": [ " tokenizer = text.WhitespaceTokenizer()" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "2. Apply the `tokenizer`." ], "metadata": {} }, { "cell_type": "code", "execution_count": 13, "source": [ " dataset = dataset.map(operations=tokenizer)" ], "outputs": [], "metadata": {} }, { "cell_type": "markdown", "source": [ "3. Create an iterator and obtain data through the iterator." ], "metadata": {} }, { "cell_type": "code", "execution_count": 14, "source": [ " for data in dataset.create_dict_iterator(output_numpy=True):\n", " print(text.to_str(data['text']).tolist())" ], "outputs": [ { "output_type": "stream", "name": "stdout", "text": [ "['我喜欢English!']\n", "['Welcome', 'to', 'Shanghai']\n", "['上海欢迎您!']\n" ] } ], "metadata": {} } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.5" } }, "nbformat": 4, "nbformat_minor": 5 }