{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# 文本数据加载与增强\n", "\n", "[![下载Notebook](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/resource/_static/logo_notebook.png)](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/r1.8/tutorials/zh_cn/advanced/dataset/mindspore_augment_text_data.ipynb) \n", "[![下载样例代码](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/resource/_static/logo_download_code.png)](https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/r1.8/tutorials/zh_cn/advanced/dataset/mindspore_augment_text_data.py) \n", "[![查看源文件](https://mindspore-website.obs.cn-north-4.myhuaweicloud.com/website-images/r1.8/resource/_static/logo_source.png)](https://gitee.com/mindspore/docs/blob/r1.8/tutorials/source_zh_cn/advanced/dataset/augment_text_data.ipynb)\n", "\n", "随着可获得的文本数据逐步增多,对文本数据进行预处理,以便获得可用于网络训练所需干净数据的诉求也更为迫切。文本数据集预处理通常包括文本数据集加载与数据增强两部分。\n", "\n", "文本数据加载通常包含以下三种方式:\n", "\n", "1. 通过文本读取的Dataset接口如[ClueDataset](https://www.mindspore.cn/docs/zh-CN/r1.8/api_python/dataset/mindspore.dataset.CLUEDataset.html#mindspore.dataset.CLUEDataset)、[TextFileDataset](https://www.mindspore.cn/docs/zh-CN/r1.8/api_python/dataset/mindspore.dataset.TextFileDataset.html#mindspore.dataset.TextFileDataset)进行读取。\n", "2. 将数据集转成标准格式(如MindRecord格式),再通过对应接口(如[MindDataset](https://www.mindspore.cn/docs/zh-CN/r1.8/api_python/dataset/mindspore.dataset.MindDataset.html#mindspore.dataset.MindDataset))进行读取。\n", "3. 通过GeneratorDataset接口,接收用户自定义的数据集加载函数,进行数据加载,用法可参考[自定义数据集加载](https://www.mindspore.cn/tutorials/zh-CN/r1.8/advanced/dataset/custom.html)章节。\n", "\n", "## 加载文本数据\n", "\n", "下面我们以从TXT文件中读取数据为例,介绍`TextFileDataset`的使用方式,更多文本数据集加载相关信息可参考[API文档](https://www.mindspore.cn/docs/zh-CN/r1.8/api_python/mindspore.dataset.html#文本)。\n", "\n", "1. 准备文本数据,内容如下:\n", "\n", "```text\n", "Welcome to Beijing\n", "北京欢迎您!\n", "我喜欢China!\n", "```\n", "\n", "2. 创建`tokenizer.txt`文件并复制文本数据到该文件中,将该文件存放在./datasets路径下。执行如下代码:" ] }, { "cell_type": "code", "execution_count": 1, "metadata": {}, "outputs": [], "source": [ "import os\n", "\n", "if not os.path.exists('./datasets'):\n", " os.mkdir('./datasets')\n", "\n", "# 把上面的文本数据写入文件tokenizer.txt\n", "file_handle = open('./datasets/tokenizer.txt', mode='w')\n", "file_handle.write('Welcome to Beijing \\n北京欢迎您! \\n我喜欢China! \\n')\n", "file_handle.close()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "上面的代码执行完后,数据集结构为:\n", "\n", "```text\n", "./datasets\n", "└── tokenizer.txt\n", "```\n", "\n", "3. 从TXT文件中加载数据集并打印。代码如下:" ] }, { "cell_type": "code", "execution_count": 2, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "Welcome to Beijing \n", "北京欢迎您! \n", "我喜欢China! 
\n" ] } ], "source": [ "import mindspore.dataset as ds\n", "import mindspore.dataset.text as text\n", "\n", "# 定义文本数据集的加载路径\n", "DATA_FILE = './datasets/tokenizer.txt'\n", "\n", "# 从tokenizer.txt中加载数据集\n", "dataset = ds.TextFileDataset(DATA_FILE, shuffle=False)\n", "\n", "for data in dataset.create_dict_iterator(output_numpy=True):\n", " print(text.to_str(data['text']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "## 文本数据增强\n", "\n", "针对文本数据增强,常用操作包含文本分词、词汇表查找等:\n", "\n", "- 文本分词:将原始一长串句子分割成多个基本的词汇。\n", "- 词汇表查找:查找分割后各词汇对应的id,并将句子中包含的id组成词向量传入网络进行训练。\n", "\n", "下面对数据增强过程中用到的分词功能、词汇表查找等功能进行介绍,更多关于文本处理API的使用说明,可以参考[API文档](https://www.mindspore.cn/docs/zh-CN/r1.8/api_python/mindspore.dataset.text.html)。\n", "\n", "### 构造与使用词汇表\n", "\n", "词汇表提供了单词与id对应的映射关系,通过词汇表,输入单词能找到对应的单词id,反之依据单词id也能获取对应的单词。\n", "\n", "MindSpore提供了多种构造词汇表(Vocab)的方法,可以从字典、文件、列表以及Dataset对象中获取原始数据,以便构造词汇表,对应的接口为:[from_dict](https://www.mindspore.cn/docs/zh-CN/r1.8/api_python/dataset_text/mindspore.dataset.text.Vocab.html#mindspore.dataset.text.Vocab.from_dict)、[from_file](https://www.mindspore.cn/docs/zh-CN/r1.8/api_python/dataset_text/mindspore.dataset.text.Vocab.html#mindspore.dataset.text.Vocab.from_file)、[from_list](https://www.mindspore.cn/docs/zh-CN/r1.8/api_python/dataset_text/mindspore.dataset.text.Vocab.html#mindspore.dataset.text.Vocab.from_list)、[from_dataset](https://www.mindspore.cn/docs/zh-CN/r1.8/api_python/dataset_text/mindspore.dataset.text.Vocab.html#mindspore.dataset.text.Vocab.from_dataset)。\n", "\n", "以from_dict为例,构造Vocab的方式如下,传入的dict中包含多组单词和id对。" ] }, { "cell_type": "code", "execution_count": 3, "metadata": {}, "outputs": [], "source": [ "from mindspore.dataset import text\n", "\n", "# 构造词汇表\n", "vocab = text.Vocab.from_dict({\"home\": 3, \"behind\": 2, \"the\": 4, \"world\": 5, \"\": 6})" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "Vocab提供了单词与id之间相互查询的方法,即:[tokens_to_ids](https://www.mindspore.cn/docs/zh-CN/r1.8/api_python/dataset_text/mindspore.dataset.text.Vocab.html#mindspore.dataset.text.Vocab.tokens_to_ids)和[ids_to_tokens](https://www.mindspore.cn/docs/zh-CN/r1.8/api_python/dataset_text/mindspore.dataset.text.Vocab.html#mindspore.dataset.text.Vocab.ids_to_tokens)方法,用法如下所示:" ] }, { "cell_type": "code", "execution_count": 4, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "ids: [3, 5]\n", "tokens: ['behind', 'world']\n" ] } ], "source": [ "# 根据单词查找id\n", "ids = vocab.tokens_to_ids([\"home\", \"world\"])\n", "print(\"ids: \", ids)\n", "\n", "# 根据id查找单词\n", "tokens = vocab.ids_to_tokens([2, 5])\n", "print(\"tokens: \", tokens)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "从上面打印的结果可以看出:\n", "\n", "- 单词`\"home\"`和`\"world\"`的id分别为`3`和`5`;\n", "- id为`2`的单词为`\"behind\"`,id为`5`的单词为`\"world\"`;\n", "\n", "这一结果也与词汇表一致。此外Vocab也是多种分词器(如WordpieceTokenizer)的必要入参,分词时会将句子中存在于词汇表的单词,前后分割开,变成单独的一个词汇,之后通过查找词汇表能够获取对应的词汇id。\n", "\n", "### 分词器\n", "\n", "分词就是将连续的字序列按照一定的规范划分成词序列的过程,合理的分词有助于语义理解。\n", "\n", "MindSpore提供了多种不同用途的分词器,如BasicTokenizer、BertTokenizer、JiebaTokenizer等,能够帮助用户高性能地处理文本。用户可以构建自己的字典,使用适当的标记器将句子拆分为不同的标记,并通过查找操作获取字典中标记的索引。此外,用户也可以根据需要实现自定义的分词器。\n", "\n", "> 下面介绍几种常用分词器的使用方法,更多分词器相关信息请参考[API文档](https://mindspore.cn/docs/zh-CN/r1.8/api_python/mindspore.dataset.text.html)。\n", "\n", "#### BertTokenizer\n", "\n", "`BertTokenizer`操作是通过调用`BasicTokenizer`和`WordpieceTokenizer`来进行分词的。\n", "\n", "下面的样例首先构建了一个文本数据集和字符串列表,然后通过`BertTokenizer`对数据集进行分词,并展示了分词前后的文本结果。" ] }, { "cell_type": "code", "execution_count": 5, "metadata": {}, 
"outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------------------------before tokenization----------------------------\n", "床前明月光\n", "疑是地上霜\n", "举头望明月\n", "低头思故乡\n", "I am making small mistakes during working hours\n", "😀嘿嘿😃哈哈😄大笑😁嘻嘻\n", "繁體字\n" ] } ], "source": [ "import mindspore.dataset as ds\n", "import mindspore.dataset.text as text\n", "\n", "# 构造待分词数据\n", "input_list = [\"床前明月光\", \"疑是地上霜\", \"举头望明月\", \"低头思故乡\", \"I am making small mistakes during working hours\",\n", " \"😀嘿嘿😃哈哈😄大笑😁嘻嘻\", \"繁體字\"]\n", "\n", "# 加载文本数据集\n", "dataset = ds.NumpySlicesDataset(input_list, column_names=[\"text\"], shuffle=False)\n", "\n", "print(\"------------------------before tokenization----------------------------\")\n", "for data in dataset.create_dict_iterator(output_numpy=True):\n", " print(text.to_str(data['text']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "上面为数据集未被分词前的数据打印情况,下面使用`BertTokenizer`分词器对数据集进行分词。" ] }, { "cell_type": "code", "execution_count": 6, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------------------------after tokenization-----------------------------\n", "['床' '前' '明' '月' '光']\n", "['疑' '是' '地' '上' '霜']\n", "['举' '头' '望' '明' '月']\n", "['低' '头' '思' '故' '乡']\n", "['I' 'am' 'mak' '##ing' 'small' 'mistake' '##s' 'during' 'work' '##ing'\n", " 'hour' '##s']\n", "['😀' '嘿' '嘿' '😃' '哈' '哈' '😄' '大' '笑' '😁' '嘻' '嘻']\n", "['繁' '體' '字']\n" ] } ], "source": [ "# 构建词汇表\n", "vocab_list = [\n", " \"床\", \"前\", \"明\", \"月\", \"光\", \"疑\", \"是\", \"地\", \"上\", \"霜\", \"举\", \"头\", \"望\", \"低\", \"思\", \"故\", \"乡\",\n", " \"繁\", \"體\", \"字\", \"嘿\", \"哈\", \"大\", \"笑\", \"嘻\", \"i\", \"am\", \"mak\", \"make\", \"small\", \"mistake\",\n", " \"##s\", \"during\", \"work\", \"##ing\", \"hour\", \"😀\", \"😃\", \"😄\", \"😁\", \"+\", \"/\", \"-\", \"=\", \"12\",\n", " \"28\", \"40\", \"16\", \" \", \"I\", \"[CLS]\", \"[SEP]\", \"[UNK]\", \"[PAD]\", \"[MASK]\", \"[unused1]\", \"[unused10]\"]\n", "\n", "# 加载词汇表\n", "vocab = text.Vocab.from_list(vocab_list)\n", "\n", "# 使用BertTokenizer分词器对文本数据集进行分词操作\n", "tokenizer_op = text.BertTokenizer(vocab=vocab)\n", "dataset = dataset.map(operations=tokenizer_op)\n", "\n", "print(\"------------------------after tokenization-----------------------------\")\n", "for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):\n", " print(text.to_str(i['text']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "从上面两次的打印结果可以看出,数据集中的句子、词语和表情符号等都被`BertTokenizer`分词器以词汇表中的词汇为最小单元进行了分割,“故乡”被分割成了‘故’和‘乡’,“明月”被分割成了‘明’和‘月’。值得注意的是,“mistakes”被分割成了‘mistake’和‘##s’。\n", "\n", "#### JiebaTokenizer\n", "\n", "`JiebaTokenizer`操作是基于jieba的中文分词。\n", "\n", "以下示例代码完成下载字典文件`hmm_model.utf8`和`jieba.dict.utf8`,并将其放到指定位置。" ] }, { "cell_type": "code", "execution_count": 7, "metadata": {}, "outputs": [], "source": [ "from mindvision.dataset import DownLoad\n", "\n", "# 字典文件存放路径\n", "dl_path = \"./dictionary\"\n", "\n", "# 获取字典文件源\n", "dl_url_hmm = \"https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/hmm_model.utf8\"\n", "dl_url_jieba = \"https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/jieba.dict.utf8\"\n", "\n", "# 下载字典文件\n", "dl = DownLoad()\n", "dl.download_url(url=dl_url_hmm, path=dl_path)\n", "dl.download_url(url=dl_url_jieba, path=dl_path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "下载的文件放置的目录结构如下:\n", "\n", "```text\n", "./dictionary/\n", "├── hmm_model.utf8\n", "└── jieba.dict.utf8\n", "```\n", "\n", 
"下面的样例首先构建了一个文本数据集,然后使用HMM与MP字典文件创建`JiebaTokenizer`对象,并对数据集进行分词,最后展示了分词前后的文本结果。" ] }, { "cell_type": "code", "execution_count": 8, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------------------------before tokenization----------------------------\n", "明天天气太好了我们一起去外面玩吧\n" ] } ], "source": [ "import mindspore.dataset as ds\n", "import mindspore.dataset.text as text\n", "\n", "# 构造待分词数据\n", "input_list = [\"明天天气太好了我们一起去外面玩吧\"]\n", "\n", "# 加载数据集\n", "dataset = ds.NumpySlicesDataset(input_list, column_names=[\"text\"], shuffle=False)\n", "\n", "print(\"------------------------before tokenization----------------------------\")\n", "for data in dataset.create_dict_iterator(output_numpy=True):\n", " print(text.to_str(data['text']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "上面为数据集未被分词前的数据打印情况,下面使用`JiebaTokenizer`分词器对数据集进行分词。" ] }, { "cell_type": "code", "execution_count": 9, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------------------------after tokenization-----------------------------\n", "['明天' '天气' '太好了' '我们' '一起' '去' '外面' '玩吧']\n" ] } ], "source": [ "HMM_FILE = \"./dictionary/hmm_model.utf8\"\n", "MP_FILE = \"./dictionary/jieba.dict.utf8\"\n", "\n", "# 使用JiebaTokenizer分词器对数据集进行分词\n", "jieba_op = text.JiebaTokenizer(HMM_FILE, MP_FILE)\n", "dataset = dataset.map(operations=jieba_op, input_columns=[\"text\"], num_parallel_workers=1)\n", "\n", "print(\"------------------------after tokenization-----------------------------\")\n", "for data in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):\n", " print(text.to_str(data['text']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "从上面两次打印结果来看,数据集中的句子被`JiebaTokenizer`分词器以词语为最小单元进行了划分。\n", "\n", "#### SentencePieceTokenizer\n", "\n", "`SentencePieceTokenizer`操作是基于开源自然语言处理工具包[SentencePiece](https://github.com/google/sentencepiece)封装的分词器。\n", "\n", "以下示例代码将下载文本数据集文件`botchan.txt`,并将其放置到指定位置。" ] }, { "cell_type": "code", "execution_count": 10, "metadata": {}, "outputs": [], "source": [ "# 数据集存放位置\n", "dl_path = \"./datasets\"\n", "\n", "# 获取语料数据源\n", "dl_url_botchan = \"https://obs.dualstack.cn-north-4.myhuaweicloud.com/mindspore-website/notebook/datasets/botchan.txt\"\n", "\n", "# 下载语料数据\n", "dl.download_url(url=dl_url_botchan, path=dl_path)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "下载的文件放置的目录结构如下:\n", "\n", "```text\n", "./datasets/\n", "└── botchan.txt\n", "```\n", "\n", "下面的样例首先构建了一个文本数据集,然后从`vocab_file`文件中构建一个`vocab`对象,再通过`SentencePieceTokenizer`对数据集进行分词,并展示了分词前后的文本结果。" ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "scrolled": true }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------------------------before tokenization----------------------------\n", "Nothing in the world is difficult for one who sets his mind on it.\n" ] } ], "source": [ "import mindspore.dataset as ds\n", "import mindspore.dataset.text as text\n", "from mindspore.dataset.text import SentencePieceModel, SPieceTokenizerOutType\n", "\n", "# 构造待分词数据\n", "input_list = [\"Nothing in the world is difficult for one who sets his mind on it.\"]\n", "\n", "# 加载数据集\n", "dataset = ds.NumpySlicesDataset(input_list, column_names=[\"text\"], shuffle=False)\n", "\n", "print(\"------------------------before tokenization----------------------------\")\n", "for data in dataset.create_dict_iterator(output_numpy=True):\n", " print(text.to_str(data['text']))" ] }, { "cell_type": "markdown", "metadata": {}, 
"source": [ "上面为数据集未被分词前的数据打印情况,下面使用`SentencePieceTokenizer`分词器对数据集进行分词。" ] }, { "cell_type": "code", "execution_count": 12, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------------------------after tokenization-----------------------------\n", "['▁Nothing' '▁in' '▁the' '▁world' '▁is' '▁difficult' '▁for' '▁one' '▁who'\n", " '▁sets' '▁his' '▁mind' '▁on' '▁it.']\n" ] } ], "source": [ "# 语料数据文件存放路径\n", "vocab_file = \"./datasets/botchan.txt\"\n", "\n", "# 从语料数据中学习构建词汇表\n", "vocab = text.SentencePieceVocab.from_file([vocab_file], 5000, 0.9995, SentencePieceModel.WORD, {})\n", "\n", "# 使用SentencePieceTokenizer分词器对数据集进行分词\n", "tokenizer_op = text.SentencePieceTokenizer(vocab, out_type=SPieceTokenizerOutType.STRING)\n", "dataset = dataset.map(operations=tokenizer_op)\n", "\n", "print(\"------------------------after tokenization-----------------------------\")\n", "for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):\n", " print(text.to_str(i['text']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "从上面两次打印结果来看,数据集中的句子被`SentencePieceTokenizer`分词器以词语为最小单元进行了划分。在`SentencePieceTokenizer`分词器的处理过程中,空格作为普通符号处理,并使用下划线标记空格。\n", "\n", "#### UnicodeCharTokenizer\n", "\n", "`UnicodeCharTokenizer`操作是根据Unicode字符集来分词的。\n", "\n", "下面的样例首先构建了一个文本数据集,然后通过`UnicodeCharTokenizer`对数据集进行分词,并展示了分词前后的文本结果。" ] }, { "cell_type": "code", "execution_count": 13, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------------------------before tokenization----------------------------\n", "Welcome to Beijing!\n", "北京欢迎您!\n", "我喜欢China!\n" ] } ], "source": [ "import mindspore.dataset as ds\n", "import mindspore.dataset.text as text\n", "\n", "# 构造待分词数据\n", "input_list = [\"Welcome to Beijing!\", \"北京欢迎您!\", \"我喜欢China!\"]\n", "\n", "# 加载数据集\n", "dataset = ds.NumpySlicesDataset(input_list, column_names=[\"text\"], shuffle=False)\n", "\n", "print(\"------------------------before tokenization----------------------------\")\n", "for data in dataset.create_dict_iterator(output_numpy=True):\n", " print(text.to_str(data['text']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "上面为数据集未被分词前的数据打印情况,下面使用`UnicodeCharTokenizer`分词器对数据集进行分词。" ] }, { "cell_type": "code", "execution_count": 14, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------------------------after tokenization-----------------------------\n", "['W', 'e', 'l', 'c', 'o', 'm', 'e', ' ', 't', 'o', ' ', 'B', 'e', 'i', 'j', 'i', 'n', 'g', '!']\n", "['北', '京', '欢', '迎', '您', '!']\n", "['我', '喜', '欢', 'C', 'h', 'i', 'n', 'a', '!']\n" ] } ], "source": [ "# 使用UnicodeCharTokenizer分词器对数据集进行分词\n", "tokenizer_op = text.UnicodeCharTokenizer()\n", "dataset = dataset.map(operations=tokenizer_op)\n", "\n", "print(\"------------------------after tokenization-----------------------------\")\n", "for data in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):\n", " print(text.to_str(data['text']).tolist())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "从上面两次打印结果可以看出,数据集中的句子被`UnicodeCharTokenizer`分词器进行分割,中文以单个汉字为最小单元,英文以单个字母为最小单元。\n", "\n", "#### WhitespaceTokenizer\n", "\n", "`WhitespaceTokenizer`操作是根据空格来进行分词的。\n", "\n", "下面的样例首先构建了一个文本数据集,然后通过`WhitespaceTokenizer`对数据集进行分词,并展示了分词前后的文本结果。" ] }, { "cell_type": "code", "execution_count": 15, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------------------------before tokenization----------------------------\n", "Welcome to 
Beijing!\n", "北京欢迎您!\n", "我喜欢China!\n", "床前明月光,疑是地上霜。\n" ] } ], "source": [ "import mindspore.dataset as ds\n", "import mindspore.dataset.text as text\n", "\n", "# 构造待分词数据\n", "input_list = [\"Welcome to Beijing!\", \"北京欢迎您!\", \"我喜欢China!\", \"床前明月光,疑是地上霜。\"]\n", "\n", "# 加载数据集\n", "dataset = ds.NumpySlicesDataset(input_list, column_names=[\"text\"], shuffle=False)\n", "\n", "print(\"------------------------before tokenization----------------------------\")\n", "for data in dataset.create_dict_iterator(output_numpy=True):\n", " print(text.to_str(data['text']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "上面为数据集未被分词前的数据打印情况,下面使用`WhitespaceTokenizer`分词器对数据集进行分词。" ] }, { "cell_type": "code", "execution_count": 16, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------------------------after tokenization-----------------------------\n", "['Welcome', 'to', 'Beijing!']\n", "['北京欢迎您!']\n", "['我喜欢China!']\n", "['床前明月光,疑是地上霜。']\n" ] } ], "source": [ "# 使用WhitespaceTokenizer分词器对数据集进行分词\n", "tokenizer_op = text.WhitespaceTokenizer()\n", "dataset = dataset.map(operations=tokenizer_op)\n", "\n", "print(\"------------------------after tokenization-----------------------------\")\n", "for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):\n", " print(text.to_str(i['text']).tolist())" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "从上面两次打印结果可以看出,数据集中的句子被`WhitespaceTokenizer`分词器以空格为分隔符进行分割。\n", "\n", "#### WordpieceTokenizer\n", "\n", "`WordpieceTokenizer`操作是基于单词集来进行划分的,划分依据可以是单词集中的单个单词,或者多个单词的组合形式。\n", "\n", "下面的样例首先构建了一个文本数据集,然后从单词列表中构建`vocab`对象,通过`WordpieceTokenizer`对数据集进行分词,并展示了分词前后的文本结果。" ] }, { "cell_type": "code", "execution_count": 17, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------------------------before tokenization----------------------------\n", "My\n", "favorite\n", "book\n", "is\n", "love\n", "during\n", "the\n", "cholera\n", "era\n", ".\n", "what\n", "我\n", "最\n", "喜\n", "欢\n", "的\n", "书\n", "是\n", "霍\n", "乱\n", "时\n", "期\n", "的\n", "爱\n", "情\n", "。\n", "好\n" ] } ], "source": [ "import mindspore.dataset as ds\n", "import mindspore.dataset.text as text\n", "\n", "# 构造待分词数据\n", "input_list = [\"My\", \"favorite\", \"book\", \"is\", \"love\", \"during\", \"the\", \"cholera\", \"era\", \".\", \"what\",\n", " \"我\", \"最\", \"喜\", \"欢\", \"的\", \"书\", \"是\", \"霍\", \"乱\", \"时\", \"期\", \"的\", \"爱\", \"情\", \"。\", \"好\"]\n", "\n", "# 构造英文词汇表\n", "vocab_english = [\"book\", \"cholera\", \"era\", \"favor\", \"##ite\", \"My\", \"is\", \"love\", \"dur\", \"##ing\", \"the\", \".\"]\n", "\n", "# 构造中文词汇表\n", "vocab_chinese = ['我', '最', '喜', '欢', '的', '书', '是', '霍', '乱', '时', '期', '爱', '情', '。']\n", "\n", "# 加载数据集\n", "dataset = ds.NumpySlicesDataset(input_list, column_names=[\"text\"], shuffle=False)\n", "\n", "print(\"------------------------before tokenization----------------------------\")\n", "for data in dataset.create_dict_iterator(output_numpy=True):\n", " print(text.to_str(data['text']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "上面为数据集未被分词前的数据打印情况,此处特意构造了词汇表中没有的单词“what”和“好”,下面使用`WordpieceTokenizer`分词器对数据集进行分词。" ] }, { "cell_type": "code", "execution_count": 18, "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "------------------------after tokenization-----------------------------\n", "['My']\n", "['favor' '##ite']\n", "['book']\n", "['is']\n", "['love']\n", "['dur' '##ing']\n", "['the']\n", "['cholera']\n", 
"['era']\n", "['.']\n", "['[UNK]']\n", "['我']\n", "['最']\n", "['喜']\n", "['欢']\n", "['的']\n", "['书']\n", "['是']\n", "['霍']\n", "['乱']\n", "['时']\n", "['期']\n", "['的']\n", "['爱']\n", "['情']\n", "['。']\n", "['[UNK]']\n" ] } ], "source": [ "# 使用WordpieceTokenizer分词器对数据集进行分词\n", "vocab = text.Vocab.from_list(vocab_english+vocab_chinese)\n", "tokenizer_op = text.WordpieceTokenizer(vocab=vocab)\n", "dataset = dataset.map(operations=tokenizer_op)\n", "\n", "print(\"------------------------after tokenization-----------------------------\")\n", "for i in dataset.create_dict_iterator(num_epochs=1, output_numpy=True):\n", " print(text.to_str(i['text']))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "从上面两次打印结果可以看出,数据集中的词语被`WordpieceTokenizer`分词器以构造的词汇表进行分词,“My”仍然被分为“My”,“love”仍然被分为“love”。值得注意的是,“favorite”被分为了“favor”和“##ite”,由于“word”和“好”在词汇表中未找到,所以使用\\[UNK\\]表示。" ] } ], "metadata": { "kernelspec": { "display_name": "MindSpore", "language": "python", "name": "mindspore" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.4" } }, "nbformat": 4, "nbformat_minor": 4 }