mindspore.dataset.text.UnicodeScriptTokenizer

class mindspore.dataset.text.UnicodeScriptTokenizer(keep_whitespace=False, with_offsets=False)

Tokenize UTF-8 encoded strings based on Unicode script boundaries.

Note

UnicodeScriptTokenizer is not yet supported on the Windows platform.

Parameters:
  • keep_whitespace (bool, optional) - Whether to output whitespace tokens. Default: False.

  • with_offsets (bool, optional) - Whether to output the start and end offsets of each token within the original string. Default: False.

Raises:
  • TypeError - If keep_whitespace is not of type bool.

  • TypeError - If with_offsets is not of type bool.

Supported Platforms:

CPU

Examples:

>>> import mindspore.dataset as ds
>>> import mindspore.dataset.text as text
>>>
>>> # Use the transform in dataset pipeline mode
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=["北 京", "123", "欢 迎", "你"],
...                                              column_names=["text"], shuffle=False)
>>>
>>> # 1) If with_offsets=False, the output is one column: ["text", dtype=str]
>>> tokenizer_op = text.UnicodeScriptTokenizer(keep_whitespace=True, with_offsets=False)
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=tokenizer_op)
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["text"])
...     break
['北' ' ' '京']
>>>
>>> # 2) If with_offsets=True, the output is three columns: ["token", dtype=str],
>>> #                                                        ["offsets_start", dtype=uint32],
>>> #                                                        ["offsets_limit", dtype=uint32]
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=["北 京", "123", "欢 迎", "你"],
...                                              column_names=["text"], shuffle=False)
>>> tokenizer_op = text.UnicodeScriptTokenizer(keep_whitespace=True, with_offsets=True)
>>> numpy_slices_dataset = numpy_slices_dataset.map(
...     operations=tokenizer_op,
...     input_columns=["text"],
...     output_columns=["token", "offsets_start", "offsets_limit"])
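>>> # Note: offsets_start and offsets_limit are byte offsets into the UTF-8
>>> # string, which is why the 3-byte character '北' spans [0, 3) below.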
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["token"], item["offsets_start"], item["offsets_limit"])
...     break
['北' ' ' '京'] [0 3 4] [3 4 7]
>>>
>>> # Use the transform in eager mode
>>> data = "北 京"
>>> unicode_script_tokenizer_op = text.UnicodeScriptTokenizer(keep_whitespace=True, with_offsets=False)
>>> output = unicode_script_tokenizer_op(data)
>>> print(output)
['北' ' ' '京']
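>>> # A minimal additional sketch (not part of the original examples): with
>>> # keep_whitespace=False, whitespace tokens are dropped; the expected
>>> # output below follows from that documented behavior.
>>> tokenizer_op = text.UnicodeScriptTokenizer(keep_whitespace=False, with_offsets=False)
>>> output = tokenizer_op("北 京")
>>> print(output)
['北' '京']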