mindspore.dataset.text.UnicodeScriptTokenizer

class mindspore.dataset.text.UnicodeScriptTokenizer(keep_whitespace=False, with_offsets=False)

Tokenize a scalar tensor of UTF-8 string based on Unicode script boundaries.

Note

UnicodeScriptTokenizer is not supported on Windows platform yet.

Parameters
  • keep_whitespace (bool, optional) – Whether or not to emit whitespace tokens. Default: False.

  • with_offsets (bool, optional) – Whether to output the start and end offsets of each token in the original string. Default: False.

Raises
  • TypeError – If keep_whitespace is not of type bool.

  • TypeError – If with_offsets is not of type bool.

Supported Platforms:

CPU

Examples

>>> import mindspore.dataset as ds
>>> import mindspore.dataset.text as text
>>>
>>> text_file_list = ["/path/to/text_file_dataset_file"]
>>> text_file_dataset = ds.TextFileDataset(dataset_files=text_file_list)
>>>
>>> # 1) If with_offsets=False, the output is one column {["text", dtype=str]}
>>> tokenizer_op = text.UnicodeScriptTokenizer(keep_whitespace=True, with_offsets=False)
>>> text_file_dataset = text_file_dataset.map(operations=tokenizer_op)
>>>
>>> # 2) If with_offsets=True, the output is three columns {["token", dtype=str],
>>> #                                                     ["offsets_start", dtype=uint32],
>>> #                                                     ["offsets_limit", dtype=uint32]}
>>> tokenizer_op = text.UnicodeScriptTokenizer(keep_whitespace=True, with_offsets=True)
>>> text_file_dataset = text_file_dataset.map(operations=tokenizer_op, input_columns=["text"],
...                                           output_columns=["token", "offsets_start", "offsets_limit"])
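
To build intuition for what "script boundaries" means, here is a toy pure-Python sketch of the idea. It is illustrative only: the real operator uses ICU's Unicode Script property, whereas this sketch approximates it with the character's general category (plus an ASCII check for letters), and it reports character offsets rather than the operator's byte offsets.

```python
import unicodedata

def toy_script_tokenize(s, keep_whitespace=False, with_offsets=False):
    """Toy approximation of script-boundary tokenization.

    Splits the string wherever the (approximated) script class of
    adjacent characters changes. Whitespace runs become their own
    tokens and are dropped unless keep_whitespace=True.
    """
    def klass(ch):
        # Crude stand-in for the Unicode Script property:
        # whitespace, then ASCII letters vs. non-ASCII letters,
        # then the first letter of the general category (N, P, S, ...).
        if ch.isspace():
            return "space"
        cat = unicodedata.category(ch)[0]
        if cat == "L":
            return "latin" if ch.isascii() else "other-letter"
        return cat

    tokens, starts, limits = [], [], []
    start = 0
    for i in range(1, len(s) + 1):
        if i == len(s) or klass(s[i]) != klass(s[i - 1]):
            tok = s[start:i]
            if keep_whitespace or not tok.isspace():
                tokens.append(tok)
                starts.append(start)   # character offset, not byte offset
                limits.append(i)
            start = i
    return (tokens, starts, limits) if with_offsets else tokens

print(toy_script_tokenize("Hello, 世界!", keep_whitespace=True))
# → ['Hello', ',', ' ', '世界', '!']
print(toy_script_tokenize("Hello, 世界!", with_offsets=True))
# → (['Hello', ',', '世界', '!'], [0, 5, 7, 9], [5, 6, 9, 10])
```

Note how the Latin run, the punctuation, and the Han run each become separate tokens, mirroring the three-column (token, offsets_start, offsets_limit) output described above.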