mindspore.dataset.text.WhitespaceTokenizer
- class mindspore.dataset.text.WhitespaceTokenizer(with_offsets=False)[source]
Tokenize a scalar tensor of UTF-8 string on ICU4C-defined whitespace characters, such as ' ', '\t', '\r' and '\n'.
Note
WhitespaceTokenizer is not supported on the Windows platform yet.
- Parameters
with_offsets (bool, optional) – Whether to also output the start and end offsets of each token in the original string. Default: False.
- Raises
TypeError – If with_offsets is not of type bool.
- Supported Platforms:
CPU
Examples
>>> import mindspore.dataset as ds
>>> import mindspore.dataset.text as text
>>>
>>> text_file_list = ["/path/to/text_file_dataset_file"]
>>> text_file_dataset = ds.TextFileDataset(dataset_files=text_file_list)
>>>
>>> # 1) If with_offsets=False, default output one column {["text", dtype=str]}
>>> tokenizer_op = text.WhitespaceTokenizer(with_offsets=False)
>>> text_file_dataset = text_file_dataset.map(operations=tokenizer_op)
>>>
>>> # 2) If with_offsets=True, then output three columns {["token", dtype=str],
>>> #                                                     ["offsets_start", dtype=uint32],
>>> #                                                     ["offsets_limit", dtype=uint32]}
>>> tokenizer_op = text.WhitespaceTokenizer(with_offsets=True)
>>> text_file_dataset = text_file_dataset.map(operations=tokenizer_op, input_columns=["text"],
...                                           output_columns=["token", "offsets_start", "offsets_limit"])
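To make the `with_offsets` output columns concrete, the following is a minimal pure-Python sketch of the same output semantics: a list of tokens plus, for each token, its start offset and its limit (one past the last character) in the original string. This is only an illustration; the operator itself uses ICU4C whitespace rules and reports byte offsets in the UTF-8 string, whereas this sketch uses Python's `\S+` regex and character offsets (they coincide for ASCII input). The function name `whitespace_tokenize` is made up for this example.

```python
import re

def whitespace_tokenize(s, with_offsets=False):
    """Split a string on runs of whitespace, mimicking the output shape
    of WhitespaceTokenizer (not its ICU4C implementation)."""
    tokens, starts, limits = [], [], []
    for m in re.finditer(r"\S+", s):        # each maximal run of non-whitespace
        tokens.append(m.group())
        starts.append(m.start())            # offset of the token's first character
        limits.append(m.end())              # offset one past the token's last character
    if with_offsets:
        return tokens, starts, limits
    return tokens

# Mixed whitespace (space, tab) is treated uniformly as a delimiter.
print(whitespace_tokenize("Welcome  to\tBeijing!", with_offsets=True))
# → (['Welcome', 'to', 'Beijing!'], [0, 9, 12], [7, 11, 20])
```

Note that `limits[i] - starts[i]` is the length of token `i`, and consecutive runs of whitespace produce no empty tokens.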
- Tutorial Examples: