mindspore.dataset.text.transforms.UnicodeScriptTokenizer
- class mindspore.dataset.text.transforms.UnicodeScriptTokenizer(keep_whitespace=False, with_offsets=False)[source]
Tokenize a scalar tensor of UTF-8 strings based on Unicode script boundaries.
Note
UnicodeScriptTokenizer is not supported on Windows platform yet.
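To illustrate what "Unicode script boundaries" means, the pure-Python sketch below approximates the operator's behavior: consecutive characters that share a script (e.g. Latin, CJK, digits) are grouped into one token, and whitespace runs are kept or dropped according to `keep_whitespace`. The `script_of` helper is an assumption for illustration only — it infers the script from the leading word of each character's Unicode name, whereas the real operator uses ICU script data.

```python
import unicodedata

def script_of(ch):
    # Illustrative approximation: take the first word of the character's
    # Unicode name (e.g. "LATIN", "CJK", "DIGIT") as its script.
    # The real operator relies on ICU's Unicode script property instead.
    if ch.isspace():
        return "WHITESPACE"
    try:
        return unicodedata.name(ch).split()[0]
    except ValueError:
        return "UNKNOWN"

def unicode_script_tokenize(text, keep_whitespace=False, with_offsets=False):
    # Split `text` wherever the script of adjacent characters changes.
    tokens = []
    start = 0
    for i in range(1, len(text) + 1):
        if i == len(text) or script_of(text[i]) != script_of(text[i - 1]):
            token = text[start:i]
            if keep_whitespace or not token.isspace():
                # With offsets, mimic the (token, offsets_start, offsets_limit)
                # triple produced when with_offsets=True.
                tokens.append((token, start, i) if with_offsets else token)
            start = i
    return tokens
```

For example, `unicode_script_tokenize("Hello 世界")` yields `["Hello", "世界"]`, while passing `keep_whitespace=True` also keeps the `" "` token between them.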
- Parameters
  - keep_whitespace (bool, optional) - Whether or not to emit whitespace tokens (default=False).
  - with_offsets (bool, optional) - Whether or not to output the offsets of tokens (default=False).
- Raises
  - TypeError - If keep_whitespace is not of type bool.
  - TypeError - If with_offsets is not of type bool.
- Supported Platforms:
CPU
Examples
>>> # If with_offsets=False, the default output is one column {["text", dtype=str]}
>>> tokenizer_op = text.UnicodeScriptTokenizer(keep_whitespace=True, with_offsets=False)
>>> text_file_dataset = text_file_dataset.map(operations=tokenizer_op)
>>> # If with_offsets=True, then the output is three columns {["token", dtype=str],
>>> #                                                         ["offsets_start", dtype=uint32],
>>> #                                                         ["offsets_limit", dtype=uint32]}
>>> tokenizer_op = text.UnicodeScriptTokenizer(keep_whitespace=True, with_offsets=True)
>>> text_file_dataset = text_file_dataset.map(operations=tokenizer_op, input_columns=["text"],
...                                           output_columns=["token", "offsets_start", "offsets_limit"],
...                                           column_order=["token", "offsets_start", "offsets_limit"])