mindspore.dataset.text.AddToken

class mindspore.dataset.text.AddToken(token, begin=True)[source]

Add a token to the beginning or end of a sequence.

Parameters:
  • token (str) - The token to be added.

  • begin (bool, optional) - The position to insert the token: if True, insert at the beginning of the sequence; otherwise, insert at the end. Default: True.
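The effect of begin can be illustrated with a minimal plain-Python sketch (a simplified stand-in for the transform, not the actual MindSpore implementation, which operates on NumPy arrays):

```python
# Simplified stand-in for AddToken: prepend or append a token to a sequence.
def add_token(sequence, token, begin=True):
    if not isinstance(token, str):
        raise TypeError("token must be of type str")
    if not isinstance(begin, bool):
        raise TypeError("begin must be of type bool")
    sequence = list(sequence)
    # begin=True prepends the token; begin=False appends it.
    return [token] + sequence if begin else sequence + [token]

print(add_token(['a', 'b', 'c'], 'TOKEN', begin=True))   # ['TOKEN', 'a', 'b', 'c']
print(add_token(['a', 'b', 'c'], 'TOKEN', begin=False))  # ['a', 'b', 'c', 'TOKEN']
```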

Raises:
  • TypeError - If token is not of type str.

  • TypeError - If begin is not of type bool.

Supported Platforms:

CPU

Examples:

>>> import mindspore.dataset as ds
>>> import mindspore.dataset.text as text
>>>
>>> # Use the transform in dataset pipeline mode
>>> numpy_slices_dataset = ds.NumpySlicesDataset(data=[['a', 'b', 'c', 'd', 'e']], column_names=["text"])
>>> # Data before
>>> # |           text            |
>>> # +---------------------------+
>>> # | ['a', 'b', 'c', 'd', 'e'] |
>>> # +---------------------------+
>>> add_token_op = text.AddToken(token='TOKEN', begin=True)
>>> numpy_slices_dataset = numpy_slices_dataset.map(operations=add_token_op)
>>> for item in numpy_slices_dataset.create_dict_iterator(num_epochs=1, output_numpy=True):
...     print(item["text"])
['TOKEN' 'a' 'b' 'c' 'd' 'e']
>>> # Data after
>>> # |                text                |
>>> # +------------------------------------+
>>> # | ['TOKEN', 'a', 'b', 'c', 'd', 'e'] |
>>> # +------------------------------------+
>>>
>>> # Use the transform in eager mode
>>> data = ["happy", "birthday", "to", "you"]
>>> output = text.AddToken(token='TOKEN', begin=True)(data)
>>> print(output)
['TOKEN' 'happy' 'birthday' 'to' 'you']
Tutorial Examples: