mindspore.nn.EmbeddingLookup

class mindspore.nn.EmbeddingLookup(vocab_size, embedding_size, param_init='normal', target='CPU', slice_mode='batch_slice', manual_shapes=None, max_norm=None, sparse=True, vocab_cache_size=0, dtype=mstype.float32)

EmbeddingLookup layer. Provides the same function as the embedding layer, and is mainly used in heterogeneous parallel scenarios that contain large-scale embedding layers under automatic or semi-automatic parallelism.

Note

When 'target' is set to 'CPU', this module uses P.EmbeddingLookup().set_device('CPU'), which specifies 'offset = 0', to look up the table. When 'target' is set to 'DEVICE', this module uses P.Gather(), which specifies 'axis = 0', to look up the table. In field slice mode, manual_shapes must be given. It is a tuple whose i-th element, vocab[i], is the number of rows in the i-th part.
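
A minimal sketch of the two targets described above (the table size and index values are illustrative assumptions, not part of the documented behavior beyond what this Note states):

>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32)
>>> # 'CPU' target: backed by P.EmbeddingLookup with offset=0; sparse stays True (the default).
>>> cpu_out = nn.EmbeddingLookup(4, 2, target='CPU')(indices)
>>> # 'DEVICE' target: backed by P.Gather with axis=0; sparse may be set to False.
>>> device_out = nn.EmbeddingLookup(4, 2, target='DEVICE', sparse=False)(indices)
>>> print(cpu_out.shape == device_out.shape)
True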

Parameters
  • vocab_size (int) – Size of the dictionary of embeddings.

  • embedding_size (int) – The size of each embedding vector.

  • param_init (Union[Tensor, str, Initializer, numbers.Number]) – Initializer for the embedding_table. When a string is specified, refer to the class initializer for the possible string values. Default: 'normal'.

  • target (str) – Specifies the target where the op is executed. The value must be in ['DEVICE', 'CPU']. Default: 'CPU'.

  • slice_mode (str) –

    The slicing way in semi_auto_parallel/auto_parallel. Default: 'batch_slice' .

    • batch_slice (str): Divides the input index tensor into batches and retrieves the corresponding embedding vectors. This is applicable when each sample has the same number of indices.

    • field_slice (str): Divides the input index tensor into fields and retrieves the corresponding embedding vectors. This is applicable when each sample may have a different number of indices but the same feature dimensions.

    • table_row_slice (str): Treats the input index tensor as a 2D table, divides it by rows, and retrieves the corresponding embedding vectors.

    • table_column_slice (str): Treats the input index tensor as a 2D table, divides it by columns, and retrieves the corresponding embedding vectors.

  • manual_shapes (tuple) – The accompanying array in field slice mode; see the Note above for its format. Default: None.

  • max_norm (Union[float, None]) – A maximum clipping value. The data type must be float16, float32 or None. Default: None. A sketch combining several of these optional arguments follows this parameter list.

  • sparse (bool) – Whether to use sparse mode. When 'target' is set to 'CPU', sparse must be True. Default: True.

  • vocab_cache_size (int) – Cache size of the dictionary of embeddings. Default: 0. It is valid only in parameter server training mode with the 'DEVICE' target. The moment parameter of the corresponding optimizer will also be set to the cache size. Note that the cache consumes 'DEVICE' memory, so a reasonable value is recommended to avoid insufficient memory.

  • dtype (mindspore.dtype) – Dtype of Parameters. Default: mstype.float32 .
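
As a hedged sketch combining several of the optional arguments above (the table size, the 'ones' initialization, and the max_norm value are illustrative assumptions only; non-default slice_mode values take effect only under semi-auto/auto parallelism, and 'field_slice' additionally requires manual_shapes, as described in the Note):

>>> import mindspore
>>> from mindspore import nn
>>> # Illustrative values: a 10-row table with 4-dimensional embeddings;
>>> # max_norm=1.0 is applied as the maximum clipping value described above.
>>> lookup = nn.EmbeddingLookup(vocab_size=10, embedding_size=4,
...                             param_init='ones', target='DEVICE',
...                             sparse=False, max_norm=1.0,
...                             dtype=mindspore.float32)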

Inputs:
  • input_indices (Tensor) - The shape of tensor is \((y_1, y_2, ..., y_S)\). Specifies the indices of elements of the original Tensor. Values can be out of the range of embedding_table, and the exceeding part will be filled with 0 in the output. Negative values are not supported, and the result is undefined if they are given. input_indices must be a 2-dimensional tensor in this interface when run in semi-auto parallel/auto parallel mode.

Outputs:

Tensor, the shape of tensor is \((z_1, z_2, ..., z_N)\).
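
Consistent with the example at the end of this page, the output shape is the shape of input_indices extended by embedding_size. A small hedged sketch with arbitrary sizes:

>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> # Illustrative sizes: a (3, 5) index tensor looked up in a 16 x 8 embedding table.
>>> indices = Tensor(np.zeros((3, 5)), mindspore.int32)
>>> out = nn.EmbeddingLookup(vocab_size=16, embedding_size=8)(indices)
>>> print(out.shape)
(3, 5, 8)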

Raises
  • TypeError – If vocab_size or embedding_size or vocab_cache_size is not an int.

  • TypeError – If sparse is not a bool or manual_shapes is not a tuple.

  • ValueError – If vocab_size or embedding_size is less than 1.

  • ValueError – If vocab_cache_size is less than 0.

  • ValueError – If target is neither 'CPU' nor 'DEVICE'.

  • ValueError – If slice_mode is not one of 'batch_slice', 'field_slice', 'table_row_slice' or 'table_column_slice'.

  • ValueError – If sparse is False and target is 'CPU'.

  • ValueError – If slice_mode is 'field_slice' and manual_shapes is None.

Supported Platforms:

Ascend GPU CPU

Examples

>>> import mindspore
>>> from mindspore import Tensor, nn
>>> import numpy as np
>>> input_indices = Tensor(np.array([[1, 0], [3, 2]]), mindspore.int32)
>>> result = nn.EmbeddingLookup(4, 2)(input_indices)
>>> print(result.shape)
(2, 2, 2)