mindformers.models.LlamaConfig

class mindformers.models.LlamaConfig(batch_size: int = 1, seq_length: int = 2048, hidden_size: int = 4096, num_layers: int = 32, num_heads: int = 32, n_kv_heads: Optional[int] = None, max_position_embedding: Optional[int] = None, intermediate_size: Optional[int] = None, vocab_size: int = 32000, multiple_of: int = 256, ffn_dim_multiplier: Optional[int] = None, rms_norm_eps: float = 1e-5, bos_token_id: int = 1, eos_token_id: int = 2, pad_token_id: int = 0, ignore_token_id: int = -100, theta: float = 10000.0, compute_dtype: str = 'float16', layernorm_compute_type: str = 'float32', softmax_compute_type: str = 'float32', rotary_dtype: str = 'float32', param_init_type: str = 'float16', embedding_init_type=None, qkv_has_bias: bool = False, qkv_concat: bool = False, parallel_config: Union[dict, TransformerOpParallelConfig] = default_transformer_config, moe_config: Union[dict, MoEConfig] = default_moe_config, use_past: bool = False, extend_method: str = 'None', scaling_factor: float = 1.0, is_dynamic: bool = False, use_rope_slice: bool = False, use_flash_attention: bool = False, use_ring_attention: bool = False, use_attn_mask_compression: bool = False, parallel_optimizer: bool = False, fine_grain_interleave: int = 1, pp_interleave_num: int = 1, offset: int = 0, checkpoint_name_or_path: str = '', repetition_penalty: float = 1.0, max_decode_length: int = 1024, block_size: int = 16, num_blocks: int = 512, top_k: int = 5, top_p: float = 1.0, do_sample: bool = True, quant_config: dict = None, tie_word_embeddings: bool = False, llm_backend: str = '', fused_rms_norm: bool = True, **kwargs)

Llama config class which defines the model size.

Parameters
  • batch_size (int, optional) – Batch size of input data, used in prediction. Default: 1.

  • seq_length (int, optional) – The sequence length of input_ids. Default: 2048.

  • hidden_size (int, optional) – Dimensionality of the encoder layers and the pooler layer. Default: 4096.

  • num_layers (int, optional) – Number of hidden layers in the Transformer decoder. Default: 32.

  • num_heads (int, optional) – Number of attention heads for each attention layer in the Transformer decoder. Default: 32.

  • n_kv_heads (int, optional) – Number of key/value heads for grouped-query attention. Default: None.

  • max_position_embedding (int, optional) – Customize the maximum sequence length that the model can handle. Default: None.

  • intermediate_size (int, optional) – Customize the dimensionality of the intermediate (feed-forward) layer. Default: None.

  • vocab_size (int, optional) – Vocabulary size of the Llama model. Default: 32000.

  • multiple_of (int, optional) – Multiple to which the SwiGLU hidden layer size is rounded. Default: 256.

  • ffn_dim_multiplier (int, optional) – Multiplier applied to the feed-forward layer dimension. Default: None.

  • rms_norm_eps (float, optional) – The epsilon added to the denominator in RMSNorm for numerical stability. Default: 1e-5.

  • bos_token_id (int, optional) – The id of the beginning-of-sequence token. Default: 1.

  • eos_token_id (int, optional) – The id of the end-of-sequence token. Default: 2.

  • pad_token_id (int, optional) – The id of the padding token. Default: 0.

  • ignore_token_id (int, optional) – The id of the ignored token. Default: -100.

  • theta (float, optional) – Frequency factors for sine and cosine functions in RoPE. Default: 10000.0.

  • compute_dtype (str, optional) – Linear layer compute dtype. Default: float16.

  • layernorm_compute_type (str, optional) – Layernorm compute dtype. Default: float32.

  • softmax_compute_type (str, optional) – Softmax compute dtype. Default: float32.

  • rotary_dtype (str, optional) – RoPE compute dtype. Default: float32.

  • param_init_type (str, optional) – Parameter initial dtype. Default: float16.

  • embedding_init_type (str, optional) – Embedding weight initial dtype. Default: None.

  • qkv_has_bias (bool, optional) – Whether the Query, Key, and Value projection has bias. Default: False.

  • qkv_concat (bool, optional) – Whether concatenate the Query, Key, and Value projection. Default: False.

  • parallel_config (Union[dict, TransformerOpParallelConfig], optional) – The parallel configuration. Default: default_transformer_config , an instance of TransformerOpParallelConfig with default args.

  • moe_config (Union[dict, MoEConfig], optional) – The MoE configuration. Default: default_moe_config , an instance of MoEConfig with default args.

  • use_past (bool, optional) – Whether the model should use the past last key/values attentions (if applicable to the model) to speed up decoding. Default: False.

  • extend_method (str, optional) – The method used to extend the sequence length at inference. Default: 'None'.

  • scaling_factor (float, optional) – Scaling factor to adjust the weights of the frequency factors in the sine and cosine functions. Default: 1.0.

  • is_dynamic (bool, optional) – Whether to use dynamic shape. Default: False.

  • use_rope_slice (bool, optional) – Whether to enable RoPE slicing. Default: False.

  • use_flash_attention (bool, optional) – Whether to enable flash attention ops. Default: False.

  • use_ring_attention (bool, optional) – Whether to enable ring attention ops. Default: False.

  • use_attn_mask_compression (bool, optional) – Whether to enable attention mask compression. Default: False.

  • parallel_optimizer (bool, optional) – Whether to enable optimizer parallelism. Default: False.

  • fine_grain_interleave (int, optional) – Set the number of fine-grained interleave. Default: 1.

  • pp_interleave_num (int, optional) – Set the number of pipeline interleave. Default: 1.

  • offset (int, optional) – Offset of transformer layer when set pipeline stage number. Default: 0.

  • checkpoint_name_or_path (str, optional) – Checkpoint path or name used to load weights into the network. Default: ''.

  • repetition_penalty (float, optional) – The parameter for repetition penalty. 1.0 means no penalty. See this paper for more details. Default: 1.0.

  • max_decode_length (int, optional) – The maximum length the generated tokens can have. Corresponds to the length of the input prompt + max_new_tokens. Its effect is overridden by max_new_tokens, if also set. Default: 1024.

  • block_size (int, optional) – The maximum number of tokens one block can hold when using paged attention. Default: 16.

  • num_blocks (int, optional) – The maximum number of blocks when using paged attention. Default: 512.

  • top_k (int, optional) – The number of highest probability vocabulary tokens to keep for top-k-filtering. Default: 5.

  • top_p (float, optional) – If set to float < 1, only the smallest set of most probable tokens with probabilities that add up to top_p or higher are kept for generation. Default: 1.0.

  • do_sample (bool, optional) – Whether to use sampling; use greedy decoding otherwise. Default: True.

  • quant_config (dict, optional) – Quantization configuration. Default: None.

  • tie_word_embeddings (bool, optional) – Whether to tie input and output embeddings. Default: False.

  • llm_backend (str, optional) – LLM boost backend. Default: ''.

  • fused_rms_norm (bool, optional) – Whether to use the fused RMSNorm operator. Default: True.
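When intermediate_size is not given, Llama-family models typically derive the SwiGLU feed-forward size from hidden_size, multiple_of, and ffn_dim_multiplier. The sketch below follows the original Llama recipe for that computation; it is an illustration of the convention, not code read from the mindformers source.

```python
def swiglu_hidden_size(hidden_size, multiple_of=256, ffn_dim_multiplier=None):
    # Standard Llama FFN sizing: start from 4 * hidden_size, shrink
    # to 2/3 for the gated (SwiGLU) variant, apply the optional
    # multiplier, then round up to a multiple of `multiple_of`.
    size = int(2 * (4 * hidden_size) / 3)
    if ffn_dim_multiplier is not None:
        size = int(ffn_dim_multiplier * size)
    return multiple_of * ((size + multiple_of - 1) // multiple_of)

print(swiglu_hidden_size(4096))  # 11008, the familiar Llama-7B value
```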

Returns

LlamaConfig, a LlamaConfig instance.
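As a quick sanity check on the paged-attention settings, the token capacity implied by block_size and num_blocks is simply their product. This is a rough sketch; the per-sequence bookkeeping in the real implementation may reserve blocks differently.

```python
def paged_kv_capacity(block_size=16, num_blocks=512):
    # Upper bound on tokens the paged-attention KV cache can hold:
    # each block stores `block_size` tokens across `num_blocks` blocks.
    return block_size * num_blocks

print(paged_kv_capacity())  # 8192 tokens with the default settings
```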

Examples

>>> from mindformers.models import LlamaConfig
>>> config = LlamaConfig(num_layers=2, seq_length=1024)
>>> print(config.num_layers)
2
>>> print(config.seq_length)
1024