mindformers.pipeline
- mindformers.pipeline(task: str = None, model: Optional[Union[str, PreTrainedModel, Model, Tuple[str, str]]] = None, tokenizer: Optional[PreTrainedTokenizerBase] = None, image_processor: Optional[BaseImageProcessor] = None, audio_processor: Optional[BaseAudioProcessor] = None, backend: Optional[str] = 'ms', **kwargs: Any)[source]
Runs the inference flow through a pipeline for tasks and models already integrated in the suite.
- Parameters:
  - task (str, optional) - The task to run. Supported tasks: ['text_generation', 'image_to_text_generation', 'multi_modal_to_text_generation']. Default: None.
  - model (Union[str, PreTrainedModel, Model, Tuple[str, str]], optional) - The model that performs the task. Default: None.
  - tokenizer (PreTrainedTokenizerBase, optional) - The tokenizer of the model. Default: None.
  - image_processor (BaseImageProcessor, optional) - The image processor. Default: None.
  - audio_processor (BaseAudioProcessor, optional) - The audio processor. Default: None.
  - backend (str, optional) - The inference backend; currently only "ms" is supported. Default: "ms".
  - kwargs (Any) - Refer to the kwargs description of the corresponding pipeline task.
- Returns:
  A pipeline task.
- Raises:
  KeyError - If the input model or task is not in the supported list.
Examples:
>>> from mindformers import build_context
>>> from mindformers import AutoModel, AutoTokenizer, pipeline
>>> # Construct inputs
>>> inputs = ["I love Beijing, because", "LLaMA is a", "Huawei is a company that"]
>>> # Initialize the environment
>>> build_context({
...     'context': {'mode': 0, 'jit_config': {'jit_level': 'O0', 'infer_boost': 'on'}},
...     'parallel': {},
...     'parallel_config': {}})
>>> # Tokenizer instantiation
>>> tokenizer = AutoTokenizer.from_pretrained('llama2_7b')
>>> # Model instantiation
>>> # Download the weights of the corresponding model from the HuggingFace model library,
>>> # then refer to the README.md of the model to convert the weights to ckpt format.
>>> model = AutoModel.from_pretrained('llama2_7b', checkpoint_name_or_path="path/to/llama2_7b.ckpt",
...                                   use_past=True)
>>> # The pipeline performs the inference task.
>>> text_generation_pipeline = pipeline(task="text_generation", model=model, tokenizer=tokenizer)
>>> outputs = text_generation_pipeline(inputs, max_length=512, do_sample=False)
>>> for output in outputs:
...     print(output)
'text_generation_text': [I love Beijing, because it is a city that is constantly changing. I ......]
'text_generation_text': [LLaMA is a large-scale, open-source, multimodal, multilingual, multitask, and ......]
'text_generation_text': [Huawei is a company that has been around for a long time. ......]
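Because the model parameter in the signature above also accepts a model name string, a pipeline can in principle be built without instantiating the model and tokenizer manually. The following is a minimal sketch under that assumption; the model name 'llama2_7b' and the call arguments are illustrative and not part of the original example.

>>> # Minimal sketch (assumption): build the pipeline from a supported model name string,
>>> # letting the suite resolve the model and tokenizer internally.
>>> text_gen = pipeline(task="text_generation", model="llama2_7b")
>>> print(text_gen("I love Beijing, because", max_length=64, do_sample=False))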