Overall Structure

The overall architecture of MindFormers can be divided into the following sections:

  1. At the hardware level, MindFormers supports running large models on Ascend servers;

  2. At the software level, MindFormers implements large-model-related code through the Python interfaces provided by MindSpore and performs data computation using the operator libraries provided by the supporting software packages of the Ascend AI processor (see the sketch after this list);

  3. The basic features currently supported by MindFormers are listed below:

    1. Supports distributed parallel training and inference of large models, with parallel capabilities including data parallelism, model parallelism, and ultra-long sequence parallelism (a parallel-context sketch follows this list);

    2. Supports model weight conversion, distributed weight splitting and merging, loading datasets in different formats, and resumable training after interruption;

    3. Supports pretraining, fine-tuning, inference, and [evaluation](https://www.mindspore.cn/mindformers/docs/en/r1.3.0/usage/evaluation.html) for 20+ large models, as well as quantization; the list of supported models can be found in the Model Library (an inference sketch follows this list);

  4. MindFormers supports model service deployment through MindIE and large-scale cluster scheduling through MindX; support for more third-party platforms will be added in the future.
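
As a minimal sketch of the software-level relationship in item 2, assuming MindSpore is installed and an Ascend device is available (the toy matmul is illustrative only and is not MindFormers code):

```python
import numpy as np
import mindspore as ms
from mindspore import Tensor, ops

# Select the Ascend backend; MindSpore dispatches operators to the
# operator libraries shipped with the Ascend AI processor's software stack.
ms.set_context(device_target="Ascend")

# A toy computation: the matmul below runs through the Ascend operator
# library rather than on the host via NumPy.
x = Tensor(np.ones((2, 4), dtype=np.float32))
w = Tensor(np.ones((4, 3), dtype=np.float32))
y = ops.matmul(x, w)
print(y.shape)  # (2, 3)
```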
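
For the parallel capabilities in item 3.1, the sketch below shows how a MindSpore parallel context could be set up before building a model. It assumes the script is launched with a distributed launcher (for example msrun or mpirun) so that HCCL communication can be initialized; the options shown are placeholders, not a recommended configuration:

```python
import mindspore as ms
from mindspore.communication import init

ms.set_context(device_target="Ascend")

# Initialize the HCCL communication backend; this assumes the process
# was started by a distributed launcher that provides rank information.
init()

# Semi-automatic parallelism lets MindSpore combine data parallelism and
# operator-level model parallelism according to the shard strategies
# configured on the model; the options below are for illustration only.
ms.set_auto_parallel_context(
    parallel_mode="semi_auto_parallel",
    full_batch=True,
)
```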
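
As a hedged example of the inference support in item 3.3, MindFormers exposes a pipeline-style API; the model identifier `gpt2` below is illustrative, and availability depends on the installed version and the Model Library:

```python
from mindformers import pipeline

# Build a text-generation pipeline; the model identifier "gpt2" is
# illustrative and must be available in the installed Model Library.
text_generator = pipeline(task="text_generation", model="gpt2")

# Generate a continuation for a short prompt.
result = text_generator("An increasing sequence: one,")
print(result)
```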