mindspore_lite.RunnerConfig

class mindspore_lite.RunnerConfig(context=None, workers_num=None, config_info=None, config_path='')[source]

The RunnerConfig class defines the context and configuration of the ModelParallelRunner class.

Parameters
  • context (Context, optional) – Define the context used to store options during execution. Default: None.

  • workers_num (int, optional) – The number of workers. A ModelParallelRunner contains multiple workers, which are the units that actually perform parallel inference. Setting workers_num to 0 means the number of workers will be adjusted automatically based on computer performance and the number of cores. Default: None, which is equivalent to 0.

  • config_info (dict{str, dict{str, str}}, optional) – A nested map for passing model weight paths. For example, {“weight”: {“weight_path”: “/home/user/weight.cfg”}}. The outer key currently supports only [“weight”]; its value is a dict whose key currently supports only [“weight_path”] and whose value is the path of the weight file, for example, “/home/user/weight.cfg”. Default: None, which is equivalent to {}.

  • config_path (str, optional) –

    Defines the config file path. The config file is used to pass user-defined options when building a ModelParallelRunner. Users may need to set this parameter in the following scenarios. For example, “/home/user/config.txt”. Default: “”.

    • Usage 1: Set mixed precision inference. The content and description of the configuration file are as follows:

      [execution_plan]
      [op_name1]=data_Type: float16 (The operator named op_name1 sets the data type as Float16)
      [op_name2]=data_Type: float32 (The operator named op_name2 sets the data type as Float32)
      
    • Usage 2: When inferring on GPU, set the TensorRT configuration. The content and description of the configuration file are as follows:

      [ms_cache]
      serialize_Path=[serialization model path](storage path of the serialized model)
      [gpu_context]
      input_shape=input_Name: [input_dim] (model input dimensions, for dynamic shape)
      dynamic_Dims=[min_dim~max_dim] (dynamic dimension range of the model input, for dynamic shape)
      opt_Dims=[opt_dim] (the optimal input dimensions of the model, for dynamic shape)
      
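As an illustration, an ini-style config file like the one in Usage 2 could be assembled in plain Python and then passed via config_path. Every path and dimension value below is a placeholder for this sketch, not a MindSpore Lite default:

```python
import os
import tempfile

# Sketch: write a TensorRT-style config file as described above.
# All paths and shape values here are illustrative placeholders.
config_text = (
    "[ms_cache]\n"
    "serialize_Path=/home/user/serialized_model\n"
    "[gpu_context]\n"
    "input_shape=input:[1,3,224,224]\n"
    "dynamic_Dims=[1~8]\n"
    "opt_Dims=[4]\n"
)

config_path = os.path.join(tempfile.mkdtemp(), "config.txt")
with open(config_path, "w") as f:
    f.write(config_text)
```

The resulting path could then be supplied as RunnerConfig(config_path=config_path).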

Raises
  • TypeError – context is neither a Context nor None.

  • TypeError – workers_num is neither an int nor None.

  • TypeError – config_info is neither a dict nor None.

  • TypeError – config_info is a dict, but the key is not str.

  • TypeError – config_info is a dict, the key is str, but the value is not dict.

  • TypeError – config_info is a dict, the key is str, the value is dict, but the key of the value is not str.

  • TypeError – config_info is a dict, the key is str, the value is dict, the key of the value is str, but the value of the value is not str.

  • TypeError – config_path is not a str.

  • ValueError – workers_num is an int, but it is less than 0.

  • ValueError – config_path does not exist.
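The TypeError rules for config_info above can be mirrored by a plain-Python validation sketch. This is not MindSpore Lite's actual implementation, only an illustration of the checks a caller can expect:

```python
# Sketch of the nested type checks on config_info described in Raises.
def check_config_info(config_info):
    """Validate a dict{str, dict{str, str}} or None, mirroring the Raises rules."""
    if config_info is None:
        return {}  # None is equivalent to {}
    if not isinstance(config_info, dict):
        raise TypeError("config_info must be a dict or None")
    for key, value in config_info.items():
        if not isinstance(key, str):
            raise TypeError("config_info key must be str")
        if not isinstance(value, dict):
            raise TypeError("config_info value must be dict")
        for inner_key, inner_value in value.items():
            if not isinstance(inner_key, str):
                raise TypeError("key of config_info value must be str")
            if not isinstance(inner_value, str):
                raise TypeError("value of config_info value must be str")
    return config_info

check_config_info({"weight": {"weight_path": "/home/user/weight.cfg"}})
```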

Examples

>>> # Use case: serving inference.
>>> # Precondition 1: build the MindSpore Lite serving package with export MSLITE_ENABLE_SERVER_INFERENCE=on.
>>> # Precondition 2: install the wheel package of MindSpore Lite built in precondition 1.
>>> import mindspore_lite as mslite
>>> context = mslite.Context()
>>> context.append_device_info(mslite.CPUDeviceInfo())
>>> config_info = {"weight": {"weight_path": "path of model weight"}}
>>> runner_config = mslite.RunnerConfig(context=context, workers_num=0, config_info=config_info,
...                                     config_path="file.txt")
>>> print(runner_config)
workers num: 0,
config info: weight: weight_path path of model weight,
context: thread num: 0, bind mode: 1.
config path: file.txt.