Cross-Silo
MindSpore Federated API, used to start the server and scheduler, and to initialize and use the worker.
- class mindspore_federated.FederatedLearningManager(yaml_config, model, sync_frequency, http_server_address='', data_size=1, sync_type='fixed', run_distribute=False, ssl_config=None, **kwargs)
Manage Federated Learning during training.
- Parameters
yaml_config (str) – The path of the yaml configuration file. For more details, see federated_server_yaml.
model (nn.Cell) – A model for Federated Training.
sync_frequency (int) – Synchronization frequency of parameters in Federated Learning, i.e. the number of steps between two adjacent synchronization operations when dataset_sink_mode is set to False. If sync_type is set to "fixed", it serves as a fixed number of steps. If sync_type is set to "adaptive", it serves as the initial value of the adaptive synchronization frequency. Note that its meaning changes in dataset sink mode: if dataset_sink_mode is set to True and sink_size is set to a non-positive value, the synchronization operation executes once every sync_frequency epochs; if dataset_sink_mode is set to True and sink_size is set to a positive value, the synchronization operation executes once every sink_size * sync_frequency steps. Both dataset_sink_mode and sink_size are set by the user in mindspore.train.Model (the sketch after this parameter list illustrates the resulting intervals).
http_server_address (str) – The HTTP server address used for communication. Default: "".
data_size (int) – The data size to be reported to the worker. Default: 1.
sync_type (str) – The synchronization type of parameters in Federated Learning. Supports ["fixed", "adaptive"]. Default: "fixed".
fixed: The frequency of parameter synchronization is fixed.
adaptive: The frequency of parameter synchronization changes adaptively.
run_distribute (bool) – Whether to enable distributed training. Default: False.
ssl_config (Union(None, SSLConfig)) – SSL configuration. Default: None.
The following keyword arguments are passed through **kwargs and take effect when sync_type is set to "adaptive" (see the sketch after the Examples section).
min_consistent_rate (float) – Minimum consistency ratio threshold. The greater the value, the more difficult it is to increase the synchronization frequency. Value range: greater than or equal to 0.0. Default: 1.1.
min_consistent_rate_at_round (int) – The number of rounds for which the minimum consistency ratio threshold lasts. The greater the value, the more difficult it is to increase the synchronization frequency. Value range: greater than or equal to 0. Default: 0.
ema_alpha (float) – Gradient consistency smoothing coefficient. The smaller the value, the more the frequency is adjusted according to the gradient bifurcation of the current round; the larger the value, the more it is adjusted according to the historical gradient bifurcation. Value range: (0.0, 1.0). Default: 0.5.
observation_window_size (int) – The number of rounds in the observation time window. The greater the value, the more difficult it is to reduce the synchronization frequency. Value range: greater than 0. Default: 5.
frequency_increase_ratio (int) – Frequency increase amplitude. The greater the value, the larger each increase of the synchronization frequency. Value range: greater than 0. Default: 2.
unchanged_round (int) – The number of initial rounds during which the frequency does not change; the frequency stays fixed for the first unchanged_round rounds. Value range: greater than or equal to 0. Default: 0.
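To make the interaction between sync_frequency, dataset_sink_mode, and sink_size concrete, the sketch below computes the resulting synchronization interval. The helper effective_sync_interval is hypothetical and not part of the mindspore_federated API; it only mirrors the rules stated under sync_frequency above.

>>> # Hypothetical helper, not part of mindspore_federated: mirrors the
>>> # sync_frequency rules documented above.
>>> def effective_sync_interval(sync_frequency, dataset_sink_mode, sink_size=-1):
...     if not dataset_sink_mode:
...         return (sync_frequency, "steps")
...     if sink_size <= 0:
...         return (sync_frequency, "epochs")
...     return (sink_size * sync_frequency, "steps")
...
>>> effective_sync_interval(100, dataset_sink_mode=False)
(100, 'steps')
>>> effective_sync_interval(100, dataset_sink_mode=True, sink_size=32)
(3200, 'steps')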
Examples
>>> from mindspore_federated import FederatedLearningManager
>>> from mindspore import nn, Model
>>> from network.lenet import LeNet5, create_dataset_from_folder
>>> network = LeNet5(62, 3)
>>> federated_learning_manager = FederatedLearningManager(
...     yaml_config="default_yaml_config.yaml",
...     model=network,
...     sync_frequency=100,
...     http_server_address="127.0.0.1:10086",
... )
>>> net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction='mean')
>>> net_opt = nn.Momentum(network.trainable_params(), 0.001, 0.9)
>>> model = Model(network, net_loss, net_opt)
>>> dataset = create_dataset_from_folder("0/train/", 32, 16, 1)
>>> model.train(100, dataset, callbacks=[federated_learning_manager], dataset_sink_mode=False)
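When sync_type is set to "adaptive", the adaptive-frequency keyword arguments documented above are supplied through **kwargs. A minimal sketch, reusing the network from the example above; the values shown are simply the documented defaults, not a tuned configuration:

>>> # Sketch only: the kwargs below configure adaptive synchronization and
>>> # are assumed to be accepted via **kwargs, per the parameter list above.
>>> adaptive_manager = FederatedLearningManager(
...     yaml_config="default_yaml_config.yaml",
...     model=network,
...     sync_frequency=100,
...     http_server_address="127.0.0.1:10086",
...     sync_type="adaptive",
...     min_consistent_rate=1.1,
...     min_consistent_rate_at_round=0,
...     ema_alpha=0.5,
...     observation_window_size=5,
...     frequency_increase_ratio=2,
...     unchanged_round=0,
... )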