mindarmour.privacy.sup_privacy

This module provides the Suppress Privacy feature to protect user privacy.

class mindarmour.privacy.sup_privacy.MaskLayerDes(layer_name, grad_idx, is_add_noise, is_lower_clip, min_num, upper_bound=1.2)[source]

Describes a layer that needs to be suppressed; a construction sketch follows the parameter list.

Parameters
  • layer_name (str) –

    Layer name. Get the name of a layer as follows:

    for layer in networks.get_parameters(expand=True):
        if layer.name == "conv": ...
    

  • grad_idx (int) – Grad layer index: the index of the mask layer in the grad tuple. To find the index of a specific grad layer, refer to the construct function of TrainOneStepCell in mindarmour/privacy/sup_privacy/train/model.py (print the grad tuple in PYNATIVE_MODE).

  • is_add_noise (bool) – If True, noise can be added to the weights of this layer; if False, it cannot. If the number of parameters is greater than 100000, is_add_noise has no effect.

  • is_lower_clip (bool) – If True, the weights of this layer are clipped so that they stay above a lower bound value; if False, they are not clipped. If the number of parameters is greater than 100000, is_lower_clip has no effect.

  • min_num (int) – The number of weights left unsuppressed. If min_num is smaller than (number of parameters * SuppressCtrl.sparse_end), min_num has no effect.

  • upper_bound (Union[float, int]) – Maximum absolute value of the weights in this layer. Default: 1.20. If the number of parameters is greater than 100000, upper_bound has no effect.
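
A minimal construction sketch (the conv1 values come from the Examples on this page; the conv2 descriptor and its grad index are illustrative assumptions, not recommendations):

from mindarmour.privacy.sup_privacy import MaskLayerDes

# Suppress "conv1.weight" (grad index 0): no added noise, lower-bound clipping enabled,
# keep at least 10 weights unsuppressed, default upper_bound of 1.20.
mask_conv1 = MaskLayerDes("conv1.weight", 0, False, True, 10)

# Illustrative second descriptor (assumed grad index 1) with noise adding enabled
# and an explicit upper bound.
mask_conv2 = MaskLayerDes("conv2.weight", 1, True, True, 10, upper_bound=1.2)

mask_layers = [mask_conv1, mask_conv2]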

class mindarmour.privacy.sup_privacy.SuppressCtrl(networks, mask_layers, end_epoch, batch_num, start_epoch, mask_times, lr, sparse_end, sparse_start)[source]

Control class for the suppress privacy operations: it computes and updates the mask arrays used to suppress selected network parameters during training (a creation sketch follows the parameter list).

Parameters
  • networks (Cell) – The training network.

  • mask_layers (list) – Description of those layers that need to be suppressed.

  • end_epoch (int) – The last epoch in suppress operations.

  • batch_num (int) – The number of grad operations in an epoch.

  • start_epoch (int) – The first epoch in suppress operations.

  • mask_times (int) – The number of suppress operations.

  • lr (Union[float, int]) – Learning rate.

  • sparse_end (float) – The sparsity to reach.

  • sparse_start (Union[float, int]) – The sparsity to start.
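
A SuppressCtrl instance is normally obtained through SuppressPrivacyFactory.create() (documented below) rather than constructed directly, as the Examples on this page do. A condensed sketch of that creation path (networks_l5 and the batch_num value are placeholders for the actual network and dataset):

from mindarmour.privacy.sup_privacy import MaskLayerDes, SuppressPrivacyFactory

mask_layers = [MaskLayerDes("conv1.weight", 0, False, True, 10)]
suppress_ctrl_instance = SuppressPrivacyFactory().create(networks=networks_l5,  # the training network
                                                          mask_layers=mask_layers,
                                                          policy="local_train",
                                                          end_epoch=10,
                                                          batch_num=100,         # num_samples / batch_size
                                                          start_epoch=3,
                                                          mask_times=1000,
                                                          lr=0.05,
                                                          sparse_end=0.90,
                                                          sparse_start=0.0)
suppress_ctrl_instance.print_paras()  # inspect the configured suppress parameters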

calc_actual_sparse_for_conv(networks)[source]

Compute the actual sparsity of the network for the conv1 and conv2 layers.

Parameters

networks (Cell) – The training network.

calc_actual_sparse_for_layer(networks, layer_name)[source]

Compute the actual sparsity of one network layer (a usage sketch follows the parameters).

Parameters
  • networks (Cell) – The training network.

  • layer_name (str) – The name of target layer.
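
A minimal usage sketch; that the method returns the computed sparsity as a float is an assumption not stated on this page:

# Assumption: calc_actual_sparse_for_layer returns the layer's sparsity
# (fraction of suppressed weights) as a float.
sparsity = suppress_ctrl_instance.calc_actual_sparse_for_layer(networks_l5, "conv1.weight")
print("actual sparsity of conv1.weight:", sparsity)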

calc_theoretical_sparse_for_conv()[source]

Compute the theoretical sparsity of the mask matrix for the conv1 and conv2 layers.

print_paras()[source]

Show parameter information.

reset_zeros()[source]

Set the add-mask arrays to zero.

update_mask(networks, cur_step, target_sparse=0.0)[source]

Update the add-mask arrays and multiply-mask arrays of the network layers.

Parameters
  • networks (Cell) – The training network.

  • cur_step (int) – Current step of the whole training process.

  • target_sparse (float) – The sparsity to reach. Default: 0.0.

update_mask_layer(weight_array_flat, sparse_weight_thd, sparse_stop_pos, weight_abs_max, layer_index)[source]

Update the add-mask arrays and multiply-mask arrays of a single layer.

Parameters
  • weight_array_flat (numpy.ndarray) – The flattened weight array of the layer's parameters.

  • sparse_weight_thd (float) – The weight threshold of sparse operation.

  • sparse_stop_pos (int) – The maximum number of elements to be suppressed.

  • weight_abs_max (float) – The maximum absolute value of weights.

  • layer_index (int) – The index of target layer.

update_mask_layer_approximity(weight_array_flat, weight_array_flat_abs, actual_stop_pos, layer_index)[source]

Update the add-mask arrays and multiply-mask arrays of a single layer that has many parameters. Lower-bound clipping, upper-bound clipping, and noise adding are disabled for such layers.

Parameters
  • weight_array_flat (numpy.ndarray) – The flattened weight array of the layer's parameters.

  • weight_array_flat_abs (numpy.ndarray) – The flattened array of absolute weight values of the layer's parameters.

  • actual_stop_pos (int) – The actual number of parameters that should be suppressed.

  • layer_index (int) – The index of target layer.

update_status(cur_epoch, cur_step, cur_step_in_epoch)[source]

Update the suppress operation status.

Parameters
  • cur_epoch (int) – Current epoch of the whole training process.

  • cur_step (int) – Current step of the whole training process.

  • cur_step_in_epoch (int) – Current step of the current epoch.

class mindarmour.privacy.sup_privacy.SuppressMasker(model, suppress_ctrl)[source]

Callback used during training to periodically update the suppress masks of the linked SuppressModel instance (see step_end below).

Parameters
  • model (SuppressModel) – SuppressModel instance.

  • suppress_ctrl (SuppressCtrl) – SuppressCtrl instance.

Examples

>>> networks_l5 = LeNet5()
>>> masklayers = []
>>> masklayers.append(MaskLayerDes("conv1.weight", 0, False, True, 10))
>>> suppress_ctrl_instance = SuppressPrivacyFactory().create(networks=networks_l5,
>>>                                                     mask_layers=masklayers,
>>>                                                     policy="local_train",
>>>                                                     end_epoch=10,
>>>                                                     batch_num=(int)(10000/cfg.batch_size),
>>>                                                     start_epoch=3,
>>>                                                     mask_times=1000,
>>>                                                     lr=lr,
>>>                                                     sparse_end=0.90,
>>>                                                     sparse_start=0.0)
>>> net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
>>> net_opt = nn.Momentum(params=networks_l5.trainable_params(), learning_rate=lr, momentum=0.0)
>>> config_ck = CheckpointConfig(save_checkpoint_steps=(int)(samples/cfg.batch_size),  keep_checkpoint_max=10)
>>> model_instance = SuppressModel(network=networks_l5,
>>>                            loss_fn=net_loss,
>>>                            optimizer=net_opt,
>>>                            metrics={"Accuracy": Accuracy()})
>>> model_instance.link_suppress_ctrl(suppress_ctrl_instance)
>>> suppress_masker = SuppressMasker(model=model_instance, suppress_ctrl=suppress_ctrl_instance)
>>> ds_train = generate_mnist_dataset("./MNIST_unzip/train",
>>>                                 batch_size=cfg.batch_size, repeat_size=1, samples=samples)
>>> ckpoint_cb = ModelCheckpoint(prefix="checkpoint_lenet",
>>>                          directory="./trained_ckpt_file/",
>>>                          config=config_ck)
>>> model_instance.train(epoch_size, ds_train, callbacks=[ckpoint_cb, LossMonitor(), suppress_masker],
>>>                  dataset_sink_mode=False)
step_end(run_context)[source]

Update the mask matrix tensors used by the SuppressModel instance.

Parameters

run_context (RunContext) – Include some information of the model.

class mindarmour.privacy.sup_privacy.SuppressModel(network, loss_fn, optimizer, **kwargs)[source]

This class overloads mindspore.train.model.Model.

Parameters
  • network (Cell) – The training network.

  • loss_fn (Cell) – Computes softmax cross entropy between logits and labels.

  • optimizer (Optimizer) – Optimizer instance.

  • metrics (Union[dict, set]) – Calculates the accuracy for classification and multilabel data.

  • kwargs – Keyword parameters used for creating a suppress model.

Examples

>>> networks_l5 = LeNet5()
>>> mask_layers = []
>>> mask_layers.append(MaskLayerDes("conv1.weight", 0, False, True, 10))
>>> suppress_ctrl_instance = SuppressPrivacyFactory().create(networks=networks_l5,
>>>                                                     mask_layers=mask_layers,
>>>                                                     policy="local_train",
>>>                                                     end_epoch=10,
>>>                                                     batch_num=(int)(10000/cfg.batch_size),
>>>                                                     start_epoch=3,
>>>                                                     mask_times=1000,
>>>                                                     lr=lr,
>>>                                                     sparse_end=0.90,
>>>                                                     sparse_start=0.0)
>>> net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
>>> net_opt = nn.Momentum(params=networks_l5.trainable_params(), learning_rate=lr, momentum=0.0)
>>> config_ck = CheckpointConfig(save_checkpoint_steps=(int)(samples/cfg.batch_size),  keep_checkpoint_max=10)
>>> model_instance = SuppressModel(network=networks_l5,
>>>                            loss_fn=net_loss,
>>>                            optimizer=net_opt,
>>>                            metrics={"Accuracy": Accuracy()})
>>> model_instance.link_suppress_ctrl(suppress_ctrl_instance)
>>> suppress_masker = SuppressMasker(model=model_instance, suppress_ctrl=suppress_ctrl_instance)
>>> ds_train = generate_mnist_dataset("./MNIST_unzip/train",
>>>                                 batch_size=cfg.batch_size, repeat_size=1, samples=samples)
>>> ckpoint_cb = ModelCheckpoint(prefix="checkpoint_lenet",
>>>                          directory="./trained_ckpt_file/",
>>>                          config=config_ck)
>>> model_instance.train(epoch_size, ds_train, callbacks=[ckpoint_cb, LossMonitor(), suppress_masker],
>>>                  dataset_sink_mode=False)

link_suppress_ctrl(suppress_pri_ctrl)[source]

Link self and the SuppressCtrl instance.

Parameters

suppress_pri_ctrl (SuppressCtrl) – SuppressCtrl instance.

class mindarmour.privacy.sup_privacy.SuppressPrivacyFactory[source]

Factory class of SuppressCtrl mechanisms.

static create(networks, mask_layers, policy='local_train', end_epoch=10, batch_num=20, start_epoch=3, mask_times=1000, lr=0.05, sparse_end=0.9, sparse_start=0.0)[source]
Parameters
  • networks (Cell) – The training network. This networks parameter should be the same as the ‘network’ parameter of SuppressModel().

  • mask_layers (list) – Description of the training network layers that need to be suppressed.

  • policy (str) – Training policy for suppress privacy training. Default: “local_train”, which means local training.

  • end_epoch (int) – The last epoch in suppress operations, 0 < start_epoch <= end_epoch <= 100. Default: 10. This end_epoch parameter should be the same as the ‘epoch’ parameter of mindspore.train.model.train().

  • batch_num (int) – The number of batches in an epoch; it should be equal to num_samples/batch_size. Default: 20.

  • start_epoch (int) – The first epoch in suppress operations, 0<start_epoch<=end_epoch<=100. Default: 3.

  • mask_times (int) – The number of suppress operations. Default: 1000.

  • lr (Union[float, int]) – Learning rate; it should remain unchanged during training, 0 < lr <= 0.50. Default: 0.05. This lr parameter should be the same as the ‘learning_rate’ parameter of mindspore.nn.SGD().

  • sparse_end (float) – The sparsity to reach, 0.0<=sparse_start<sparse_end<1.0. Default: 0.90.

  • sparse_start (Union[float, int]) – The sparsity to start, 0.0<=sparse_start<sparse_end<1.0. Default: 0.0.

Returns

SuppressCtrl, the class of the Suppress Privacy mechanism.

Examples

>>> networks_l5 = LeNet5()
>>> mask_layers = []
>>> mask_layers.append(MaskLayerDes("conv1.weight", 0, False, True, 10))
>>> suppress_ctrl_instance = SuppressPrivacyFactory().create(networks=networks_l5,
>>>                                                 mask_layers=mask_layers,
>>>                                                 policy="local_train",
>>>                                                 end_epoch=10,
>>>                                                 batch_num=(int)(10000/cfg.batch_size),
>>>                                                 start_epoch=3,
>>>                                                 mask_times=1000,
>>>                                                 lr=lr,
>>>                                                 sparse_end=0.90,
>>>                                                 sparse_start=0.0)
>>> net_loss = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
>>> net_opt = nn.Momentum(params=networks_l5.trainable_params(), learning_rate=lr, momentum=0.0)
>>> config_ck = CheckpointConfig(save_checkpoint_steps=(int)(samples/cfg.batch_size),
>>>                              keep_checkpoint_max=10)
>>> model_instance = SuppressModel(network=networks_l5,
>>>                             loss_fn=net_loss,
>>>                             optimizer=net_opt,
>>>                             metrics={"Accuracy": Accuracy()})
>>> model_instance.link_suppress_ctrl(suppress_ctrl_instance)
>>> suppress_masker = SuppressMasker(model=model_instance, suppress_ctrl=suppress_ctrl_instance)
>>> ds_train = generate_mnist_dataset("./MNIST_unzip/train",
>>>                                 batch_size=cfg.batch_size, repeat_size=1, samples=samples)
>>> ckpoint_cb = ModelCheckpoint(prefix="checkpoint_lenet",
>>>                             directory="./trained_ckpt_file/",
>>>                             config=config_ck)
>>> model_instance.train(epoch_size, ds_train, callbacks=[ckpoint_cb, LossMonitor(), suppress_masker],
>>>                 dataset_sink_mode=False)