mindarmour.privacy.evaluation

This module provides methods to evaluate the risk of privacy leakage of a given model.

class mindarmour.privacy.evaluation.MembershipInference(model, n_jobs=-1)[source]

The evaluation proposed by Shokri, Stronati, Song and Shmatikov is a grey-box attack. The attack requires obtaining the loss or logits of the target model on training samples.

References: Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov. Membership Inference Attacks against Machine Learning Models. 2017.

Parameters
  • model (Model) – Target model.

  • n_jobs (int) – Number of jobs run in parallel. -1 means using all processors, otherwise the value of n_jobs must be a positive integer.

Examples

>>> from mindspore.train import Model
>>> from mindarmour.privacy.evaluation import MembershipInference
>>> # net, loss and opt are a trained network, its loss function and its optimizer.
>>> # train_1, train_2 are non-overlapping datasets from the training dataset of the target model.
>>> # test_1, test_2 are non-overlapping datasets from the test dataset of the target model.
>>> # We use train_1, test_1 to train the attack model, and train_2, test_2 to evaluate it.
>>> model = Model(network=net, loss_fn=loss, optimizer=opt, metrics={'acc', 'loss'})
>>> attack_model = MembershipInference(model, n_jobs=-1)
>>> config = [{"method": "KNN", "params": {"n_neighbors": [3, 5, 7]}}]
>>> attack_model.train(train_1, test_1, config)
>>> metrics = ["precision", "recall", "accuracy"]
>>> result = attack_model.eval(train_2, test_2, metrics)

Raises
  • TypeError – If type of model is not mindspore.train.Model.

  • TypeError – If type of n_jobs is not int.

  • ValueError – The value of n_jobs is neither -1 nor a positive integer.

eval(dataset_train, dataset_test, metrics)[source]

Evaluate the privacy of the target model. Evaluation indicators are specified by metrics.

Parameters
  • dataset_train (mindspore.dataset) – The training dataset for the target model.

  • dataset_test (mindspore.dataset) – The test dataset for the target model.

  • metrics (Union[list, tuple]) – Evaluation indicators. The value of metrics must be in ["precision", "accuracy", "recall"]. Default: ["precision"].

Returns

list, each element contains the evaluation indicators of one attack model.
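
Continuing the class-level example above, a minimal sketch of inspecting the returned list; attack_model, train_2 and test_2 are taken from that example, and the exact structure of each element depends on the trained attack models.

>>> # Hypothetical continuation of the class-level example:
>>> # each element of result holds the indicators of one attack model.
>>> result = attack_model.eval(train_2, test_2, ["precision", "recall", "accuracy"])
>>> for res in result:
...     print(res)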

train(dataset_train, dataset_test, attack_config)[source]

Depending on the configuration, use the input datasets to train the attack models. The trained attack models are saved to self._attack_list.

Parameters
  • dataset_train (mindspore.dataset) – The training dataset for the target model.

  • dataset_test (mindspore.dataset) – The test dataset for the target model.

  • attack_config (Union[list, tuple]) – Parameter settings for the attack models. The format is [{"method": "knn", "params": {"n_neighbors": [3, 5, 7]}}, {"method": "lr", "params": {"C": np.logspace(-4, 2, 10)}}]. The supported methods are knn, lr, mlp and rf, and the params of each method must be within the range of its changeable parameters. Tips on setting params for each method can be found in the scikit-learn documentation for KNN, LR, RF and MLP. See the sketch after this list for a fuller example.

Raises
  • KeyError – If any config in attack_config doesn't have keys {"method", "params"}.

  • NameError – If the method (case-insensitive) in attack_config is not in ["lr", "knn", "rf", "mlp"].
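
As a fuller illustration of the attack_config format described above, the following sketch lists all four supported methods; the specific parameter grids (e.g. the n_estimators and hidden_layer_sizes values) are hypothetical choices for illustration, not required values.

>>> import numpy as np
>>> # Hypothetical attack_config covering all four supported methods;
>>> # each entry in params is a grid of candidate values for that method.
>>> attack_config = [
...     {"method": "knn", "params": {"n_neighbors": [3, 5, 7]}},
...     {"method": "lr", "params": {"C": np.logspace(-4, 2, 10)}},
...     {"method": "rf", "params": {"n_estimators": [100, 300], "max_depth": [5, 10]}},
...     {"method": "mlp", "params": {"hidden_layer_sizes": [(64,), (32, 32)]}},
... ]
>>> attack_model.train(dataset_train, dataset_test, attack_config)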