mindarmour.privacy.evaluation
This module provides methods to evaluate the risk of privacy leakage of a given model.
- class mindarmour.privacy.evaluation.ImageInversionAttack(network, input_shape, input_bound, loss_weights=(1, 0.2, 5))[source]
An attack method used to reconstruct images by inverting their deep representations.
References: Aravindh Mahendran, Andrea Vedaldi. Understanding Deep Image Representations by Inverting Them. 2014.
- Parameters
network (Cell) – The network used to infer images’ deep representations.
input_shape (tuple) – Data shape of single network input, which should be in accordance with the given network. The format of shape should be (channel, image_width, image_height).
input_bound (Union[tuple, list]) – The pixel range of original images, which should be like [minimum_pixel, maximum_pixel] or (minimum_pixel, maximum_pixel).
loss_weights (Union[list, tuple]) – Weights of the three sub-losses in InversionLoss, which can be adjusted to obtain better results. Default: (1, 0.2, 5). A conceptual sketch of the three terms follows the Raises list below.
- Raises
TypeError – If the type of network is not Cell.
ValueError – If any value of input_shape is not a positive integer.
ValueError – If any value of loss_weights is not positive.
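The three weighted terms combined by loss_weights are internal to InversionLoss and not documented here. The following is only a minimal conceptual sketch, assuming a feature-matching term plus the total-variation and norm regularizers of Mahendran & Vedaldi; extract_feature stands in for the network's representation function, and both names are hypothetical:
>>> import numpy as np
>>> def inversion_loss_sketch(x, target_feature, extract_feature, weights=(1, 0.2, 5)):
...     # Sub-loss 1: squared L2 distance between the candidate image's deep
...     # representation and the target representation being inverted.
...     feature_loss = np.sum((extract_feature(x) - target_feature) ** 2)
...     # Sub-loss 2: total variation, which favours piecewise-smooth images.
...     tv_loss = (np.sum(np.abs(np.diff(x, axis=-1)))
...                + np.sum(np.abs(np.diff(x, axis=-2))))
...     # Sub-loss 3: norm regularizer keeping pixel magnitudes in a natural range.
...     norm_loss = np.sum(x ** 2)
...     w1, w2, w3 = weights
...     return w1 * feature_loss + w2 * tv_loss + w3 * norm_loss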
- evaluate(original_images, inversion_images, labels=None, new_network=None)[source]
Evaluate the quality of inverted images by three indexes: the average L2 distance and the average SSIM value between original images and inversion images, and the average confidence of the inverted images on their true labels, inferred by a newly trained network.
- Parameters
original_images (numpy.ndarray) – Original images, whose shape should be (img_num, channels, img_width, img_height).
inversion_images (numpy.ndarray) – Inversion images, whose shape should be (img_num, channels, img_width, img_height).
labels (numpy.ndarray) – Ground truth labels of original images. Default: None.
new_network (Cell) – A network whose structure contains all parts of self._network, but loaded with a different checkpoint file. Default: None.
- Returns
float, average L2 distance.
float, average SSIM value.
Union[float, None], average confidence. It is None if labels or new_network is None.
Examples
>>> net = LeNet5()
>>> inversion_attack = ImageInversionAttack(net, input_shape=(1, 32, 32), input_bound=(0, 1),
...                                         loss_weights=[1, 0.2, 5])
>>> features = np.random.random((2, 10)).astype(np.float32)
>>> inver_images = inversion_attack.generate(features, iters=10)
>>> ori_images = np.random.random((2, 1, 32, 32))
>>> result = inversion_attack.evaluate(ori_images, inver_images)
>>> print(len(result))
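Per the Returns entry above, the result unpacks into three values in order; with labels and new_network left as None, the confidence entry is None:
>>> # Unpacking follows the Returns order: L2 distance, SSIM, confidence (None here).
>>> avg_l2, avg_ssim, avg_confidence = inversion_attack.evaluate(ori_images, inver_images)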
- generate(target_features, iters=100)[source]
Reconstruct images based on target_features.
- Parameters
target_features (numpy.ndarray) – Deep representations of original images. The first dimension of target_features should be img_num. It should be noted that the shape of target_features should be (1, dim2, dim3, …) if img_num equals 1.
iters (int) – The number of iterations of the inversion attack, which should be a positive integer. Default: 100.
- Returns
numpy.ndarray, reconstructed images, which are expected to be similar to original images.
- Raises
TypeError – If the type of target_features is not numpy.ndarray.
ValueError – If the value of iters is not a positive integer.
Examples
>>> net = LeNet5()
>>> inversion_attack = ImageInversionAttack(net, input_shape=(1, 32, 32), input_bound=(0, 1),
...                                         loss_weights=[1, 0.2, 5])
>>> features = np.random.random((2, 10)).astype(np.float32)
>>> images = inversion_attack.generate(features, iters=10)
>>> print(images.shape)
(2, 1, 32, 32)
- class mindarmour.privacy.evaluation.MembershipInference(model, n_jobs=-1)[source]
Membership inference is a grey-box attack proposed by Shokri, Stronati, Song and Shmatikov. The attack requires the loss or logits results of training samples.
- Parameters
model (Model) – Target model.
n_jobs (int) – Number of jobs run in parallel. -1 means using all processors, otherwise the value of n_jobs must be a positive integer.
Examples
>>> # train_1, train_2 are non-overlapping datasets from the training dataset of the target model.
>>> # test_1, test_2 are non-overlapping datasets from the test dataset of the target model.
>>> # We use train_1, test_1 to train the attack model, and train_2, test_2 to evaluate it.
>>> model = Model(network=net, loss_fn=loss, optimizer=opt, metrics={'acc', 'loss'})
>>> attack_model = MembershipInference(model, n_jobs=-1)
>>> config = [{"method": "KNN", "params": {"n_neighbors": [3, 5, 7]}}]
>>> attack_model.train(train_1, test_1, config)
>>> metrics = ["precision", "recall", "accuracy"]
>>> result = attack_model.eval(train_2, test_2, metrics)
- Raises
TypeError – If type of model is not mindspore.train.Model.
TypeError – If type of n_jobs is not int.
ValueError – The value of n_jobs is neither -1 nor a positive integer.
- eval(dataset_train, dataset_test, metrics)[source]
Evaluate the privacy leakage risk of the target model. The evaluation indicators are specified by metrics.
- Parameters
dataset_train (mindspore.dataset) – The training dataset for the target model.
dataset_test (mindspore.dataset) – The test dataset for the target model.
metrics (Union[list, tuple]) – Evaluation indicators. The value of metrics must be in ["precision", "accuracy", "recall"].
- Returns
list, each element contains an evaluation indicator for the attack model.
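A usage sketch, reusing attack_model, train_2 and test_2 from the class-level example above; since the structure of each returned element is only loosely specified by the Returns entry, it is simply printed:
>>> metrics = ["precision", "recall", "accuracy"]
>>> result = attack_model.eval(train_2, test_2, metrics)
>>> for indicators in result:
...     print(indicators)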
- train(dataset_train, dataset_test, attack_config)[source]
Depending on the configuration, use the input dataset to train the attack model. Save the attack model to self._attack_list.
- Parameters
dataset_train (mindspore.dataset) – The training dataset for the target model.
dataset_test (mindspore.dataset) – The test set for the target model.
attack_config (Union[list, tuple]) – Parameter setting for the attack model. The format is [{"method": "knn", "params": {"n_neighbors": [3, 5, 7]}}, {"method": "lr", "params": {"C": np.logspace(-4, 2, 10)}}]. The supported methods are knn, lr, mlp and rf, and the params of each method must be within the range of its changeable parameters. Tips for setting params can be found in the scikit-learn documentation for the corresponding classifiers: KNN, LR, RF, MLP. A fuller example configuration is sketched below.
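For illustration, a fuller attack_config covering all four supported methods might look as follows. The rf and mlp parameter names (n_estimators, hidden_layer_sizes) are standard scikit-learn estimator parameters, shown here as assumptions about what the underlying classifiers accept:
>>> import numpy as np
>>> attack_config = [
...     {"method": "knn", "params": {"n_neighbors": [3, 5, 7]}},
...     {"method": "lr", "params": {"C": np.logspace(-4, 2, 10)}},
...     {"method": "rf", "params": {"n_estimators": [100, 300]}},
...     {"method": "mlp", "params": {"hidden_layer_sizes": [(64,), (32, 32)]}},
... ]
>>> attack_model.train(train_1, test_1, attack_config)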
- Raises
ValueError – If any method in attack_config is not in ["lr", "knn", "rf", "mlp"].