mindarmour.adv_robustness.defenses
This module includes classical defense algorithms for defending against adversarial examples and enhancing model security and trustworthiness.
- class mindarmour.adv_robustness.defenses.AdversarialDefense(network, loss_fn=None, optimizer=None)[source]
Adversarial training using given adversarial examples.
- Parameters
network (Cell) – A MindSpore network to be defended.
loss_fn (Function) – Loss function. Default: None.
optimizer (Cell) – Optimizer used to train the network. Default: None.
Examples
>>> class Net(Cell):
>>>     def __init__(self):
>>>         super(Net, self).__init__()
>>>         self._reshape = P.Reshape()
>>>         self._full_con_1 = Dense(28*28, 120)
>>>         self._full_con_2 = Dense(120, 84)
>>>         self._full_con_3 = Dense(84, 10)
>>>         self._relu = ReLU()
>>>
>>>     def construct(self, x):
>>>         out = self._reshape(x, (-1, 28*28))
>>>         out = self._full_con_1(out)
>>>         out = self._relu(out)
>>>         out = self._full_con_2(out)
>>>         out = self._relu(out)
>>>         out = self._full_con_3(out)
>>>         return out
>>>
>>> net = Net()
>>> lr = 0.0001
>>> momentum = 0.9
>>> loss_fn = SoftmaxCrossEntropyWithLogits(sparse=True)
>>> optimizer = Momentum(net.trainable_params(), lr, momentum)
>>> adv_defense = AdversarialDefense(net, loss_fn, optimizer)
>>> inputs = np.random.rand(32, 1, 28, 28).astype(np.float32)
>>> labels = np.random.randint(0, 10, 32).astype(np.int32)
>>> adv_defense.defense(inputs, labels)
- defense(inputs, labels)[source]
Enhance model via training with input samples.
- Parameters
inputs (numpy.ndarray) – Input samples.
labels (numpy.ndarray) – Labels of input samples.
- Returns
numpy.ndarray, loss of defense operation.
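Note that defense processes a single batch per call, so adversarial training over a full dataset is simply a loop over batches. A minimal sketch (the dataset iterable and epoch count below are illustrative assumptions, not part of this API):
>>> import numpy as np
>>> # Illustrative loop: `adv_defense` is the AdversarialDefense instance
>>> # built above; `dataset`, any iterable yielding (inputs, labels)
>>> # NumPy batches, is an assumption of this sketch.
>>> for epoch in range(10):
>>>     losses = []
>>>     for inputs, labels in dataset:
>>>         # Each call runs one training step and returns the batch loss.
>>>         losses.append(adv_defense.defense(inputs, labels))
>>>     print('epoch %d, mean loss: %.4f' % (epoch, np.mean(losses)))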
- class mindarmour.adv_robustness.defenses.AdversarialDefenseWithAttacks(network, attacks, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5)[source]
Adversarial training using adversarial examples generated by the given attacks.
- Parameters
network (Cell) – A MindSpore network to be defended.
loss_fn (Function) – Loss function. Default: None.
optimizer (Cell) – Optimizer used to train the network. Default: None.
bounds (tuple) – Lower and upper bounds of data, in the form (clip_min, clip_max). Default: (0.0, 1.0).
replace_ratio (float) – Ratio of original samples to be replaced with adversarial samples; must be between 0 and 1. Default: 0.5.
- Raises
ValueError – If replace_ratio is not between 0 and 1.
Examples
>>> net = Net()
>>> fgsm = FastGradientSignMethod(net)
>>> pgd = ProjectedGradientDescent(net)
>>> ead = AdversarialDefenseWithAttacks(net, [fgsm, pgd])
>>> inputs = np.random.rand(32, 1, 28, 28).astype(np.float32)
>>> labels = np.random.randint(0, 10, 32).astype(np.int32)
>>> ead.defense(inputs, labels)
- defense(inputs, labels)[source]
Enhance model via training with adversarial examples generated from input samples.
- Parameters
inputs (numpy.ndarray) – Input samples.
labels (numpy.ndarray) – Labels of input samples.
- Returns
numpy.ndarray, loss of adversarial defense operation.
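The replace_ratio argument sets the fraction of each batch that is swapped for adversarial examples before the training step. A conceptual NumPy sketch of that mixing (mix_batch is a hypothetical helper written for illustration; it mirrors the documented meaning of replace_ratio, not the library's internal implementation):
>>> import numpy as np
>>> def mix_batch(inputs, adv_inputs, replace_ratio=0.5):
>>>     # Hypothetical illustration of replace_ratio: swap a random
>>>     # replace_ratio fraction of the batch for adversarial samples.
>>>     batch_size = inputs.shape[0]
>>>     num_adv = int(batch_size * replace_ratio)
>>>     idx = np.random.choice(batch_size, num_adv, replace=False)
>>>     mixed = inputs.copy()
>>>     mixed[idx] = adv_inputs[idx]
>>>     return mixed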
- class mindarmour.adv_robustness.defenses.EnsembleAdversarialDefense(network, attacks, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5)[source]
Ensemble adversarial defense.
- Parameters
network (Cell) – A MindSpore network to be defended.
loss_fn (Function) – Loss function. Default: None.
optimizer (Cell) – Optimizer used to train the network. Default: None.
bounds (tuple) – Lower and upper bounds of data, in the form (clip_min, clip_max). Default: (0.0, 1.0).
replace_ratio (float) – Ratio of original samples to be replaced with adversarial samples; must be between 0 and 1. Default: 0.5.
- Raises
ValueError – If replace_ratio is not between 0 and 1.
Examples
>>> net = Net()
>>> fgsm = FastGradientSignMethod(net)
>>> pgd = ProjectedGradientDescent(net)
>>> ead = EnsembleAdversarialDefense(net, [fgsm, pgd])
>>> inputs = np.random.rand(32, 1, 28, 28).astype(np.float32)
>>> labels = np.random.randint(0, 10, 32).astype(np.int32)
>>> ead.defense(inputs, labels)
- class mindarmour.adv_robustness.defenses.NaturalAdversarialDefense(network, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5, eps=0.1)[source]
Adversarial training based on FGSM.
Reference: A. Kurakin, et al., “Adversarial machine learning at scale,” in ICLR, 2017.
- Parameters
network (Cell) – A MindSpore network to be defended.
loss_fn (Function) – Loss function. Default: None.
optimizer (Cell) – Optimizer used to train the network. Default: None.
bounds (tuple) – Lower and upper bounds of data, in the form (clip_min, clip_max). Default: (0.0, 1.0).
replace_ratio (float) – Ratio of original samples to be replaced with adversarial samples. Default: 0.5.
eps (float) – Step size of the attack method (FGSM). Default: 0.1.
Examples
>>> net = Net()
>>> adv_defense = NaturalAdversarialDefense(net)
>>> inputs = np.random.rand(32, 1, 28, 28).astype(np.float32)
>>> labels = np.random.randint(0, 10, 32).astype(np.int32)
>>> adv_defense.defense(inputs, labels)
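For intuition, the underlying FGSM step perturbs each input by eps in the direction of the sign of the loss gradient, then clips the result back into bounds. A hedged NumPy sketch of that single step (the gradient grad is assumed precomputed here; FastGradientSignMethod computes it from the network internally):
>>> import numpy as np
>>> def fgsm_step(x, grad, eps=0.1, clip_min=0.0, clip_max=1.0):
>>>     # Illustrative FGSM step, not the library's implementation:
>>>     # move each element by eps along the gradient's sign, then
>>>     # clip back into the valid data range (bounds).
>>>     return np.clip(x + eps * np.sign(grad), clip_min, clip_max)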
- class mindarmour.adv_robustness.defenses.ProjectedAdversarialDefense(network, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5, eps=0.3, eps_iter=0.1, nb_iter=5, norm_level='inf')[source]
Adversarial training based on PGD.
Reference: A. Madry, et al., “Towards deep learning models resistant to adversarial attacks,” in ICLR, 2018.
- Parameters
network (Cell) – A MindSpore network to be defended.
loss_fn (Function) – Loss function. Default: None.
optimizer (Cell) – Optimizer used to train the network. Default: None.
bounds (tuple) – Lower and upper bounds of input data, in the form (clip_min, clip_max). Default: (0.0, 1.0).
replace_ratio (float) – Ratio of original samples to be replaced with adversarial samples. Default: 0.5.
eps (float) – PGD attack parameter, epsilon. Default: 0.3.
eps_iter (float) – PGD attack parameter, inner-loop step size. Default: 0.1.
nb_iter (int) – PGD attack parameter, number of iterations. Default: 5.
norm_level (str) – Norm type, 'inf' or 'l2'. Default: 'inf'.
Examples
>>> net = Net()
>>> adv_defense = ProjectedAdversarialDefense(net)
>>> inputs = np.random.rand(32, 1, 28, 28).astype(np.float32)
>>> labels = np.random.randint(0, 10, 32).astype(np.int32)
>>> adv_defense.defense(inputs, labels)
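For the default 'inf' norm, eps, eps_iter and nb_iter interact as follows: each of the nb_iter steps moves by eps_iter along the gradient sign, and the accumulated perturbation is projected back into the eps-ball around the original input. A hedged NumPy sketch (grad_fn, a function returning the loss gradient at a point, is an assumption of this sketch; ProjectedGradientDescent derives it from the network):
>>> import numpy as np
>>> def pgd_sketch(x, grad_fn, eps=0.3, eps_iter=0.1, nb_iter=5,
>>>                clip_min=0.0, clip_max=1.0):
>>>     # Illustrative l-inf PGD, not the library's implementation.
>>>     x_adv = x.copy()
>>>     for _ in range(nb_iter):
>>>         # Signed-gradient step of size eps_iter.
>>>         x_adv = x_adv + eps_iter * np.sign(grad_fn(x_adv))
>>>         # Project back into the eps-ball around the original x.
>>>         x_adv = np.clip(x_adv, x - eps, x + eps)
>>>         # Keep samples within the valid data bounds.
>>>         x_adv = np.clip(x_adv, clip_min, clip_max)
>>>     return x_adv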