mindarmour.adv_robustness.defenses

This module includes classical defense algorithms for defending against adversarial examples and enhancing model security and trustworthiness.

class mindarmour.adv_robustness.defenses.AdversarialDefense(network, loss_fn=None, optimizer=None)[source]

Adversarial training using the given adversarial examples.

Parameters
  • network (Cell) – A MindSpore network to be defended.

  • loss_fn (Function) – Loss function. Default: None.

  • optimizer (Cell) – Optimizer used to train the network. Default: None.

Examples

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.nn.optim.momentum import Momentum
>>> import mindspore.ops.operations as P
>>> from mindarmour.adv_robustness.defenses import AdversarialDefense
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self._softmax = P.Softmax()
...         self._dense = nn.Dense(10, 10)
...         self._squeeze = P.Squeeze(1)
...     def construct(self, inputs):
...         out = self._softmax(inputs)
...         out = self._dense(out)
...         out = self._squeeze(out)
...         return out
>>> net = Net()
>>> lr = 0.001
>>> momentum = 0.9
>>> batch_size = 16
>>> num_classes = 10
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
>>> optimizer = Momentum(net.trainable_params(), learning_rate=lr, momentum=momentum)
>>> adv_defense = AdversarialDefense(net, loss_fn, optimizer)
>>> inputs = np.random.rand(batch_size, 1, 10).astype(np.float32)
>>> labels = np.random.randint(10, size=batch_size).astype(np.int32)
>>> labels = np.eye(num_classes)[labels].astype(np.float32)
>>> loss = adv_defense.defense(inputs, labels)
defense(inputs, labels)[source]

Enhance the model via training with the input samples.

Parameters
  • inputs (numpy.ndarray) – Input samples.

  • labels (numpy.ndarray) – Labels of input samples.

Returns

numpy.ndarray, loss of defense operation.
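In practice, defense is typically called once per batch inside an ordinary training loop. A minimal sketch reusing the toy batch from the example above (the epoch count is arbitrary):

>>> # Hedged sketch: repeated defense() calls stand in for epochs over a real dataset.
>>> for _ in range(5):
...     loss = adv_defense.defense(inputs, labels)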

class mindarmour.adv_robustness.defenses.AdversarialDefenseWithAttacks(network, attacks, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5)[source]

Adversarial training that uses the specified attack methods to generate adversarial examples, enhancing model robustness.

Parameters
  • network (Cell) – A MindSpore network to be defended.

  • attacks (list[Attack]) – List of attack methods.

  • loss_fn (Function) – Loss function. Default: None.

  • optimizer (Cell) – Optimizer used to train the network. Default: None.

  • bounds (tuple) – Lower and upper bounds of data, in the form (clip_min, clip_max). Default: (0.0, 1.0).

  • replace_ratio (float) – Fraction of original samples to replace with adversarial samples, which must be between 0 and 1 (illustrated in the sketch below). Default: 0.5.

Raises

ValueError – If replace_ratio is not between 0 and 1.
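For intuition on replace_ratio, adversarial training of this kind replaces a fixed fraction of each batch with adversarial versions before the optimizer step. A minimal illustrative numpy sketch (not the library's internal code; attack.generate stands for any attack from mindarmour.adv_robustness.attacks):

>>> import numpy as np
>>> batch_size, replace_ratio = 16, 0.5
>>> n_adv = int(batch_size * replace_ratio)                    # 8 of 16 samples become adversarial
>>> replace_idx = np.random.choice(batch_size, n_adv, replace=False)
>>> # inputs[replace_idx] = attack.generate(inputs[replace_idx], labels[replace_idx])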

Examples

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.nn.optim.momentum import Momentum
>>> import mindspore.ops.operations as P
>>> from mindarmour.adv_robustness.attacks import FastGradientSignMethod
>>> from mindarmour.adv_robustness.attacks import ProjectedGradientDescent
>>> from mindarmour.adv_robustness.defenses import AdversarialDefenseWithAttacks
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self._softmax = P.Softmax()
...         self._dense = nn.Dense(10, 10)
...         self._squeeze = P.Squeeze(1)
...     def construct(self, inputs):
...         out = self._softmax(inputs)
...         out = self._dense(out)
...         out = self._squeeze(out)
...         return out
>>> net = Net()
>>> lr = 0.001
>>> momentum = 0.9
>>> batch_size = 16
>>> num_classes = 10
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
>>> optimizer = Momentum(net.trainable_params(), learning_rate=lr, momentum=momentum)
>>> fgsm = FastGradientSignMethod(net, loss_fn=loss_fn)
>>> pgd = ProjectedGradientDescent(net, loss_fn=loss_fn)
>>> ead = AdversarialDefenseWithAttacks(net, [fgsm, pgd], loss_fn=loss_fn,
...                                     optimizer=optimizer)
>>> inputs = np.random.rand(batch_size, 1, 10).astype(np.float32)
>>> labels = np.random.randint(10, size=batch_size).astype(np.int32)
>>> labels = np.eye(num_classes)[labels].astype(np.float32)
>>> loss = ead.defense(inputs, labels)
defense(inputs, labels)[source]

Enhance the model via training with adversarial examples generated from the input samples.

Parameters
  • inputs (numpy.ndarray) – Input samples.

  • labels (numpy.ndarray) – Labels of input samples.

Returns

numpy.ndarray, loss of adversarial defense operation.

class mindarmour.adv_robustness.defenses.EnsembleAdversarialDefense(network, attacks, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5)[source]

Adversarial training that uses a list of attack methods to generate adversarial examples, enhancing model robustness.

Parameters
  • network (Cell) – A MindSpore network to be defended.

  • attacks (list[Attack]) – List of attack methods.

  • loss_fn (Function) – Loss function. Default: None.

  • optimizer (Cell) – Optimizer used to train the network. Default: None.

  • bounds (tuple) – Lower and upper bounds of data, in the form (clip_min, clip_max). Default: (0.0, 1.0).

  • replace_ratio (float) – Fraction of original samples to replace with adversarial samples, which must be between 0 and 1. Default: 0.5.

Raises

ValueError – If replace_ratio is not between 0 and 1.

Examples

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.nn.optim.momentum import Momentum
>>> import mindspore.ops.operations as P
>>> from mindarmour.adv_robustness.attacks import FastGradientSignMethod
>>> from mindarmour.adv_robustness.attacks import ProjectedGradientDescent
>>> from mindarmour.adv_robustness.defenses import EnsembleAdversarialDefense
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self._softmax = P.Softmax()
...         self._dense = nn.Dense(10, 10)
...         self._squeeze = P.Squeeze(1)
...     def construct(self, inputs):
...         out = self._softmax(inputs)
...         out = self._dense(out)
...         out = self._squeeze(out)
...         return out
>>> net = Net()
>>> lr = 0.001
>>> momentum = 0.9
>>> batch_size = 16
>>> num_classes = 10
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
>>> optimizer = Momentum(net.trainable_params(), learning_rate=lr, momentum=momentum)
>>> fgsm = FastGradientSignMethod(net, loss_fn=loss_fn)
>>> pgd = ProjectedGradientDescent(net, loss_fn=loss_fn)
>>> ead = EnsembleAdversarialDefense(net, [fgsm, pgd], loss_fn=loss_fn,
...                                  optimizer=optimizer)
>>> inputs = np.random.rand(batch_size, 1, 10).astype(np.float32)
>>> labels = np.random.randint(10, size=batch_size).astype(np.int32)
>>> labels = np.eye(num_classes)[labels].astype(np.float32)
>>> loss = ead.defense(inputs, labels)
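After training, a quick robustness check is to attack the trained network and measure accuracy on the crafted samples. A minimal sketch reusing the objects above (generate is the attack interface from mindarmour.adv_robustness.attacks; the accuracy arithmetic is illustrative):

>>> from mindspore import Tensor
>>> adv_inputs = fgsm.generate(inputs, labels)                 # craft adversarial test samples
>>> preds = net(Tensor(adv_inputs)).asnumpy().argmax(axis=1)
>>> adv_acc = (preds == labels.argmax(axis=1)).mean()          # robustness proxy on the toy batch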
class mindarmour.adv_robustness.defenses.NaturalAdversarialDefense(network, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5, eps=0.1)[source]

Adversarial training based on FGSM.

Reference: A. Kurakin, et al., “Adversarial machine learning at scale,” in ICLR, 2017.

Parameters
  • network (Cell) – A MindSpore network to be defended.

  • loss_fn (Function) – Loss function. Default: None.

  • optimizer (Cell) – Optimizer used to train the network. Default: None.

  • bounds (tuple) – Lower and upper bounds of data, in the form (clip_min, clip_max). Default: (0.0, 1.0).

  • replace_ratio (float) – Fraction of original samples to replace with adversarial samples. Default: 0.5.

  • eps (float) – Step size of the attack method (FGSM), illustrated in the sketch below. Default: 0.1.
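Illustratively, FGSM perturbs each training sample by one signed-gradient step of size eps and clips the result back into bounds. A minimal numpy sketch (grad is a stand-in for the loss gradient with respect to the input, not a real API call):

>>> import numpy as np
>>> eps, clip_min, clip_max = 0.1, 0.0, 1.0
>>> x = np.random.rand(16, 1, 10).astype(np.float32)           # toy batch
>>> grad = np.random.randn(*x.shape).astype(np.float32)        # stand-in for dloss/dx
>>> x_adv = np.clip(x + eps * np.sign(grad), clip_min, clip_max)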

Examples

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.nn.optim.momentum import Momentum
>>> import mindspore.ops.operations as P
>>> from mindarmour.adv_robustness.defenses import NaturalAdversarialDefense
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self._softmax = P.Softmax()
...         self._dense = nn.Dense(10, 10)
...         self._squeeze = P.Squeeze(1)
...     def construct(self, inputs):
...         out = self._softmax(inputs)
...         out = self._dense(out)
...         out = self._squeeze(out)
...         return out
>>> net = Net()
>>> lr = 0.001
>>> momentum = 0.9
>>> batch_size = 16
>>> num_classes = 10
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
>>> optimizer = Momentum(net.trainable_params(), learning_rate=lr, momentum=momentum)
>>> nad = NaturalAdversarialDefense(net, loss_fn=loss_fn, optimizer=optimizer)
>>> inputs = np.random.rand(batch_size, 1, 10).astype(np.float32)
>>> labels = np.random.randint(10, size=batch_size).astype(np.int32)
>>> labels = np.eye(num_classes)[labels].astype(np.float32)
>>> loss = nad.defense(inputs, labels)
class mindarmour.adv_robustness.defenses.ProjectedAdversarialDefense(network, loss_fn=None, optimizer=None, bounds=(0.0, 1.0), replace_ratio=0.5, eps=0.3, eps_iter=0.1, nb_iter=5, norm_level='inf')[source]

Adversarial training based on PGD.

Reference: A. Madry, et al., “Towards deep learning models resistant to adversarial attacks,” in ICLR, 2018.

Parameters
  • network (Cell) – A MindSpore network to be defended.

  • loss_fn (Function) – Loss function. Default: None.

  • optimizer (Cell) – Optimizer used to train the network. Default: None.

  • bounds (tuple) – Lower and upper bounds of input data, in the form (clip_min, clip_max). Default: (0.0, 1.0).

  • replace_ratio (float) – Fraction of original samples to replace with adversarial samples. Default: 0.5.

  • eps (float) – PGD attack parameter: the maximum overall perturbation, epsilon. Default: 0.3.

  • eps_iter (float) – PGD attack parameter: step size of each inner-loop iteration. Default: 0.1.

  • nb_iter (int) – PGD attack parameter: number of iterations (the three attack parameters are illustrated in the sketch after this list). Default: 5.

  • norm_level (Union[int, float, str]) – Norm type: one of 1, 2, np.inf, 'l1', 'l2', 'np.inf' or 'inf'. Default: 'inf'.
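The three attack parameters interact as in the standard PGD loop: nb_iter steps of size eps_iter, each projected back into the eps-ball around the original input and clipped to bounds. A minimal numpy sketch for the infinity norm (illustrative only; grad is a stand-in for the loss gradient, not a real API call):

>>> import numpy as np
>>> eps, eps_iter, nb_iter = 0.3, 0.1, 5
>>> x = np.random.rand(16, 1, 10).astype(np.float32)           # toy batch
>>> x_adv = x.copy()
>>> for _ in range(nb_iter):
...     grad = np.random.randn(*x.shape).astype(np.float32)    # stand-in for dloss/dx_adv
...     x_adv = x_adv + eps_iter * np.sign(grad)               # one FGSM-style step
...     x_adv = np.clip(x_adv, x - eps, x + eps)               # project into the eps-ball
...     x_adv = np.clip(x_adv, 0.0, 1.0)                       # respect data bounds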

Examples

>>> import numpy as np
>>> from mindspore import nn
>>> from mindspore.nn.optim.momentum import Momentum
>>> import mindspore.ops.operations as P
>>> from mindarmour.adv_robustness.defenses import ProjectedAdversarialDefense
>>> class Net(nn.Cell):
...     def __init__(self):
...         super(Net, self).__init__()
...         self._softmax = P.Softmax()
...         self._dense = nn.Dense(10, 10)
...         self._squeeze = P.Squeeze(1)
...     def construct(self, inputs):
...         out = self._softmax(inputs)
...         out = self._dense(out)
...         out = self._squeeze(out)
...         return out
>>> net = Net()
>>> lr = 0.001
>>> momentum = 0.9
>>> batch_size = 16
>>> num_classes = 10
>>> loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)
>>> optimizer = Momentum(net.trainable_params(), learning_rate=lr, momentum=momentum)
>>> pad = ProjectedAdversarialDefense(net, loss_fn=loss_fn, optimizer=optimizer)
>>> inputs = np.random.rand(batch_size, 1, 10).astype(np.float32)
>>> labels = np.random.randint(10, size=batch_size).astype(np.int32)
>>> labels = np.eye(num_classes)[labels].astype(np.float32)
>>> loss = pad.defense(inputs, labels)