mindspore.explainer

Provides high-level APIs for the explanation runner.

class mindspore.explainer.ImageClassificationRunner(summary_dir, data, network, activation_fn)[source]

A high-level API for users to generate and store results of the explanation methods and the evaluation methods.

Update in 2020.11: Adjust the storage structure and format of the data. Summary files generated by previous versions are deprecated and will not be supported by the current version of MindInsight.

Parameters
  • summary_dir (str) – The directory path to save the summary files which store the generated results.

  • data (tuple[Dataset, list[str]]) – Tuple of dataset and the corresponding class label list. The dataset should provide [images], [images, labels] or [images, labels, bboxes] as columns. The label list must have the exact same length and order as the network outputs.

  • network (Cell) – The network (with logit outputs) to be explained.

  • activation_fn (Cell) – The activation layer that transforms logits to prediction probabilities. For single label classification tasks, nn.Softmax is usually applied. For multi-label classification tasks, nn.Sigmoid is usually applied. Users can also pass their own customized activation_fn as long as combining this function with the network yields the prediction probabilities of the input.

Raises

TypeError – Raised for any argument type problem.

Supported Platforms:

Ascend GPU

Examples

>>> from mindspore.explainer import ImageClassificationRunner
>>> from mindspore.explainer.explanation import GuidedBackprop, Gradient
>>> from mindspore.explainer.benchmark import Faithfulness
>>> from mindspore.nn import Softmax
>>> from mindspore.train.serialization import load_checkpoint, load_param_into_net
>>> from mindspore import context
>>>
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> # The detail of AlexNet is shown in model_zoo.official.cv.alexnet.src.alexnet.py
>>> net = AlexNet(10)
>>> # Load the checkpoint
>>> param_dict = load_checkpoint("/path/to/checkpoint")
>>> load_param_into_net(net, param_dict)
[]
>>>
>>> # Prepare the dataset for explaining and evaluation.
>>> # The detail of create_dataset_cifar10 method is shown in model_zoo.official.cv.alexnet.src.dataset.py
>>>
>>> dataset = create_dataset_cifar10("/path/to/cifar/dataset", 1)
>>> labels = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
>>>
>>> activation_fn = Softmax()
>>> gbp = GuidedBackprop(net)
>>> gradient = Gradient(net)
>>> explainers = [gbp, gradient]
>>> faithfulness = Faithfulness(len(labels), activation_fn, "NaiveFaithfulness")
>>> benchmarkers = [faithfulness]
>>>
>>> runner = ImageClassificationRunner("./summary_dir", (dataset, labels), net, activation_fn)
>>> runner.register_saliency(explainers=explainers, benchmarkers=benchmarkers)
>>> runner.run()
register_hierarchical_occlusion()[source]

Register hierarchical occlusion instances.

Warning

This function cannot be invoked more than once on each runner.

Note

Input images are required to be in 3-channel format, and the length of the shorter side must be equal to or greater than 56 pixels.

Raises
  • ValueError – Raised for any data or settings’ value problem.

  • RuntimeError – Raised if the function has already been called.
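
A minimal usage sketch (not from the original documentation) is given below. It reuses net, dataset, labels and activation_fn from the class-level example above, and assumes the dataset images meet the 3-channel, short-side >= 56 pixels requirement noted above.

>>> runner = ImageClassificationRunner("./summary_dir", (dataset, labels), net, activation_fn)
>>> runner.register_saliency(explainers=[Gradient(net)])
>>> runner.register_hierarchical_occlusion()
>>> runner.run()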

register_saliency(explainers, benchmarkers=None)[source]

Register saliency explanation instances.

Warning

This function cannot be invoked more than once on each runner.

Parameters
  • explainers (list[Attribution]) – The explainers to be evaluated, see mindspore.explainer.explanation. All explainers’ classes must be distinct and their networks must be the exact same instance as the runner’s network.

  • benchmarkers (list[AttributionMetric], optional) – The benchmarkers for scoring the explainers, see mindspore.explainer.benchmark. All benchmarkers’ classes must be distinct. Default: None.

Raises
  • ValueError – Raised for any data or settings’ value problem.

  • TypeError – Raised for any data or settings’ type problem.

  • RuntimeError – Raised if this function has been invoked before.

register_uncertainty()[source]

Register an uncertainty instance to compute the epistemic uncertainty based on Bayes’ theorem.

Warning

This function cannot be invoked more than once on each runner.

Note

Please refer to the documentation of mindspore.nn.probability.toolbox.uncertainty_evaluation for the details. The actual output is the standard deviation of the classification predictions and the corresponding 95% confidence intervals. Users have to invoke register_saliency() as well, because the uncertainty results are shown on the saliency map page in MindInsight.

Raises

RuntimeError – Raised if the function has already been called.
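
As with register_hierarchical_occlusion(), a minimal usage sketch (not from the original documentation) is shown below; register_saliency() is invoked as well because the uncertainty results are shown on the saliency map page.

>>> runner = ImageClassificationRunner("./summary_dir", (dataset, labels), net, activation_fn)
>>> runner.register_saliency(explainers=[Gradient(net)])
>>> runner.register_uncertainty()
>>> runner.run()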

run()[source]

Run the explain job and save the result as a summary in summary_dir.

Note

Users should call register_saliency() once before running this function.

Raises
  • ValueError – Raised for any data or settings’ value problem.

  • TypeError – Raised for any data or settings’ type problem.

  • RuntimeError – Raised for any runtime problem.

mindspore.explainer.explanation

Predefined Attribution explainers.

class mindspore.explainer.explanation.Deconvolution(network)[source]

Deconvolution explanation.

The Deconvolution method is a modified version of the Gradient method. For the original ReLU operations in the network to be explained, Deconvolution modifies the propagation rule from directly backpropagating gradients to backpropagating positive gradients.

Note

The passed network will be set to eval mode through network.set_grad(False) and network.set_train(False). If you want to train the network afterwards, please reset it back to training mode through the opposite operations. To use Deconvolution, the ReLU operations in the network must be implemented as mindspore.nn.Cell objects (e.g. mindspore.nn.ReLU) rather than mindspore.ops.operations.ReLU. Otherwise, the results will not be correct.

Parameters

network (Cell) – The black-box model to be explained.

Inputs:
  • inputs (Tensor) - The input data to be explained, a 4D tensor of shape \((N, C, H, W)\).

  • targets (Tensor, int) - The label of interest. It should be a 1D or 0D tensor, or an integer. If it is a 1D tensor, its length should be the same as inputs.

Outputs:

Tensor, a 4D tensor of shape \((N, 1, H, W)\), saliency maps.

Raises
  • TypeError – Raised for any argument or input type problem.

  • ValueError – Raised for any input value problem.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import Deconvolution
>>> from mindspore import context
>>>
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> # The detail of LeNet5 is shown in model_zoo.official.cv.lenet.src.lenet.py
>>> net = LeNet5(10, num_channel=3)
>>> deconvolution = Deconvolution(net)
>>> # parse data and the target label to be explained and get the saliency map
>>> inputs = ms.Tensor(np.random.rand(1, 3, 32, 32), ms.float32)
>>> label = 5
>>> saliency = deconvolution(inputs, label)
>>> print(saliency.shape)
(1, 1, 32, 32)
class mindspore.explainer.explanation.GradCAM(network, layer='')[source]

Provides GradCAM explanation method.

GradCAM generates saliency map at intermediate layer. The attribution is obtained as:

\[\alpha_k^c = \frac{1}{Z} \sum_i \sum_j \frac{\partial{y^c}}{\partial{A_{i,j}^k}}\]

\[attribution = ReLU(\sum_k \alpha_k^c A^k)\]

For more details, please refer to the original paper: GradCAM.
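
To make the formula concrete, a small NumPy sketch of the weighted combination is shown below; acts and grads are hypothetical stand-ins for the activation maps \(A^k\) and the gradients \(\partial{y^c}/\partial{A^k}\) that GradCAM obtains from the chosen layer.

>>> import numpy as np
>>> acts = np.random.rand(8, 14, 14)    # hypothetical activations A^k, shape (K, H, W)
>>> grads = np.random.rand(8, 14, 14)   # hypothetical gradients dy^c/dA^k, shape (K, H, W)
>>> alpha = grads.mean(axis=(1, 2))     # alpha_k^c: global average pooling of the gradients
>>> cam = np.maximum((alpha[:, None, None] * acts).sum(axis=0), 0)  # ReLU(sum_k alpha_k^c A^k)
>>> print(cam.shape)
(14, 14)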

Note

The passed network will be set to eval mode through network.set_grad(False) and network.set_train(False). If you want to train the network afterwards, please reset it back to training mode through the opposite operations.

Parameters
  • network (Cell) – The black-box model to be explained.

  • layer (str, optional) – The layer name to generate the explanation, usually chosen as the last convolutional layer for better practice. If it is ‘’, the explanation will be generated at the input layer. Default: ‘’.

Inputs:
  • inputs (Tensor) - The input data to be explained, a 4D tensor of shape \((N, C, H, W)\).

  • targets (Tensor, int) - The label of interest. It should be a 1D or 0D tensor, or an integer. If it is a 1D tensor, its length should be the same as inputs.

Outputs:

Tensor, a 4D tensor of shape \((N, 1, H, W)\), saliency maps.

Raises
  • TypeError – Raised for any argument or input type problem.

  • ValueError – Raised for any input value problem.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import GradCAM
>>> from mindspore import context
>>>
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> # The detail of LeNet5 is shown in model_zoo.official.cv.lenet.src.lenet.py
>>> net = LeNet5(10, num_channel=3)
>>> # specify a layer name to generate explanation, usually the layer can be set as the last conv layer.
>>> layer_name = 'conv2'
>>> # init GradCAM with a trained network and specify the layer to obtain attribution
>>> gradcam = GradCAM(net, layer=layer_name)
>>> inputs = ms.Tensor(np.random.rand(1, 3, 32, 32), ms.float32)
>>> label = 5
>>> saliency = gradcam(inputs, label)
>>> print(saliency.shape)
(1, 1, 32, 32)
class mindspore.explainer.explanation.Gradient(network)[source]

Provides Gradient explanation method.

Gradient is the simplest attribution method, which uses the naive gradients of the outputs w.r.t. the inputs as the explanation.

\[attribution = \frac{\partial{y}}{\partial{x}}\]

Note

The passed network will be set to eval mode through network.set_grad(False) and network.set_train(False). If you want to train the network afterwards, please reset it back to training mode through the opposite operations.

Parameters

network (Cell) – The black-box model to be explained.

Inputs:
  • inputs (Tensor) - The input data to be explained, a 4D tensor of shape \((N, C, H, W)\).

  • targets (Tensor, int) - The label of interest. It should be a 1D or 0D tensor, or an integer. If it is a 1D tensor, its length should be the same as inputs.

Outputs:

Tensor, a 4D tensor of shape \((N, 1, H, W)\), saliency maps.

Raises
  • TypeError – Raised for any argument type problem.

  • ValueError – Raised for any input value problem.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import Gradient
>>> from mindspore import context
>>>
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> # The detail of LeNet5 is shown in model_zoo.official.cv.lenet.src.lenet.py
>>> net = LeNet5(10, num_channel=3)
>>> gradient = Gradient(net)
>>> inputs = ms.Tensor(np.random.rand(1, 3, 32, 32), ms.float32)
>>> label = 5
>>> saliency = gradient(inputs, label)
>>> print(saliency.shape)
(1, 1, 32, 32)
class mindspore.explainer.explanation.GuidedBackprop(network)[source]

Guided-Backpropagation explanation.

The Guided-Backpropagation method is an extension of the Gradient method. On top of the original ReLU operations in the network to be explained, Guided-Backpropagation introduces another ReLU operation to filter out the negative gradients during backpropagation.

Note

The passed network will be set to eval mode through network.set_grad(False) and network.set_train(False). If you want to train the network afterwards, please reset it back to training mode through the opposite operations. To use GuidedBackprop, the ReLU operations in the network must be implemented as mindspore.nn.Cell objects (e.g. mindspore.nn.ReLU) rather than mindspore.ops.operations.ReLU. Otherwise, the results will not be correct.

Parameters

network (Cell) – The black-box model to be explained.

Inputs:
  • inputs (Tensor) - The input data to be explained, a 4D tensor of shape \((N, C, H, W)\).

  • targets (Tensor, int) - The label of interest. It should be a 1D or 0D tensor, or an integer. If it is a 1D tensor, its length should be the same as inputs.

Outputs:

Tensor, a 4D tensor of shape \((N, 1, H, W)\), saliency maps.

Raises
  • TypeError – Raised for any argument or input type problem.

  • ValueError – Raised for any input value problem.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import GuidedBackprop
>>> from mindspore import context
>>>
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> # The detail of LeNet5 is shown in model_zoo.official.cv.lenet.src.lenet.py
>>> net = LeNet5(10, num_channel=3)
>>> gbp = GuidedBackprop(net)
>>> # feed data and the target label to be explained and get the saliency map
>>> inputs = ms.Tensor(np.random.rand(1, 3, 32, 32), ms.float32)
>>> label = 5
>>> saliency = gbp(inputs, label)
>>> print(saliency.shape)
(1, 1, 32, 32)
class mindspore.explainer.explanation.Occlusion(network, activation_fn, perturbation_per_eval=32)[source]

Occlusion uses a sliding window to replace the pixels with a reference value (e.g. a constant value), and computes the output difference w.r.t. the original output. The output difference caused by the perturbed pixels is assigned as the feature importance of those pixels. For pixels involved in multiple sliding windows, the feature importance is the average of the differences from those sliding windows.

For more details, please refer to the original paper via: https://arxiv.org/abs/1311.2901.
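
A toy NumPy sketch of the perturbation step described above (an illustration, not the library's implementation) follows; the window size, stride and zero reference value are illustrative choices.

>>> import numpy as np
>>> image = np.random.rand(1, 3, 8, 8)       # a tiny (N, C, H, W) input
>>> window, stride, reference = 4, 2, 0.0    # illustrative sliding-window settings
>>> perturbed = []
>>> for i in range(0, 8 - window + 1, stride):
...     for j in range(0, 8 - window + 1, stride):
...         occluded = image.copy()
...         occluded[:, :, i:i + window, j:j + window] = reference  # replace the window with the reference value
...         perturbed.append(occluded)
>>> print(len(perturbed))
9

Each perturbed image would then be fed to the network, and the resulting output difference attributed to the occluded pixels, averaging over overlapping windows.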

Parameters
  • network (Cell) – The black-box model to be explained.

  • activation_fn (Cell) – The activation layer that transforms logits to prediction probabilities. For single label classification tasks, nn.Softmax is usually applied. For multi-label classification tasks, nn.Sigmoid is usually applied. Users can also pass their own customized activation_fn as long as combining this function with the network yields the prediction probabilities of the input.

  • perturbation_per_eval (int, optional) – Number of perturbations processed in each network inference when inferring the perturbed samples. Within the memory capacity, the larger this number is, the faster the explanation is usually obtained. Default: 32.

Inputs:
  • inputs (Tensor) - The input data to be explained, a 4D tensor of shape \((N, C, H, W)\).

  • targets (Tensor, int) - The label of interest. It should be a 1D or 0D tensor, or an integer. If it is a 1D tensor, its length should be the same as inputs.

Outputs:

Tensor, a 4D tensor of shape \((N, 1, H, W)\), saliency maps.

Raises
  • TypeError – Raised for any argument or input type problem.

  • ValueError – Raised for any input value problem.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import Occlusion
>>> from mindspore import context
>>>
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> # The detail of LeNet5 is shown in model_zoo.official.cv.lenet.src.lenet.py
>>> net = LeNet5(10, num_channel=3)
>>> # initialize Occlusion explainer with the pretrained model and activation function
>>> activation_fn = ms.nn.Softmax() # softmax layer is applied to transform logits to probabilities
>>> occlusion = Occlusion(net, activation_fn=activation_fn)
>>> input_x = ms.Tensor(np.random.rand(1, 3, 32, 32), ms.float32)
>>> label = ms.Tensor([1], ms.int32)
>>> saliency = occlusion(input_x, label)
>>> print(saliency.shape)
(1, 1, 32, 32)
class mindspore.explainer.explanation.RISE(network, activation_fn, perturbation_per_eval=32)[source]

RISE: Randomized Input Sampling for Explanation of Black-box Model.

RISE is a perturbation-based method that generates attribution maps by sampling on multiple random binary masks. The original image is randomly masked, and then fed into the black-box model to get predictions. The final attribution map is the weighted sum of these random masks, with the weights being the corresponding output on the node of interest:

\[attribution = \sum_{i}f_c(I\odot M_i) M_i\]

For more details, please refer to the original paper via: RISE.
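
The weighted sum in the formula can be sketched with NumPy as below; masks and scores are random stand-ins for the sampled binary masks \(M_i\) and the model outputs \(f_c(I\odot M_i)\).

>>> import numpy as np
>>> masks = (np.random.rand(100, 32, 32) > 0.5).astype(np.float32)  # hypothetical binary masks M_i
>>> scores = np.random.rand(100)                                    # hypothetical outputs f_c(I * M_i)
>>> attribution = (scores[:, None, None] * masks).sum(axis=0)       # sum_i f_c(I * M_i) * M_i
>>> print(attribution.shape)
(32, 32)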

Parameters
  • network (Cell) – The black-box model to be explained.

  • activation_fn (Cell) – The activation layer that transforms logits to prediction probabilities. For single label classification tasks, nn.Softmax is usually applied. For multi-label classification tasks, nn.Sigmoid is usually applied. Users can also pass their own customized activation_fn as long as combining this function with the network yields the prediction probabilities of the input.

  • perturbation_per_eval (int, optional) – Number of perturbations processed in each network inference when inferring the perturbed samples. Within the memory capacity, the larger this number is, the faster the explanation is usually obtained. Default: 32.

Inputs:
  • inputs (Tensor) - The input data to be explained, a 4D tensor of shape \((N, C, H, W)\).

  • targets (Tensor, int) - The labels of interest to be explained. When targets is an integer, all of the inputs will generate attribution maps w.r.t. this integer. When targets is a tensor, it should be of shape \((N, l)\) (l being the number of labels for each sample), \((N,)\) or \(()\).

Outputs:

Tensor, a 4D tensor of shape \((N, l, H, W)\) when targets is a tensor of shape \((N, l)\), otherwise a 4D tensor of shape \((N, 1, H, W)\), saliency maps.

Raises
  • TypeError – Raised for any argument or input type problem.

  • ValueError – Raised for any input value problem.

Supported Platforms:

Ascend GPU

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import RISE
>>> from mindspore import context
>>>
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> # The detail of LeNet5 is shown in model_zoo.official.cv.lenet.src.lenet.py
>>> net = LeNet5(10, num_channel=3)
>>> # initialize RISE explainer with the pretrained model and activation function
>>> activation_fn = ms.nn.Softmax() # softmax layer is applied to transform logits to probabilities
>>> rise = RISE(net, activation_fn=activation_fn)
>>> # given an instance of RISE, saliency maps can be generated
>>> inputs = ms.Tensor(np.random.rand(2, 3, 32, 32), ms.float32)
>>> # when `targets` is an integer
>>> targets = 5
>>> saliency = rise(inputs, targets)
>>> print(saliency.shape)
(2, 1, 32, 32)
>>> # `targets` can also be a 2D tensor
>>> targets = ms.Tensor([[5], [1]], ms.int32)
>>> saliency = rise(inputs, targets)
>>> print(saliency.shape)
(2, 1, 32, 32)

mindspore.explainer.benchmark

Predefined XAI metrics.

class mindspore.explainer.benchmark.ClassSensitivity[source]

Class sensitivity metric used to evaluate attribution-based explanations.

Reasonable attribution-based explainers are expected to generate distinct saliency maps for different labels, especially for the labels with the highest and the lowest confidence. ClassSensitivity evaluates the explainer by computing the correlation between the saliency maps of the highest-confidence and lowest-confidence labels. An explainer with better class sensitivity will receive a lower correlation score. To make the evaluation results intuitive, the returned score is the correlation negated and then normalized.
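
The computation can be sketched with NumPy as below; the two saliency maps are random stand-ins, and mapping the correlation to [0, 1] via (1 - corr) / 2 is an assumed convention used only for this illustration.

>>> import numpy as np
>>> saliency_max = np.random.rand(1, 1, 32, 32)   # hypothetical saliency map of the highest-confidence label
>>> saliency_min = np.random.rand(1, 1, 32, 32)   # hypothetical saliency map of the lowest-confidence label
>>> corr = np.corrcoef(saliency_max.ravel(), saliency_min.ravel())[0, 1]
>>> score = (1 - corr) / 2                        # assumed negate-and-normalize convention
>>> print(0.0 <= score <= 1.0)
True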

Supported Platforms:

Ascend GPU

evaluate(explainer, inputs)[source]

Evaluate class sensitivity on a single data sample.

Parameters
  • explainer (Explanation) – The explainer to be evaluated, see mindspore.explainer.explanation.

  • inputs (Tensor) – A data sample, a 4D tensor of shape \((N, C, H, W)\).

Returns

numpy.ndarray, 1D array of shape \((N,)\), result of class sensitivity evaluated on explainer.

Raises

TypeError – Raised for any argument type problem.

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.benchmark import ClassSensitivity
>>> from mindspore.explainer.explanation import Gradient
>>> from mindspore import context
>>>
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> # The detail of LeNet5 is shown in model_zoo.official.cv.lenet.src.lenet.py
>>> net = LeNet5(10, num_channel=3)
>>> # prepare your explainer to be evaluated, e.g., Gradient.
>>> gradient = Gradient(net)
>>> input_x = ms.Tensor(np.random.rand(1, 3, 32, 32), ms.float32)
>>> class_sensitivity = ClassSensitivity()
>>> res = class_sensitivity.evaluate(gradient, input_x)
>>> print(res.shape)
(1,)
class mindspore.explainer.benchmark.Faithfulness(num_labels, activation_fn, metric='NaiveFaithfulness')[source]

Provides evaluation on faithfulness on XAI explanations.

Three specific metrics to obtain quantified results are supported: “NaiveFaithfulness”, “DeletionAUC”, and “InsertionAUC”.

For metric “NaiveFaithfulness”, a series of perturbed images are created by modifying pixels on the original image. Then the perturbed images are fed to the model and a series of output probability drops is obtained. The faithfulness is then quantified as the correlation between the probability drops and the saliency map values on the same pixels (the correlation is further normalized to the range of [0, 1]).

For metric “DeletionAUC”, a series of perturbed images are created by accumulatively modifying pixels of the original image to a base value (e.g. a constant). The perturbation starts from pixels with high saliency values to pixels with low saliency values. Feeding the perturbed images into the model in order, an output probability drop curve can be obtained. “DeletionAUC” is then obtained as the area under this probability drop curve.

For metric “InsertionAUC”, a series of perturbed images are created by accumulatively inserting pixels of the original image to a reference image (e.g. a black image). The insertion starts from pixels with high saliency values to pixels with low saliency values. Feeding the perturbed images into the model in order, an output probability increase curve can be obtained. “InsertionAUC” is then obtained as the area under this curve.

For all three metrics, a higher value indicates better faithfulness.
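
To illustrate the AUC-style metrics, the sketch below computes the area under a hypothetical probability curve obtained from progressively perturbed images; the curve values are random stand-ins, and normalizing the x-axis to [0, 1] is an assumption made only for this illustration.

>>> import numpy as np
>>> probs = np.sort(np.random.rand(50))[::-1]   # hypothetical decreasing probability curve, e.g. for "DeletionAUC"
>>> x = np.linspace(0, 1, probs.size)           # perturbation ratio, normalized to [0, 1]
>>> auc = np.trapz(probs, x)                    # area under the probability curve
>>> print(0.0 <= auc <= 1.0)
True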

Parameters
  • num_labels (int) – Number of labels.

  • activation_fn (Cell) – The activation layer that transforms logits to prediction probabilities. For single label classification tasks, nn.Softmax is usually applied. For multi-label classification tasks, nn.Sigmoid is usually applied. Users can also pass their own customized activation_fn as long as combining this function with the network yields the prediction probabilities of the input.

  • metric (str, optional) – The specific metric to quantify faithfulness. Options: “DeletionAUC”, “InsertionAUC”, “NaiveFaithfulness”. Default: ‘NaiveFaithfulness’.

Raises

TypeError – Raised for any argument type problem.

Supported Platforms:

Ascend GPU

evaluate(explainer, inputs, targets, saliency=None)[source]

Evaluate faithfulness on a single data sample.

Note

Currently only single sample (\(N=1\)) at each call is supported.

Parameters
  • explainer (Explanation) – The explainer to be evaluated, see mindspore.explainer.explanation.

  • inputs (Tensor) – A data sample, a 4D tensor of shape \((N, C, H, W)\).

  • targets (Tensor, int) – The label of interest. It should be a 1D or 0D tensor, or an integer. If targets is a 1D tensor, its length should be the same as inputs.

  • saliency (Tensor, optional) – The saliency map to be evaluated, a 4D tensor of shape \((N, 1, H, W)\). If it is None, the passed explainer will generate the saliency map with inputs and targets and continue the evaluation. Default: None.

Returns

numpy.ndarray, 1D array of shape \((N,)\), result of faithfulness evaluated on explainer.

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import nn
>>> from mindspore.explainer.benchmark import Faithfulness
>>> from mindspore.explainer.explanation import Gradient
>>> from mindspore import context
>>>
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> # init a `Faithfulness` object
>>> num_labels = 10
>>> metric = "InsertionAUC"
>>> activation_fn = nn.Softmax()
>>> faithfulness = Faithfulness(num_labels, activation_fn, metric)
>>> # The detail of LeNet5 is shown in model_zoo.official.cv.lenet.src.lenet.py
>>> net = LeNet5(10, num_channel=3)
>>> gradient = Gradient(net)
>>> inputs = ms.Tensor(np.random.rand(1, 3, 32, 32), ms.float32)
>>> targets = 5
>>> # usage 1: input the explainer and the data to be explained,
>>> # faithfulness is a Faithfulness instance
>>> res = faithfulness.evaluate(gradient, inputs, targets)
>>> print(res.shape)
(1,)
>>> # usage 2: input the generated saliency map
>>> saliency = gradient(inputs, targets)
>>> res = faithfulness.evaluate(gradient, inputs, targets, saliency)
>>> print(res.shape)
(1,)
class mindspore.explainer.benchmark.Localization(num_labels, metric='PointingGame')[source]

Provides evaluation on the localization capability of XAI methods.

Two specific metrics to obtain quantified results are supported: “PointingGame” and “IoSR” (Intersection over Salient Region).

For metric “PointingGame”, the localization capability is calculated as the ratio of data samples whose saliency map maximum lies within the bounding box. Specifically, for a single datum, given the saliency map and its bounding box, if the max point of the saliency map lies within the bounding box, the evaluation result is 1, otherwise 0.

For metric “IoSR” (Intersection over Salient Region), the localization capability is calculated as the intersection of the bounding box and the salient region over the area of the salient region. The salient region is defined as the region whose value exceeds \(\theta * \max{saliency}\).
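
Both metrics can be sketched in a few lines of NumPy; saliency and bbox_mask are hypothetical arrays, and the threshold \(\theta = 0.5\) is an illustrative choice.

>>> import numpy as np
>>> saliency = np.random.rand(32, 32)                             # hypothetical saliency map
>>> bbox_mask = np.zeros((32, 32))
>>> bbox_mask[10:20, 10:20] = 1                                   # hypothetical ground-truth box
>>> # PointingGame: 1 if the saliency maximum falls inside the box, otherwise 0
>>> hit = bbox_mask[np.unravel_index(saliency.argmax(), saliency.shape)]
>>> # IoSR: intersection of the box and the salient region over the salient-region area (theta = 0.5 assumed)
>>> salient_region = saliency >= 0.5 * saliency.max()
>>> iosr = (salient_region & (bbox_mask > 0)).sum() / salient_region.sum()
>>> print(hit in (0.0, 1.0), 0.0 <= iosr <= 1.0)
True True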

Parameters
  • num_labels (int) – Number of classes in the dataset.

  • metric (str, optional) – Specific metric to calculate localization capability. Options: “PointingGame”, “IoSR”. Default: “PointingGame”.

Raises

TypeError – Raised for any argument type problem.

Supported Platforms:

Ascend GPU

evaluate(explainer, inputs, targets, saliency=None, mask=None)[source]

Evaluate localization on a single data sample.

Note

Currently only single sample (\(N=1\)) at each call is supported.

Parameters
  • explainer (Explanation) – The explainer to be evaluated, see mindspore.explainer.explanation.

  • inputs (Tensor) – A data sample, a 4D tensor of shape \((N, C, H, W)\).

  • targets (Tensor, int) – The label of interest. It should be a 1D or 0D tensor, or an integer. If targets is a 1D tensor, its length should be the same as inputs.

  • saliency (Tensor, optional) – The saliency map to be evaluated, a 4D tensor of shape \((N, 1, H, W)\). If it is None, the passed explainer will generate the saliency map with inputs and targets and continue the evaluation. Default: None.

  • mask (Tensor, numpy.ndarray) – Ground truth bounding box/masks for the inputs w.r.t targets, a 4D tensor or numpy.ndarray of shape \((N, 1, H, W)\).

Returns

numpy.ndarray, 1D array of shape \((N,)\), result of localization evaluated on explainer.

Raises

ValueError – Raised for any argument value problem.

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import Gradient
>>> from mindspore.explainer.benchmark import Localization
>>> from mindspore import context
>>>
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> num_labels = 10
>>> localization = Localization(num_labels, "PointingGame")
>>>
>>> # The detail of LeNet5 is shown in model_zoo.official.cv.lenet.src.lenet.py
>>> net = LeNet5(10, num_channel=3)
>>> gradient = Gradient(net)
>>> inputs = ms.Tensor(np.random.rand(1, 3, 32, 32), ms.float32)
>>> masks = np.zeros([1, 1, 32, 32])
>>> masks[:, :, 10: 20, 10: 20] = 1
>>> targets = 5
>>> # usage 1: input the explainer and the data to be explained,
>>> # localization is a Localization instance
>>> res = localization.evaluate(gradient, inputs, targets, mask=masks)
>>> print(res.shape)
(1,)
>>> # usage 2: input the generated saliency map
>>> saliency = gradient(inputs, targets)
>>> res = localization.evaluate(gradient, inputs, targets, saliency, mask=masks)
>>> print(res.shape)
(1,)
class mindspore.explainer.benchmark.Robustness(num_labels, activation_fn)[source]

Robustness perturbs the inputs by adding random noise and chooses the maximum sensitivity among the perturbations as the evaluation score.
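
A rough sketch of the idea (not the library's exact computation) is shown below: the sensitivity of a perturbation is taken here as the norm of the saliency difference under noise divided by the norm of the original saliency, and the maximum over several noise draws is reported; explain is a hypothetical callable standing in for an explainer.

>>> import numpy as np
>>> def explain(x):                       # hypothetical explainer returning a saliency map
...     return x.mean(axis=1, keepdims=True)
>>> x = np.random.rand(1, 3, 32, 32)
>>> saliency = explain(x)
>>> sensitivities = []
>>> for _ in range(5):                    # several random-noise perturbations
...     noisy = x + np.random.normal(scale=0.1, size=x.shape)
...     diff = explain(noisy) - saliency
...     sensitivities.append(np.linalg.norm(diff) / np.linalg.norm(saliency))
>>> score = max(sensitivities)            # maximum sensitivity is taken as the evaluation score
>>> print(score >= 0)
True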

Parameters
  • num_labels (int) – Number of classes in the dataset.

  • activation_fn (Cell) – The activation layer that transforms logits to prediction probabilities. For single label classification tasks, nn.Softmax is usually applied. For multi-label classification tasks, nn.Sigmoid is usually applied. Users can also pass their own customized activation_fn as long as combining this function with the network yields the prediction probabilities of the input.

Raises

TypeError – Raised for any argument type problem.

Supported Platforms:

Ascend GPU

evaluate(explainer, inputs, targets, saliency=None)[source]

Evaluate robustness on a single data sample.

Note

Currently only single sample (\(N=1\)) at each call is supported.

Parameters
  • explainer (Explanation) – The explainer to be evaluated, see mindspore.explainer.explanation.

  • inputs (Tensor) – A data sample, a 4D tensor of shape \((N, C, H, W)\).

  • targets (Tensor, int) – The label of interest. It should be a 1D or 0D tensor, or an integer. If targets is a 1D tensor, its length should be the same as inputs.

  • saliency (Tensor, optional) – The saliency map to be evaluated, a 4D tensor of shape \((N, 1, H, W)\). If it is None, the passed explainer will generate the saliency map with inputs and targets and continue the evaluation. Default: None.

Returns

numpy.ndarray, 1D array of shape \((N,)\), result of robustness evaluated on explainer.

Raises

ValueError – Raised if batch_size is larger than 1.

Examples

>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore import nn
>>> from mindspore.explainer.explanation import Gradient
>>> from mindspore.explainer.benchmark import Robustness
>>> from mindspore import context
>>>
>>> context.set_context(mode=context.PYNATIVE_MODE)
>>> # Initialize a Robustness benchmarker passing num_labels of the dataset.
>>> num_labels = 10
>>> activation_fn = nn.Softmax()
>>> robustness = Robustness(num_labels, activation_fn)
>>>
>>> # The detail of LeNet5 is shown in model_zoo.official.cv.lenet.src.lenet.py
>>> net = LeNet5(10, num_channel=3)
>>> # prepare your explainer to be evaluated, e.g., Gradient.
>>> gradient = Gradient(net)
>>> input_x = ms.Tensor(np.random.rand(1, 3, 32, 32), ms.float32)
>>> target_label = ms.Tensor([0], ms.int32)
>>> # robustness is a Robustness instance
>>> res = robustness.evaluate(gradient, input_x, target_label)
>>> print(res.shape)
(1,)