mindspore.explainer
Provides explanation runner high-level APIs.
- class mindspore.explainer.ImageClassificationRunner(summary_dir, data, network, activation_fn)[source]
A high-level API for users to generate and store results of the explanation methods and the evaluation methods.
Update in 2020.11: Adjusted the storage structure and format of the data. Summary files generated by previous versions are deprecated and are not supported by the current version of MindInsight.
- Parameters
summary_dir (str) – The directory path to save the summary files which store the generated results.
data (tuple[Dataset, list[str]]) – Tuple of dataset and the corresponding class label list. The dataset should provide [images], [images, labels] or [images, labels, bboxes] as columns. The label list must have exactly the same length and order as the network outputs.
network (Cell) – The network(with logit outputs) to be explained.
activation_fn (Cell) – The activation layer that transforms logits to prediction probabilities. For single-label classification tasks, nn.Softmax is usually applied; for multi-label classification tasks, nn.Sigmoid is usually applied. Users can also pass their own customized activation_fn as long as combining it with the network yields prediction probabilities for the input.
Examples
>>> from mindspore.explainer import ImageClassificationRunner
>>> from mindspore.explainer.explanation import GuidedBackprop, Gradient
>>> from mindspore.explainer.benchmark import Faithfulness
>>> from mindspore.nn import Softmax
>>> from mindspore.train.serialization import load_checkpoint, load_param_into_net
>>> # Prepare the dataset for explaining and evaluation, e.g., Cifar10
>>> dataset = get_dataset('/path/to/Cifar10_dataset')
>>> labels = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
>>> # load checkpoint to a network, e.g. checkpoint of resnet50 trained on Cifar10
>>> param_dict = load_checkpoint("checkpoint.ckpt")
>>> net = resnet50(len(labels))
>>> activation_fn = Softmax()
>>> load_param_into_net(net, param_dict)
>>> gbp = GuidedBackprop(net)
>>> gradient = Gradient(net)
>>> explainers = [gbp, gradient]
>>> faithfulness = Faithfulness(len(labels), activation_fn, "NaiveFaithfulness")
>>> benchmarkers = [faithfulness]
>>> runner = ImageClassificationRunner("./summary_dir", (dataset, labels), net, activation_fn)
>>> runner.register_saliency(explainers=explainers, benchmarkers=benchmarkers)
>>> runner.run()
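The get_dataset call above is a placeholder. As a hedged sketch (the generator, image size and column names below are purely illustrative, not requirements of the API), a dataset providing the expected [images, labels] columns can be assembled with mindspore.dataset.GeneratorDataset:
>>> import numpy as np
>>> import mindspore.dataset as ds
>>> def image_label_generator():  # hypothetical in-memory data source
...     for _ in range(8):
...         yield np.random.rand(3, 32, 32).astype(np.float32), np.int32(0)
>>> dataset = ds.GeneratorDataset(image_label_generator, column_names=["image", "label"])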
- register_saliency(explainers, benchmarkers=None)[source]
Register saliency explanation instances.
Note
This function can be invoked only once on each runner.
- Parameters
explainers (list[Attribution]) – The explainers to be evaluated, see mindspore.explainer.explanation. All explainers must be of distinct classes, and their networks must be the exact same instance as the runner’s network.
benchmarkers (list[AttributionMetric], optional) – The benchmarkers for scoring the explainers, see mindspore.explainer.benchmark. All benchmarkers must be of distinct classes.
- Raises
ValueError – Raised for any problem with the value of data or settings.
TypeError – Raised for any problem with the type of data or settings.
RuntimeError – Raised if this function was invoked before.
- run()[source]
Run the explain job and save the result as a summary in summary_dir.
Note
Users should call register_saliency() once before running this function.
- Raises
ValueError – Raised for any problem with the value of data or settings.
TypeError – Raised for any problem with the type of data or settings.
RuntimeError – Raised for any runtime problem.
mindspore.explainer.explanation
Predefined Attribution explainers.
- class mindspore.explainer.explanation.Deconvolution(network)[source]
Deconvolution explanation.
Deconvolution method is a modified version of the Gradient method. For the original ReLU operations in the network to be explained, Deconvolution modifies the propagation rule from directly backpropagating gradients to backpropagating only positive gradients.
Note
The parsed network will be set to eval mode through network.set_grad(False) and network.set_train(False). If you want to train the network afterwards, please reset it back to training mode through the opposite operations. To use Deconvolution, the ReLU operations in the network must be implemented as mindspore.nn.Cell objects rather than mindspore.ops.operations.ReLU. Otherwise, the results will not be correct.
- Parameters
network (Cell) – The black-box model to be explained.
- Inputs:
inputs (Tensor) - The input data to be explained, a 4D tensor of shape \((N, C, H, W)\).
targets (Tensor, int) - The label of interest. It should be a 1D or 0D tensor, or an integer. If it is a 1D tensor, its length should be the same as inputs.
- Outputs:
Tensor, a 4D tensor of shape \((N, 1, H, W)\).
Examples
>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import Deconvolution
>>> from mindspore.train.serialization import load_checkpoint, load_param_into_net
>>> # init Deconvolution with a trained network.
>>> net = resnet50(10)  # please refer to model_zoo
>>> param_dict = load_checkpoint("resnet50.ckpt")
>>> load_param_into_net(net, param_dict)
>>> deconvolution = Deconvolution(net)
>>> # parse data and the target label to be explained and get the saliency map
>>> inputs = ms.Tensor(np.random.rand(1, 3, 224, 224), ms.float32)
>>> label = 5
>>> saliency = deconvolution(inputs, label)
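Conceptually, the only change relative to plain gradients is the ReLU backward rule: the standard rule gates the incoming gradient by the sign of the forward input, whereas Deconvolution keeps only the positive part of the incoming gradient itself. A minimal NumPy sketch of the two rules (illustration only, not MindSpore code):
>>> import numpy as np
>>> x = np.array([-1.0, 2.0, 3.0])          # ReLU input saved from the forward pass
>>> grad_out = np.array([0.5, -0.4, 0.7])   # gradient flowing back into the ReLU
>>> grad_vanilla = grad_out * (x > 0)       # standard ReLU backward: gate by forward sign
>>> grad_deconv = np.maximum(grad_out, 0)   # Deconvolution: backpropagate positive gradients only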
- class mindspore.explainer.explanation.GradCAM(network, layer='')[source]
Provides GradCAM explanation method.
GradCAM generates saliency map at intermediate layer. The attribution is obtained as:
\[\alpha_k^c = \frac{1}{Z} \sum_i \sum_j \frac{\partial{y^c}}{\partial{A_{i,j}^k}}\]
\[attribution = ReLU(\sum_k \alpha_k^c A^k)\]
For more details, please refer to the original paper: GradCAM.
Note
The parsed network will be set to eval mode through network.set_grad(False) and network.set_train(False). If you want to train the network afterwards, please reset it back to training mode through the opposite operations.
- Parameters
network (Cell) – The black-box model to be explained.
layer (str, optional) – The layer name at which to generate the explanation, usually chosen as the last convolutional layer for better results. If it is ‘’, the explanation will be generated at the input layer. Default: ‘’.
- Inputs:
inputs (Tensor) - The input data to be explained, a 4D tensor of shape \((N, C, H, W)\).
targets (Tensor, int) - The label of interest. It should be a 1D or 0D tensor, or an integer. If it is a 1D tensor, its length should be the same as inputs.
- Outputs:
Tensor, a 4D tensor of shape \((N, 1, H, W)\).
Examples
>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import GradCAM
>>> from mindspore.train.serialization import load_checkpoint, load_param_into_net
>>> # load a trained network
>>> net = resnet50(10)
>>> param_dict = load_checkpoint("resnet50.ckpt")
>>> load_param_into_net(net, param_dict)
>>> # specify a layer name to generate explanation, usually the layer can be set as the last conv layer.
>>> layer_name = 'layer4'
>>> # init GradCAM with a trained network and specify the layer to obtain attribution
>>> gradcam = GradCAM(net, layer=layer_name)
>>> inputs = ms.Tensor(np.random.rand(1, 3, 224, 224), ms.float32)
>>> label = 5
>>> saliency = gradcam(inputs, label)
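The formula above reduces to a channel-wise weighted sum: the weights \(\alpha_k^c\) are the spatial averages of the gradients at the chosen layer, and the weighted activation maps are summed and passed through ReLU. A minimal NumPy sketch of that combination step (the activations and gradients are random stand-ins for the values collected at layer):
>>> import numpy as np
>>> activations = np.random.rand(64, 7, 7)   # A^k: feature maps of the chosen layer
>>> gradients = np.random.rand(64, 7, 7)     # dy^c/dA^k: gradients w.r.t. those maps
>>> alpha = gradients.mean(axis=(1, 2))      # alpha_k^c: spatially averaged gradients
>>> cam = np.maximum((alpha[:, None, None] * activations).sum(axis=0), 0)  # ReLU(sum_k alpha_k^c A^k)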
- class mindspore.explainer.explanation.Gradient(network)[source]
Provides Gradient explanation method.
Gradient is the simplest attribution method which uses the naive gradients of outputs w.r.t inputs as the explanation.
\[attribution = \frac{\partial{y}}{\partial{x}}\]
Note
The parsed network will be set to eval mode through network.set_grad(False) and network.set_train(False). If you want to train the network afterwards, please reset it back to training mode through the opposite operations.
- Parameters
network (Cell) – The black-box model to be explained.
- Inputs:
inputs (Tensor) - The input data to be explained, a 4D tensor of shape \((N, C, H, W)\).
targets (Tensor, int) - The label of interest. It should be a 1D or 0D tensor, or an integer. If it is a 1D tensor, its length should be the same as inputs.
- Outputs:
Tensor, a 4D tensor of shape \((N, 1, H, W)\).
Examples
>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import Gradient
>>> from mindspore.train.serialization import load_checkpoint, load_param_into_net
>>> # init Gradient with a trained network
>>> net = resnet50(10)  # please refer to model_zoo
>>> param_dict = load_checkpoint("resnet50.ckpt")
>>> load_param_into_net(net, param_dict)
>>> gradient = Gradient(net)
>>> inputs = ms.Tensor(np.random.rand(1, 3, 224, 224), ms.float32)
>>> label = 5
>>> saliency = gradient(inputs, label)
- class mindspore.explainer.explanation.GuidedBackprop(network)[source]
Guided-Backpropagation explanation.
Guided-Backpropagation method is an extension of Gradient method. On top of the original ReLU operation in the network to be explained, Guided-Backpropagation introduces another ReLU operation to filter out the negative gradients during backpropagation.
Note
The parsed network will be set to eval mode through network.set_grad(False) and network.set_train(False). If you want to train the network afterwards, please reset it back to training mode through the opposite operations. To use GuidedBackprop, the ReLU operations in the network must be implemented as mindspore.nn.Cell objects rather than mindspore.ops.operations.ReLU. Otherwise, the results will not be correct.
- Parameters
network (Cell) – The black-box model to be explained.
- Inputs:
inputs (Tensor) - The input data to be explained, a 4D tensor of shape \((N, C, H, W)\).
targets (Tensor, int) - The label of interest. It should be a 1D or 0D tensor, or an integer. If it is a 1D tensor, its length should be the same as inputs.
- Outputs:
Tensor, a 4D tensor of shape \((N, 1, H, W)\).
Examples
>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.train.serialization import load_checkpoint, load_param_into_net
>>> from mindspore.explainer.explanation import GuidedBackprop
>>> # init GuidedBackprop with a trained network.
>>> net = resnet50(10)  # please refer to model_zoo
>>> param_dict = load_checkpoint("resnet50.ckpt")
>>> load_param_into_net(net, param_dict)
>>> gbp = GuidedBackprop(net)
>>> # parse data and the target label to be explained and get the saliency map
>>> inputs = ms.Tensor(np.random.rand(1, 3, 224, 224), ms.float32)
>>> label = 5
>>> saliency = gbp(inputs, label)
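Guided Backpropagation combines both gates: the gradient is zeroed where the forward ReLU input was negative (as in plain gradients) and additionally where the incoming gradient itself is negative (as in Deconvolution). A minimal NumPy sketch (illustration only, not MindSpore code):
>>> import numpy as np
>>> x = np.array([-1.0, 2.0, 3.0])          # ReLU input saved from the forward pass
>>> grad_out = np.array([0.5, -0.4, 0.7])   # gradient flowing back into the ReLU
>>> grad_guided = grad_out * (x > 0) * (grad_out > 0)  # keep positive gradients at active units only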
- class mindspore.explainer.explanation.Occlusion(network, activation_fn, perturbation_per_eval=32)[source]
Occlusion uses a sliding window to replace pixels with a reference value (e.g. a constant value) and computes the output difference w.r.t. the original output. The output difference caused by the perturbed pixels is assigned as feature importance to those pixels. For pixels covered by multiple sliding windows, the feature importance is the average of the differences from those windows.
For more details, please refer to the original paper via: https://arxiv.org/abs/1311.2901.
- Parameters
network (Cell) – The black-box model to be explained.
activation_fn (Cell) – The activation layer that transforms logits to prediction probabilities. For single-label classification tasks, nn.Softmax is usually applied; for multi-label classification tasks, nn.Sigmoid is usually applied. Users can also pass their own customized activation_fn as long as combining it with the network yields prediction probabilities for the input.
perturbation_per_eval (int, optional) – Number of perturbed samples evaluated in each network inference pass. Within the memory capacity, the larger this number is, the faster the explanation is obtained. Default: 32.
- Inputs:
inputs (Tensor) - The input data to be explained, a 4D tensor of shape \((N, C, H, W)\).
targets (Tensor, int) - The label of interest. It should be a 1D or 0D tensor, or an integer. If it is a 1D tensor, its length should be the same as inputs.
- Outputs:
Tensor, a 4D tensor of shape \((N, 1, H, W)\).
Examples
>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import Occlusion
>>> from mindspore.train.serialization import load_checkpoint, load_param_into_net
>>> # prepare your network and load the trained checkpoint file, e.g., resnet50.
>>> network = resnet50(10)
>>> param_dict = load_checkpoint("resnet50.ckpt")
>>> load_param_into_net(network, param_dict)
>>> # initialize Occlusion explainer with the pretrained model and activation function
>>> activation_fn = ms.nn.Softmax()  # softmax layer is applied to transform logits to probabilities
>>> occlusion = Occlusion(network, activation_fn=activation_fn)
>>> input_x = ms.Tensor(np.random.rand(1, 3, 224, 224), ms.float32)
>>> label = ms.Tensor([1], ms.int32)
>>> saliency = occlusion(input_x, label)
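The importance of a pixel is the drop in the target score averaged over every sliding window that covers it. A minimal NumPy sketch of that idea, with a hypothetical score_fn standing in for the network plus activation_fn (window size, stride and reference value are assumptions):
>>> import numpy as np
>>> def score_fn(img):                      # hypothetical stand-in for P(target | img)
...     return img.mean()
>>> image = np.random.rand(8, 8)
>>> base_score, window, stride, ref = score_fn(image), 4, 2, 0.0
>>> importance, counts = np.zeros_like(image), np.zeros_like(image)
>>> for i in range(0, image.shape[0] - window + 1, stride):
...     for j in range(0, image.shape[1] - window + 1, stride):
...         perturbed = image.copy()
...         perturbed[i:i + window, j:j + window] = ref   # occlude one window with the reference value
...         importance[i:i + window, j:j + window] += base_score - score_fn(perturbed)
...         counts[i:i + window, j:j + window] += 1
>>> saliency = importance / np.maximum(counts, 1)         # average the differences over covering windows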
- class mindspore.explainer.explanation.RISE(network, activation_fn, perturbation_per_eval=32)[source]
RISE: Randomized Input Sampling for Explanation of Black-box Model.
RISE is a perturbation-based method that generates attribution maps by sampling on multiple random binary masks. The original image is randomly masked, and then fed into the black-box model to get predictions. The final attribution map is the weighted sum of these random masks, with the weights being the corresponding output on the node of interest:
\[attribution = \sum_{i} f_c(I \odot M_i) M_i\]
For more details, please refer to the original paper: RISE.
- Parameters
network (Cell) – The black-box model to be explained.
activation_fn (Cell) – The activation layer that transforms logits to prediction probabilities. For single-label classification tasks, nn.Softmax is usually applied; for multi-label classification tasks, nn.Sigmoid is usually applied. Users can also pass their own customized activation_fn as long as combining it with the network yields prediction probabilities for the input.
perturbation_per_eval (int, optional) – Number of perturbed samples evaluated in each network inference pass. Within the memory capacity, the larger this number is, the faster the explanation is obtained. Default: 32.
- Inputs:
inputs (Tensor) - The input data to be explained, a 4D tensor of shape \((N, C, H, W)\).
targets (Tensor, int) - The labels of interest to be explained. When targets is an integer, all of the inputs will generate attribution maps w.r.t. this integer. When targets is a tensor, it should be of shape \((N, l)\) (l being the number of labels for each sample) or \((N,)\).
- Outputs:
Tensor, a 4D tensor of shape \((N, l, H, W)\), where \(l\) matches the number of labels per sample given by targets (1 when targets is an integer or a 1D tensor).
Examples
>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import RISE
>>> from mindspore.train.serialization import load_checkpoint, load_param_into_net
>>> # prepare your network and load the trained checkpoint file, e.g., resnet50.
>>> network = resnet50(10)
>>> param_dict = load_checkpoint("resnet50.ckpt")
>>> load_param_into_net(network, param_dict)
>>> # initialize RISE explainer with the pretrained model and activation function
>>> activation_fn = ms.nn.Softmax()  # softmax layer is applied to transform logits to probabilities
>>> rise = RISE(network, activation_fn=activation_fn)
>>> # given an instance of RISE, saliency maps can be generated
>>> inputs = ms.Tensor(np.random.rand(2, 3, 224, 224), ms.float32)
>>> # when `targets` is an integer
>>> targets = 5
>>> saliency = rise(inputs, targets)
>>> # `targets` can also be a 2D tensor
>>> targets = ms.Tensor([[5], [1]], ms.int32)
>>> saliency = rise(inputs, targets)
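The attribution is the sum of the random masks weighted by the model score on each masked image, as in the formula above. A minimal NumPy sketch with a hypothetical score_fn standing in for the network plus activation_fn (the mask count and keep-probability are assumptions, and the upsampling of low-resolution masks used in the original paper is omitted):
>>> import numpy as np
>>> def score_fn(img):                      # hypothetical stand-in for f_c(I * M_i)
...     return img.mean()
>>> image = np.random.rand(8, 8)
>>> masks = (np.random.rand(100, 8, 8) > 0.5).astype(np.float32)    # random binary masks M_i
>>> weights = np.array([score_fn(image * m) for m in masks])        # model score on each masked image
>>> saliency = (weights[:, None, None] * masks).sum(axis=0)         # weighted sum of the masks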
mindspore.explainer.benchmark
Predefined XAI metrics.
- class mindspore.explainer.benchmark.ClassSensitivity[source]
Class sensitivity metric used to evaluate attribution-based explanations.
Reasonable attribution-based explainers are expected to generate distinct saliency maps for different labels, especially for the labels of highest and lowest confidence. ClassSensitivity evaluates the explainer by computing the correlation between the saliency maps of the highest-confidence and lowest-confidence labels. An explainer with better class sensitivity receives a lower correlation score. To make the result intuitive, the returned score is the negated and normalized correlation, so a higher score indicates better class sensitivity.
- evaluate(explainer, inputs)[source]
Evaluate class sensitivity on a single data sample.
- Parameters
explainer (Explanation) – The explainer to be evaluated, see mindspore.explainer.explanation.
inputs (Tensor) – A data sample, a 4D tensor of shape \((N, C, H, W)\).
- Returns
numpy.ndarray, 1D array of shape \((N,)\), result of class sensitivity evaluated on explainer.
Examples
>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.benchmark import ClassSensitivity
>>> from mindspore.explainer.explanation import Gradient
>>> from mindspore.train.serialization import load_checkpoint, load_param_into_net
>>> # prepare your network and load the trained checkpoint file, e.g., resnet50.
>>> network = resnet50(10)
>>> param_dict = load_checkpoint("resnet50.ckpt")
>>> load_param_into_net(network, param_dict)
>>> # prepare your explainer to be evaluated, e.g., Gradient.
>>> gradient = Gradient(network)
>>> input_x = ms.Tensor(np.random.rand(1, 3, 224, 224), ms.float32)
>>> class_sensitivity = ClassSensitivity()
>>> res = class_sensitivity.evaluate(gradient, input_x)
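The score boils down to a single correlation: the saliency maps for the most-confident and least-confident labels are compared, and a low correlation means the explainer separates classes well. A minimal NumPy sketch of that comparison (the maps are random stand-ins for explainer outputs, and the rescaling shown is one possible normalization, not necessarily the library's):
>>> import numpy as np
>>> saliency_max = np.random.rand(224, 224)   # saliency for the highest-confidence label
>>> saliency_min = np.random.rand(224, 224)   # saliency for the lowest-confidence label
>>> corr = np.corrcoef(saliency_max.ravel(), saliency_min.ravel())[0, 1]
>>> score = (1 - corr) / 2                    # negate and rescale so that higher means better sensitivity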
- class mindspore.explainer.benchmark.Faithfulness(num_labels, activation_fn, metric='NaiveFaithfulness')[source]
Provides evaluation on faithfulness on XAI explanations.
Three specific metrics to obtain quantified results are supported: “NaiveFaithfulness”, “DeletionAUC”, and “InsertionAUC”.
For metric “NaiveFaithfulness”, a series of perturbed images are created by modifying pixels on the original image. The perturbed images are then fed to the model and a series of output probability drops is obtained. The faithfulness is quantified as the correlation between the probability drops and the saliency map values on the same pixels (the correlation is further normalized into the range [0, 1]).
For metric “DeletionAUC”, a series of perturbed images are created by accumulatively modifying pixels of the original image to a base value (e.g. a constant). The perturbation starts from pixels with high saliency values to pixels with low saliency values. Feeding the perturbed images into the model in order, an output probability drop curve can be obtained. “DeletionAUC” is then obtained as the area under this probability drop curve.
For metric “InsertionAUC”, a series of perturbed images are created by accumulatively inserting pixels of the original image to a reference image (e.g. a black image). The insertion starts from pixels with high saliency values to pixels with low saliency values. Feeding the perturbed images into the model in order, an output probability increase curve can be obtained. “InsertionAUC” is then obtained as the area under this curve.
For all three metrics, a higher value indicates better faithfulness.
- Parameters
num_labels (int) – Number of labels.
activation_fn (Cell) – The activation layer that transforms logits to prediction probabilities. For single-label classification tasks, nn.Softmax is usually applied; for multi-label classification tasks, nn.Sigmoid is usually applied. Users can also pass their own customized activation_fn as long as combining it with the network yields prediction probabilities for the input.
metric (str, optional) – The specific metric to quantify faithfulness. Options: “DeletionAUC”, “InsertionAUC”, “NaiveFaithfulness”. Default: ‘NaiveFaithfulness’.
Examples
>>> from mindspore import nn
>>> from mindspore.explainer.benchmark import Faithfulness
>>> # init a `Faithfulness` object
>>> num_labels = 10
>>> metric = "InsertionAUC"
>>> activation_fn = nn.Softmax()
>>> faithfulness = Faithfulness(num_labels, activation_fn, metric)
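As an illustration of the DeletionAUC idea described above (a sketch, not the library's implementation): pixels are set to a base value in descending saliency order, the target score is recorded after each step, and the area under the resulting curve is the metric. score_fn below is a hypothetical stand-in for the network plus activation_fn:
>>> import numpy as np
>>> def score_fn(img):                              # hypothetical stand-in for P(target | img)
...     return img.mean()
>>> image = np.random.rand(8, 8)
>>> saliency = np.random.rand(8, 8)
>>> order = np.argsort(saliency.ravel())[::-1]      # most salient pixels first
>>> perturbed, scores = image.copy().ravel(), []
>>> for idx in order:
...     perturbed[idx] = 0.0                        # delete one pixel (set to the base value)
...     scores.append(score_fn(perturbed.reshape(8, 8)))
>>> deletion_auc = np.trapz(scores, dx=1.0 / len(scores))  # area under the probability-drop curve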
- evaluate(explainer, inputs, targets, saliency=None)[source]
Evaluate faithfulness on a single data sample.
Note
Currently only single sample (\(N=1\)) at each call is supported.
- Parameters
explainer (Explanation) – The explainer to be evaluated, see mindspore.explainer.explanation.
inputs (Tensor) – A data sample, a 4D tensor of shape \((N, C, H, W)\).
targets (Tensor, int) – The label of interest. It should be a 1D or 0D tensor, or an integer. If targets is a 1D tensor, its length should be the same as inputs.
saliency (Tensor, optional) – The saliency map to be evaluated, a 4D tensor of shape \((N, 1, H, W)\). If it is None, the parsed explainer will generate the saliency map with inputs and targets and continue the evaluation. Default: None.
- Returns
numpy.ndarray, 1D array of shape \((N,)\), result of faithfulness evaluated on explainer.
Examples
>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import Gradient
>>> # init an explainer with a trained network, e.g., resnet50
>>> gradient = Gradient(network)
>>> inputs = ms.Tensor(np.random.rand(1, 3, 224, 224), ms.float32)
>>> targets = 5
>>> # faithfulness is the `Faithfulness` instance created above
>>> # usage 1: input the explainer and the data to be explained,
>>> # calculate the faithfulness with the specified metric
>>> res = faithfulness.evaluate(gradient, inputs, targets)
>>> # usage 2: input the generated saliency map
>>> saliency = gradient(inputs, targets)
>>> res = faithfulness.evaluate(gradient, inputs, targets, saliency)
- class mindspore.explainer.benchmark.Localization(num_labels, metric='PointingGame')[source]
Provides evaluation on the localization capability of XAI methods.
Two specific metrics to obtain quantified results are supported: “PointingGame” and “IoSR” (Intersection over Salient Region).
For metric “PointingGame”, the localization capability is calculated as the ratio of data in which the max position of their saliency maps lies within the bounding boxes. Specifically, for a single datum, given the saliency map and its bounding box, if the max point of its saliency map lies within the bounding box, the evaluation result is 1 otherwise 0.
For metric “IoSR” (Intersection over Salient Region), the localization capability is calculated as the intersection of the bounding box and the salient region over the area of the salient region. The salient region is defined as the region whose value exceeds \(\theta * \max{saliency}\).
- Parameters
num_labels (int) – Number of classes in the dataset.
metric (str, optional) – The specific metric to quantify the localization capability. Options: “PointingGame”, “IoSR”. Default: ‘PointingGame’.
Examples
>>> from mindspore.explainer.benchmark import Localization
>>> num_labels = 100
>>> localization = Localization(num_labels, "PointingGame")
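Both metrics reduce to simple operations on the saliency map and the ground-truth mask, as described above. A minimal NumPy sketch of PointingGame and IoSR (the threshold fraction theta is an assumption; the library's default may differ):
>>> import numpy as np
>>> saliency = np.random.rand(224, 224)
>>> mask = np.zeros((224, 224))
>>> mask[65:100, 65:100] = 1                                    # ground-truth bounding-box mask
>>> # PointingGame: 1 if the saliency maximum falls inside the box, else 0
>>> hit = mask.ravel()[saliency.argmax()]
>>> # IoSR: intersection of the box and the salient region over the salient-region area
>>> theta = 0.5                                                 # assumed threshold fraction
>>> salient_region = saliency >= theta * saliency.max()
>>> iosr = (salient_region & (mask > 0)).sum() / salient_region.sum()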
- evaluate(explainer, inputs, targets, saliency=None, mask=None)[source]
Evaluate localization on a single data sample.
Note
Currently only single sample (\(N=1\)) at each call is supported.
- Parameters
explainer (Explanation) – The explainer to be evaluated, see mindspore.explainer.explanation.
inputs (Tensor) – A data sample, a 4D tensor of shape \((N, C, H, W)\).
targets (Tensor, int) – The label of interest. It should be a 1D or 0D tensor, or an integer. If targets is a 1D tensor, its length should be the same as inputs.
saliency (Tensor, optional) – The saliency map to be evaluated, a 4D tensor of shape \((N, 1, H, W)\). If it is None, the parsed explainer will generate the saliency map with inputs and targets and continue the evaluation. Default: None.
mask (Tensor, numpy.ndarray) – Ground truth bounding box/masks for the inputs w.r.t targets, a 4D tensor or numpy.ndarray of shape \((N, 1, H, W)\).
- Returns
numpy.ndarray, 1D array of shape \((N,)\), result of localization evaluated on explainer.
Examples
>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import Gradient
>>> # init an explainer with a trained network, e.g., resnet50
>>> gradient = Gradient(network)
>>> inputs = ms.Tensor(np.random.rand(1, 3, 224, 224), ms.float32)
>>> masks = np.zeros([1, 1, 224, 224])
>>> masks[:, :, 65: 100, 65: 100] = 1
>>> targets = 5
>>> # localization is the `Localization` instance created above
>>> # usage 1: input the explainer and the data to be explained,
>>> # calculate the localization with the specified metric
>>> res = localization.evaluate(gradient, inputs, targets, mask=masks)
>>> # usage 2: input the generated saliency map
>>> saliency = gradient(inputs, targets)
>>> res = localization.evaluate(gradient, inputs, targets, saliency, mask=masks)
- class mindspore.explainer.benchmark.Robustness(num_labels, activation_fn)[source]
Robustness perturbs the inputs by adding random noise and chooses the maximum sensitivity among the perturbations as the evaluation score.
- Parameters
num_labels (int) – Number of classes in the dataset.
activation_fn (Cell) – The activation layer that transforms logits to prediction probabilities. For single-label classification tasks, nn.Softmax is usually applied; for multi-label classification tasks, nn.Sigmoid is usually applied. Users can also pass their own customized activation_fn as long as combining it with the network yields prediction probabilities for the input.
Examples
>>> from mindspore import nn
>>> from mindspore.explainer.benchmark import Robustness
>>> # Initialize a Robustness benchmarker passing num_labels of the dataset.
>>> num_labels = 10
>>> activation_fn = nn.Softmax()
>>> robustness = Robustness(num_labels, activation_fn)
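The evaluation follows the maximum-sensitivity idea: the input is perturbed with small random noise several times, the saliency map is recomputed each time, and the largest change relative to the original map is taken as the score (a lower score means a more robust explainer). A minimal NumPy sketch with a hypothetical explain_fn standing in for the explainer (the noise scale and normalization are assumptions):
>>> import numpy as np
>>> def explain_fn(img):                            # hypothetical stand-in for the explainer
...     return np.abs(img)
>>> image = np.random.rand(8, 8)
>>> base_saliency = explain_fn(image)
>>> sensitivities = []
>>> for _ in range(10):
...     noisy = image + np.random.normal(scale=0.05, size=image.shape)   # small random perturbation
...     diff = explain_fn(noisy) - base_saliency
...     sensitivities.append(np.linalg.norm(diff) / np.linalg.norm(base_saliency))
>>> max_sensitivity = max(sensitivities)            # maximum sensitivity over the perturbations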
- evaluate(explainer, inputs, targets, saliency=None)[source]
Evaluate robustness on a single data sample.
Note
Currently only single sample (\(N=1\)) at each call is supported.
- Parameters
explainer (Explanation) – The explainer to be evaluated, see mindspore.explainer.explanation.
inputs (Tensor) – A data sample, a 4D tensor of shape \((N, C, H, W)\).
targets (Tensor, int) – The label of interest. It should be a 1D or 0D tensor, or an integer. If targets is a 1D tensor, its length should be the same as inputs.
saliency (Tensor, optional) – The saliency map to be evaluated, a 4D tensor of shape \((N, 1, H, W)\). If it is None, the parsed explainer will generate the saliency map with inputs and targets and continue the evaluation. Default: None.
- Returns
numpy.ndarray, 1D array of shape \((N,)\), result of robustness evaluated on explainer.
- Raises
ValueError – If batch_size is larger than 1.
Examples
>>> import numpy as np
>>> import mindspore as ms
>>> from mindspore.explainer.explanation import Gradient
>>> from mindspore.explainer.benchmark import Robustness
>>> from mindspore.train.serialization import load_checkpoint, load_param_into_net
>>> # prepare your network and load the trained checkpoint file, e.g., resnet50.
>>> network = resnet50(10)
>>> param_dict = load_checkpoint("resnet50.ckpt")
>>> load_param_into_net(network, param_dict)
>>> # prepare your explainer to be evaluated, e.g., Gradient.
>>> gradient = Gradient(network)
>>> input_x = ms.Tensor(np.random.rand(1, 3, 224, 224), ms.float32)
>>> target_label = ms.Tensor([0], ms.int32)
>>> # robustness is a Robustness instance
>>> res = robustness.evaluate(gradient, input_x, target_label)