MindSpore Armour Documents
MindSpore Armour is an AI security and privacy protection tool, which provides AI model security assessment, model obfuscation, and privacy data protection capabilities.
AI is a catalyst for change, but it also faces challenges in security and privacy protection. MindSpore Armour provides adversarial robustness, model security testing, differential privacy training, privacy risk assessment, and data drift detection capabilities.
Code repository address: <https://gitee.com/mindspore/mindarmour>
Typical Application Scenarios
- Includes white-box and black-box adversarial attacks, adversarial training, and adversarial example detection, helping developers generate adversarial examples and evaluate the robustness of AI models (see the FGSM sketch after this list).
- Uses algorithms such as membership inference attacks and model inversion attacks to assess a model's privacy leakage risk (see the membership inference sketch below).
- Enhances model privacy and protects user data through differential privacy training and suppress privacy mechanisms (see the differential privacy sketch below).
- Detects changes in the data distribution in time through multiple data drift detection algorithms and predicts symptoms of model failure in advance, so that the AI model can be adjusted promptly (see the drift detection sketch below).
- Provides a coverage-guided fuzzing tool that features flexible, customizable test policies and metrics. It uses neuron coverage to guide input mutation so that inputs activate more neurons and spread neuron values across a wider range, helping to discover diverse model outputs and incorrect behaviors (see the fuzzing sketch below).
- Uses a symmetric encryption algorithm to encrypt parameter files or inference models, protecting model files; the ciphertext model is loaded directly for inference or incremental training (see the encryption sketch below).
- Transforms and obfuscates the structure of an AI model using a control-flow obfuscation algorithm, so that the obfuscated model does not reveal its real structure and weights even if it is stolen. When loading the obfuscated model, passing in the correct password or customized function enables normal inference with no loss of accuracy (see the obfuscation sketch below).
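A minimal sketch of the first scenario: generating adversarial examples with the FGSM attack from `mindarmour.adv_robustness.attacks`. The `ToyNet` classifier and the random inputs are hypothetical stand-ins for a trained model and real data.

```python
import numpy as np
from mindspore import nn
from mindarmour.adv_robustness.attacks import FastGradientSignMethod

# Toy stand-in for a trained classifier (hypothetical; substitute your own Cell).
class ToyNet(nn.Cell):
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.fc = nn.Dense(28 * 28, 10)

    def construct(self, x):
        return self.fc(self.flatten(x))

net = ToyNet()
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=False)

# FGSM perturbs each input along the sign of the loss gradient, bounded by eps.
attack = FastGradientSignMethod(net, eps=0.07, loss_fn=loss_fn)

inputs = np.random.rand(4, 1, 28, 28).astype(np.float32)              # batch of images
labels = np.eye(10)[np.random.randint(0, 10, 4)].astype(np.float32)   # one-hot labels
adv_inputs = attack.generate(inputs, labels)  # numpy array, same shape as inputs
```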
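A sketch of privacy risk assessment with `MembershipInference`. The network, optimizer, and random datasets below are placeholders; in practice the model should already be trained on the member dataset before the attack is evaluated.

```python
import numpy as np
import mindspore.dataset as ds
from mindspore import nn, Model
from mindarmour.privacy.evaluation import MembershipInference

# Minimal stand-in model and data (hypothetical shapes; use your trained model).
net = nn.Dense(20, 2)
model = Model(network=net,
              loss_fn=nn.SoftmaxCrossEntropyWithLogits(sparse=True),
              optimizer=nn.Momentum(net.trainable_params(), 0.01, 0.9))

def make_set(n):
    x = np.random.rand(n, 20).astype(np.float32)
    y = np.random.randint(0, 2, n).astype(np.int32)
    return ds.NumpySlicesDataset((x, y), ["data", "label"]).batch(16)

# Attackers learn to tell training members from non-members using model outputs.
inference = MembershipInference(model, n_jobs=-1)
attack_config = [{"method": "lr", "params": {"C": np.logspace(-4, 2, 10)}},
                 {"method": "knn", "params": {"n_neighbors": [3, 5, 7]}}]
inference.train(make_set(256), make_set(256), attack_config)
metrics = inference.eval(make_set(64), make_set(64),
                         metrics=["precision", "accuracy", "recall"])
print(metrics)
```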
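A sketch of differential privacy training with `DPModel`, which clips per-microbatch gradients and adds Gaussian noise to them. The network and dataset are stand-ins; note that the batch size must be divisible by `micro_batches`.

```python
import numpy as np
import mindspore.dataset as ds
from mindspore import nn
from mindarmour.privacy.diff_privacy import (DPModel, NoiseMechanismsFactory,
                                             ClipMechanismsFactory)

net = nn.Dense(20, 2)  # stand-in network (hypothetical)
loss_fn = nn.SoftmaxCrossEntropyWithLogits(sparse=True, reduction="mean")
opt = nn.Momentum(net.trainable_params(), learning_rate=0.01, momentum=0.9)

# Gaussian noise added to clipped gradients provides the privacy guarantee.
noise_mech = NoiseMechanismsFactory().create("Gaussian", norm_bound=1.0,
                                             initial_noise_multiplier=1.5)
clip_mech = ClipMechanismsFactory().create("Gaussian", decay_policy="Linear",
                                           learning_rate=0.001,
                                           target_unclipped_quantile=0.9)

model = DPModel(micro_batches=2, norm_bound=1.0,
                noise_mech=noise_mech, clip_mech=clip_mech,
                network=net, loss_fn=loss_fn, optimizer=opt)

x = np.random.rand(128, 20).astype(np.float32)
y = np.random.randint(0, 2, 128).astype(np.int32)
train_set = ds.NumpySlicesDataset((x, y), ["data", "label"]).batch(32)
model.train(1, train_set, dataset_sink_mode=False)
```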
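A sketch of time-series data drift detection with `ConceptDriftCheckTimeSeries` from `mindarmour.reliability`, run on a synthetic one-dimensional series whose distribution shifts mid-stream.

```python
import numpy as np
from mindarmour.reliability import ConceptDriftCheckTimeSeries

# Synthetic 1-D series whose distribution changes in the middle segment.
data = 5 * np.random.rand(1000)
data[200:800] = 20 * np.random.rand(600)

detector = ConceptDriftCheckTimeSeries(window_size=100, rolling_window=10,
                                       step=10, threshold_index=1.5,
                                       need_label=False)
drift_score, threshold, drift_locations = detector.concept_check(data)
print(drift_score, threshold, drift_locations)
```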
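A sketch of coverage-guided fuzzing with `Fuzzer` and `KMultisectionNeuronCoverage`. The model, seeds, and mutation parameters are illustrative and follow the public fuzzing tutorial; exact parameter names can vary across MindSpore Armour versions.

```python
import numpy as np
from mindspore import nn, Model
from mindarmour.fuzz_testing import Fuzzer, KMultisectionNeuronCoverage

class ToyNet(nn.Cell):  # stand-in classifier (hypothetical)
    def __init__(self):
        super().__init__()
        self.flatten = nn.Flatten()
        self.fc = nn.Dense(28 * 28, 10)

    def construct(self, x):
        return self.fc(self.flatten(x))

model = Model(ToyNet())
train_images = np.random.rand(100, 1, 28, 28).astype(np.float32)

# Mutation strategies; coverage gain decides which mutants become new seeds.
mutate_config = [
    {'method': 'GaussianBlur', 'params': {'ksize': [1, 2, 3, 5], 'auto_param': [True, False]}},
    {'method': 'Contrast', 'params': {'alpha': [0.5, 1, 1.5], 'beta': [-10, 0, 10]}},
    {'method': 'FGSM', 'params': {'eps': [0.3, 0.2], 'alpha': [0.1]}},
]
# Initial seeds are [image, one-hot label] pairs.
initial_seeds = [[train_images[i], np.eye(10)[0].astype(np.float32)] for i in range(10)]

coverage = KMultisectionNeuronCoverage(model, train_images, segmented_num=100)
fuzzer = Fuzzer(model)
samples, gt_labels, preds, strategies, metrics = fuzzer.fuzzing(
    mutate_config, initial_seeds, coverage, evaluate=True,
    max_iters=10, mutate_num_per_seed=2)
```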
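Model file encryption is provided by MindSpore itself rather than a `mindarmour` module: `save_checkpoint` and `load_checkpoint` accept `enc_key`/`enc_mode` and `dec_key`/`dec_mode` arguments. A sketch follows; the key below is a placeholder, so manage real keys securely.

```python
import mindspore as ms
from mindspore import nn

net = nn.Dense(10, 2)       # stand-in for a trained network
key = b'0123456789ABCDEF'   # placeholder 16-byte AES key

# Save an AES-GCM encrypted checkpoint; the file on disk is ciphertext.
ms.save_checkpoint(net, "net.ckpt", enc_key=key, enc_mode="AES-GCM")

# Load it back by supplying the same key; a wrong key fails to decrypt.
params = ms.load_checkpoint("net.ckpt", dec_key=key, dec_mode="AES-GCM")
ms.load_param_into_net(net, params)
```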
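A sketch of dynamic model obfuscation. The `mindspore.obfuscate_model` API and the exact `obf_config` keys differ between MindSpore versions, so treat the names below (file paths, `obf_ratio`, `obf_random_seed`) as assumptions to verify against the installed release.

```python
import numpy as np
import mindspore as ms
from mindspore import nn, Tensor

# Hypothetical paths and seed; check obf_config keys for your MindSpore version.
obf_config = {
    'original_model_path': 'net.mindir',   # exported plain MindIR model
    'save_model_path': 'obf_net.mindir',   # obfuscated output path
    'model_inputs': [Tensor(np.random.rand(1, 1, 28, 28).astype(np.float32))],
    'obf_ratio': 0.8,                      # fraction of the graph to obfuscate
    'obf_random_seed': 3423,               # acts as the "password"
}
ms.obfuscate_model(obf_config)

# Load the obfuscated model; inference only works with the matching seed.
graph = ms.load('obf_net.mindir')
obf_net = nn.GraphCell(graph, obf_random_seed=3423)
```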
API References

- mindarmour
- mindarmour.adv_robustness.attacks
- mindarmour.adv_robustness.defenses
- mindarmour.adv_robustness.detectors
- mindarmour.adv_robustness.evaluations
- mindarmour.fuzz_testing
- mindarmour.natural_robustness.transform.image
- mindarmour.privacy.diff_privacy
- mindarmour.privacy.evaluation
- mindarmour.privacy.sup_privacy
- mindarmour.reliability
- mindarmour.utils