Managing Vulnerabilities in Machine Learning and Artificial Intelligence Systems
Software Engineering Institute (SEI) Podcast Series - A podcast by Members of Technical Staff at the Software Engineering Institute
The robustness and security of artificial intelligence, and specifically machine learning (ML), is of vital importance. Yet ML systems are vulnerable to adversarial attacks, in which an attacker makes the ML system learn the wrong thing (data poisoning), do the wrong thing (evasion attacks), or reveal the wrong thing (model inversion). Although several efforts provide detailed taxonomies of the kinds of attacks that can be launched against a machine learning system, none are organized around operational concerns. In this podcast, Jonathan Spring, Nathan VanHoudnos, and Allen Householder, all researchers at the Carnegie Mellon University Software Engineering Institute, discuss the management of vulnerabilities in ML systems as well as the Adversarial ML Threat Matrix, which aims to close this gap between academic taxonomies and operational concerns.
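To make the evasion-attack category concrete, the sketch below uses the fast gradient sign method (FGSM), a standard example of perturbing an input so a trained classifier misclassifies it. This is an illustrative assumption, not a technique from the episode; the `model`, inputs, and `epsilon` budget are hypothetical placeholders for any PyTorch classifier.

```python
import torch
import torch.nn.functional as F

def fgsm_evasion(model, x, y, epsilon=0.03):
    """Illustrative evasion attack (FGSM): nudge input x within an
    epsilon-sized L-infinity ball so the hypothetical classifier
    `model` is more likely to get label y wrong."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)   # loss the attacker wants to increase
    loss.backward()
    # Step in the sign of the gradient, the direction that most increases the loss.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach() # keep the perturbed input a valid image
```

Data poisoning and model inversion target the same pipeline at different points: poisoning corrupts the training data before a model like the one above is fit, while model inversion queries the finished model to recover information about that data.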