AI Model Security: Reverse Engineering Machine Learning Models

This talk focuses on the confidentiality of AI models. The spotlight of AI security research still falls on challenges such as adversarial machine learning; comparatively, the security research community pays less attention to protecting critical AI assets, namely machine learning algorithms and data.

Although cloud-based AI services are available, applications still embed deep learning functionality on the client side. Mobile device vendors including Samsung, Huawei, and Qualcomm all provide dedicated AI hardware or enhanced AI engines in their latest smartphones. Deep learning processing on the client is, in fact, on the rise.

Using real AI mobile apps as examples, this talk shows how machine learning models and parameters can be reverse-engineered from their implementations. Even when obfuscation is used, machine learning code often exhibits distinctive behaviors that reverse engineers can easily recognize. In addition, we will present our effort to reverse engineer NPU models that have been translated to vendor-specific NPU hardware.
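As a minimal sketch of how exposed an on-device model can be, the snippet below loads a TensorFlow Lite model pulled from an unpacked app package and enumerates its interface and tensors; the path app/assets/model.tflite is a hypothetical placeholder, and the sketch assumes the app ships a plain, unencrypted TFLite file rather than one of the vendor-specific NPU formats discussed in the talk.

```python
# Sketch: inspect a TensorFlow Lite model extracted from an Android app
# (e.g., after `unzip app.apk -d app/` to pull out the assets directory).
# The model path below is a hypothetical placeholder.
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="app/assets/model.tflite")
interpreter.allocate_tensors()

# Input/output shapes reveal the model's interface (image size, class count).
for detail in interpreter.get_input_details():
    print("input :", detail["name"], detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["name"], detail["shape"], detail["dtype"])

# Tensor details enumerate every layer; tensor names and quantization
# parameters often identify the original architecture outright.
for t in interpreter.get_tensor_details():
    print(t["index"], t["name"], t["shape"])
```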

This talk will also survey possible reverse engineering attacks against cloud-based AI services. In the past we have demonstrated real threats that compromise a cloud-based AI service through software vulnerabilities. This talk will cover additional research that infers AI model and training data information through a cloud interface.
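As a hedged illustration of the query-based inference idea, not the specific research surveyed in the talk, the sketch below trains a local surrogate model on labels returned by a remote prediction endpoint; PREDICT_URL, query_remote_model, and the 20-feature input space are all hypothetical.

```python
# Sketch of query-based model extraction against a cloud prediction API.
# PREDICT_URL and the 20-feature input space are hypothetical.
import numpy as np
import requests
from sklearn.tree import DecisionTreeClassifier

PREDICT_URL = "https://example.com/api/predict"  # hypothetical endpoint

def query_remote_model(x):
    """Send one sample to the cloud service and return its predicted label."""
    resp = requests.post(PREDICT_URL, json={"features": x.tolist()})
    return resp.json()["label"]

# Probe the service with synthetic inputs; the (input, label) pairs
# leak decision-boundary information about the proprietary model.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(1000, 20))
y = np.array([query_remote_model(x) for x in X])

# Train a local surrogate that approximates the remote model's behavior.
surrogate = DecisionTreeClassifier().fit(X, y)
print("surrogate depth:", surrogate.get_depth())
```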

Location:
Date: November 27, 2018
Time: 4:30 pm - 5:30 pm
Speaker: Kang Li