Cloud service providers offer Machine-Learning-as-a-Service (MLaaS) platforms, enabling companies to leverage scalability and reliability while performing ML operations. Despite the widespread adoption of such systems, the security posture of the platform itself often goes unexamined, as prior research into vulnerabilities in Google's AI Hub and AWS's SageMaker Jupyter Notebook services has shown.
We investigated Azure ML (AML), a managed MLaaS offering from Microsoft, and found five 0-days across three broad classes of security issues:
Insecure logging of sensitive information – We found five instances of credentials leaking in cleartext on Compute Instances, caused by insecure usage of open-source components and by insecure design of the environment-provisioning process (a minimal log-scanning sketch follows this list).
MLSee: A vulnerability allowing sensitive information disclosure – We found exposed APIs in cloud middleware that leaked sensitive information from Compute Instances. After initial access, a network-adjacent attacker could leverage the vulnerability to move laterally or snoop on commands executed in the Jupyter terminal of a Compute Instance (a generic probing sketch follows this list).
Achieving Persistence – While reversing cloud middleware to decipher its functionality, we found two ways to achieve persistence in AML environments. First, an attacker could fetch the Storage Account access key and the Azure AD JWT of the system-assigned managed identity of the Compute Instance, and use both even from non-Azure environments (see the token-fetch sketch after this list).
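To make the first class of issue concrete, the sketch below shows one way to hunt for cleartext secrets in logs. It is a minimal illustration only: the directory, file glob, and regex patterns are hypothetical stand-ins, not the actual components or paths involved in the leaks we found.

```python
# Illustrative sketch: scanning a Compute Instance for cleartext secrets in logs.
# The root path and patterns below are hypothetical examples.
import re
from pathlib import Path

# Regexes for common Azure credential shapes: JWTs and storage connection strings.
SECRET_PATTERNS = {
    "jwt": re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),
    "storage_account_key": re.compile(r"AccountKey=[A-Za-z0-9+/=]{20,}"),
}

def scan_logs(root: str = "/var/log") -> None:
    """Walk a log directory and flag lines that look like credentials."""
    for path in Path(root).rglob("*.log"):
        try:
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                for name, pattern in SECRET_PATTERNS.items():
                    if pattern.search(line):
                        print(f"[{name}] {path}:{lineno}")
        except PermissionError:
            continue  # skip files the current user cannot read

if __name__ == "__main__":
    scan_logs()
```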
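For the second class, the vulnerable middleware endpoints themselves are covered in the talk; the following is only a generic reconnaissance sketch of the approach, probing for unauthenticated HTTP services listening locally on a Compute Instance. The port list is a hypothetical sample, not the ports involved in MLSee.

```python
# Generic sketch: probe for HTTP services that answer without credentials.
# CANDIDATE_PORTS is a hypothetical sample for illustration.
import urllib.error
import urllib.request

CANDIDATE_PORTS = [8787, 8888, 44224]

def probe(host: str = "127.0.0.1") -> None:
    for port in CANDIDATE_PORTS:
        url = f"http://{host}:{port}/"
        try:
            with urllib.request.urlopen(url, timeout=2) as resp:
                # A 200 with no credentials supplied suggests an unauthenticated endpoint.
                print(f"{url} -> HTTP {resp.status}, {len(resp.read())} bytes")
        except urllib.error.HTTPError as e:
            print(f"{url} -> HTTP {e.code}")  # service present, access restricted
        except (urllib.error.URLError, OSError):
            pass  # nothing listening on this port

if __name__ == "__main__":
    probe()
```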
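For the persistence finding, the sketch below shows the documented way an Azure AD token for a system-assigned managed identity is obtained on an Azure VM, via the Instance Metadata Service (IMDS). Our research showed that a token obtained this way on a Compute Instance remains usable from non-Azure environments until it expires.

```python
# Minimal sketch: fetch a managed identity token from the Azure IMDS endpoint.
# This is the standard on-box mechanism; it only works when run on an Azure VM.
import json
import urllib.request

IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01&resource=https://management.azure.com/"
)

def get_managed_identity_token() -> str:
    # IMDS requires the "Metadata: true" header to guard against SSRF.
    req = urllib.request.Request(IMDS_TOKEN_URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return json.load(resp)["access_token"]

if __name__ == "__main__":
    token = get_managed_identity_token()
    print(token[:40] + "...")  # an Azure AD JWT, replayable until expiry
```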
Through this talk, attendees will learn about the issues we found in AML, which may extend to other cloud-based MLaaS platforms. As we take a deep dive into each issue, we will demonstrate the techniques we adopted while researching the service, giving attendees a glimpse of how the security of managed services like AML can be assessed when the lines of the shared responsibility model are blurred.