AI is worth the hype. Deep learning in particular has proven its practical value across many areas, especially as a new interface between users and programs. Looking back at technology history, each decade has been marked by the rise of a certain technology, followed by attacks against it and then defenses. The 1990s were the decade of network security, the 2000s were dedicated to the endpoint, and applications dominated the 2010s. In the 2020s, that progression moves on from infrastructure toward user interactions, and its focus is AI.
The threat landscape is changing, and attackers will adopt new techniques to tamper with AI-driven systems such as self-driving cars, facial recognition systems, drones, voice assistants, robots, financial algorithms, and other solutions built on machine learning. Most of these systems have already been analyzed from a cybersecurity perspective and have proven vulnerable. Our research aims to inform people about AI security from 2004, when the topic was first introduced, to the present day. Its goal is to make companies aware of insecure and malicious AI by sharing insights drawn from almost 2,000 research papers.