In security, every new technology brings a new wave of threats and new opportunities for startups. Today’s most-hyped technology is generative AI, which employs large language models to generate content, including text, images, and code.
Chances are you’ve played with Chat-GPT or even written code with GitHub Copilot. These applications are impressive, and it’s no surprise that companies are jumping on the opportunity to leverage these models for copywriting, design, code generation, and more. But what does this mean for the cybersecurity industry?
Let’s break it down from three angles:
- Threats and how attackers might leverage generative AI
- How security vendors can leverage generative AI
- Protecting AI applications and new security concerns for organizations building AI products
In each section, I’ll explore the opportunities for new cybersecurity products.
Generative AI for offense
It’s almost a certainty that generative AI will be leveraged by bad actors, but does this create new opportunities for security vendors? Many have pointed out how generative AI could aid attackers in their social engineering efforts by writing more personalized and compelling phishing emails. Writing better phishing emails might help with social engineering, but it doesn’t change the mechanics of a phishing attack. Existing anti-phishing protection should be just as successful defending against an AI-assisted phishing attack as a human-generated one. But phishing isn’t a new attack vector, and while we haven’t managed to eradicate phishing, the use of generative AI to write phishing messages doesn’t change the mechanics of the attack.
The same logic applies to malware and exploits generated by AI. Researchers have shown that generative AI can create keyloggers and ransomware. While this might lower the barrier to entry for less-skilled attackers and increase the volume of attacks, it doesn’t change the fundamentals of how organizations defend against garden-variety threats.
In the near term, it’s unlikely that we’ll see generative AI produce novel threats that require new categories of security tools. For both attackers and defenders, the most interesting threats are likely not those produced by generative AI, but those that are used to exploit AI models and infrastructure. I will turn to that topic later when I address “Protecting AI.”
The same capabilities that make generative AI useful for attackers can create value for both offensive security and security awareness. Founders building in the offensive security space could, for example, leverage generative AI for automated penetration testing as part of an offensive security toolkit. Generative AI could also be used to automate security awareness tasks, like generating content for phishing awareness campaigns.
TOP TAKEAWAYS
• Threat actors will leverage generative AI to augment social engineering attacks and malware, but the use of generative AI won’t fundamentally alter the threat landscape.
• Generative AI could be used to automate tasks within offensive security.
Generative AI for defense
Before diving into how defenders will use generative AI, let’s consider the role of ML and AI in security products to date. Machine learning became the cybersecurity marketing buzzword du jour a decade ago when companies like Cylance (now part of Blackberry) tried to upend the traditional signature-based antivirus by training machine learning models to identify malicious files. Today, nearly every security product involved in detection leverages ML for identifying malicious artifacts and anomalous activity, but ML models haven’t taken the place of rule-based and signature-based detection. AI is a useful tool in the detection toolkit, but importantly, it didn’t create new categories of security products.
I predict that Generative AI will follow a similar pattern. There will not be a “Generative AI Security” company, but it is likely that vendors across major categories will look to incorporate generative AI into their platforms for tasks involving user text or code input. Generative AI might be used in an XDR (extended detection and response) tool to generate threat-hunting queries, it could analyze data and automatically summarize the events of an attack, it could write remediation code for vulnerabilities, it could write YARA rules for detection. Orca, a cloud security vendor, recently announced that it will incorporate ChatGPT to analyze vulnerabilities and generate written remediation plans. Many incumbent vendors will look to incorporate generative AI into their security tools where it can improve user productivity.
Generative AI does present an opportunity for new security startups to gain an advantage over incumbents through superior AI-native user workflows. Generative AI will initially augment human-driven workflows rather than fully automating them. It’s a new and unproven technology within security, and human expertise will be required to review, modify, and approve AI-generated text and code. Startups that build their user experience around human-AI interactivity may have an advantage over vendors looking to fit AI into existing user workflows.
The categories of security tools where startups are likely to have the greatest impact are the categories that are the most workflow-driven, such as SIEM (security information and event management) and SOAR (security orchestration, automation, and response). Traditional SIEM has already been under threat from emerging XDR products and the rise of security data lakes. Done well, generative AI for writing detection, writing threat hunting queries, and suggesting remediations could greatly improve SOC efficiency.
TOP TAKEAWAYS
• Generative AI won’t be a category creator in the security industry, but it can be used to augment existing security workflows that require users to input text or code.
• Existing security vendors will incorporate generative AI for capabilities like writing detection engineering, writing threat hunting queries, creating remediation plans, and suggesting code to remediate vulnerabilities.
• Startups may have an advantage over incumbent vendors because they can build user experiences with AI-human interactivity in mind. This is particularly relevant when applied to workflow-heavy categories like SIEM and SOAR.
Protecting AI
The most interesting new opportunities for security startups are in protecting AI applications, AI models, and the infrastructure used to build and train them. These opportunities are applicable to all machine learning applications and infrastructure — not just generative AI — but the explosion of startups building within generative AI will create strong tailwinds. Three significant opportunities stand out when it comes to AI security: data security, securing the ML supply chain, and threat detection.
Data Security for ML and AI
While there are numerous data security products, they are not built to support data science workflows and data science toolsets. Training machine learning models requires giving data science teams access to data. Once in the realm of data science, security lacks visibility and control, increasing risk of a data breach or data leakage if devices or data science notebook servers are compromised. Locking down data isn’t an option — the increasing business value of AI and ML applications means that organizations must adapt their data security programs for data science workflows and infrastructure. This requires visibility into where data is going, and stronger data access controls within the data science toolkit. Such visibility and control will also serve data privacy and compliance needs as privacy regulations evolve to address AI.
ML Supply-Chain Security
The ML supply chain spans datasets, open-source language libraries, ML software, and open-source or proprietary models. Just as in the software supply chain, each dependency in the supply chain introduces security risk. The recent PyTorch compromise is an example of an ML supply chain attack where malicious packages masquerading as legitimate PyTorch packages were listed in the PyPI (Python package index), a popular repository for the Python language commonly used for machine learning. When installed, the malicious package was designed to read and upload files to the attacker’s remote server.
Companies that leverage ML will need to implement controls within the ML supply chain to prevent the installation of risky or malicious packages, manage and monitor their ML dependencies, and identify and remediate known vulnerabilities. Analogous tools have emerged for software supply-chain security, but most don’t cover ML libraries and the ML development lifecycle.
Threat Detection
Another area where startups can have significant impact is in visibility and detection of attacks on machine learning models. Adversaries can use techniques such as model inversion, model poisoning, and prompt injection to manipulate and attack machine learning models. These attacks can have a significant impact on the accuracy and reliability of the models, and can lead to security breaches. New security startups can play a crucial role in detecting and preventing these attacks by developing products that provide visibility into the behavior of machine learning models, and that can detect anomalies that may indicate an attack.
TOP TAKEAWAYS
• Data security and governance becomes even more important, and vendors must adapt to cover data science workflows and infrastructure.
• Dependencies and vulnerabilities in the ML supply chain introduce risks for organizations, just as they do in the software supply chain. There is an opportunity for new products that cover the ML development lifecycle.
• New startups will emerge to detect and prevent attacks on machine learning models.
I’ve highlighted some of the opportunities for security startups and vendors to leverage generative AI for offensive and defensive security, and to play a role in protecting ML models and infrastructure. We are in the very early innings of AI, and I’m sure we will see many new challenges and opportunities emerge.
If you are a founder building in this space or have an interest in this field, I would love to hear from you — please email me at allison@unusual.vc. I’m excited to see how this technology will shape the security industry over the coming years.
Read more about generative AI
Lorem ipsum dolor sit amet, consectetur adipiscing elit. Suspendisse varius enim in eros elementum tristique. Duis cursus, mi quis viverra ornare, eros dolor interdum nulla, ut commodo diam libero vitae erat. Aenean faucibus nibh et justo cursus id rutrum lorem imperdiet. Nunc ut sem vitae risus tristique posuere.