November 29, 2023

How do you navigate AI security in an evolving landscape?

AI is everywhere. Big banks, retailers, and healthcare institutions use AI for predictive analysis, customer service, and other crucial business processes. Anyone with an internet connection can prompt AI to write an essay or generate art. AI applications touch everything from rote requests to the most sensitive data.


Yet while AI applications are booming, AI security is still in a transitional stage. “We’re co-evolving the systems we build with their security,” said Dr. Hyrum Anderson, CTO of Robust Intelligence. It’s an exciting moment, but because AI infrastructure is being built in real time, there is real ambiguity about how to secure it.

Hyrum predicts that, over the next year, it will be relatively easy for hackers to ferret out security holes in AI. While there are inherent risks in building and using AI, Hyrum said, the first step is acceptance. From there, you can control the risk through security layers to deter attackers and mitigate potential vulnerabilities. 

We talked with Hyrum about the AI security landscape, emerging threats, and how to minimize risk. Hyrum has directed research projects at MIT Lincoln Laboratory, Sandia National Laboratories, and FireEye, among others. He was Chief Scientist at Endgame and Principal Architect of Trustworthy Machine Learning at Microsoft. He’s also the co-author of Not With A Bug, But With A Sticker: Attacks on Machine Learning Systems and What To Do About Them.

What does AI risk look like?

“It’s more than just bad guys hacking, it’s also about vulnerabilities that could be hacked,” Hyrum said. He described two categories of risk: intentional and unintentional failure. Intentional failure takes place during a targeted attack, compromising the confidentiality, integrity, and availability of a system.

Unintentional failure happens when AI simply messes up — but it can be just as damaging as intentional failure. “Artificial intelligence is, in fact, artificial, and it makes mistakes all the time,” Hyrum said. He pointed to Zillow's iBuyer AI as an example. The algorithm valued and purchased real estate on the company’s behalf, basing its decisions on market data. Though the AI was built well, the project was shut down in 2021 after it resulted in $304 million in losses, a nearly 20% stock plunge, and a 25% cut to Zillow’s workforce. Why? The data the algorithm was trained on became outdated in real time.

Of course, bad actors can also take advantage of unintentional failures. “Security is about business risk, and business risk can happen, especially with AI, when it falls over all on its own,” Hyrum said. “But when there’s a motivated adversary involved, that becomes even more pronounced.”

The AI attack surface is broad — and evolving with the emergence of generative AI

AI presents a new set of questions for security experts when it comes to attack surfaces, starting with the AI supply chain. “How do you know that the base model you’re getting from the outset is safe, that it operates like it claims to operate, that it’s useful for your application?” Hyrum asked. He noted that red team exercises against even the most respected models have highlighted persistent security concerns. Security challenges then extend to interconnected components, especially when it comes to the emerging marketplace for custom GPTs. “The security challenges aren't restricted to the model alone,” he said. “The glaring issues here are the interconnected components that you're using to build your application.” And, as a result, the entire application can be susceptible to attack.
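One concrete way to start answering that supply-chain question is to pin and verify the exact model artifact you vetted before ever loading it. The following is a minimal sketch in Python using only the standard library; the file path and digest registry are hypothetical stand-ins for whatever record you keep when you first vet a model.

```python
import hashlib
from pathlib import Path

# Hypothetical registry: record a digest when you first vet a model artifact.
TRUSTED_MODEL_DIGESTS = {
    "models/base-model-v1.bin": "<sha256 recorded at vetting time>",
}

def sha256_of(path: Path) -> str:
    """Stream the file so large weight files don't have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path_str: str) -> None:
    """Refuse to load any artifact whose digest doesn't match the vetted record."""
    expected = TRUSTED_MODEL_DIGESTS.get(path_str)
    if expected is None:
        raise RuntimeError(f"No vetted digest recorded for {path_str}; treat it as untrusted.")
    actual = sha256_of(Path(path_str))
    if actual != expected:
        raise RuntimeError(f"Digest mismatch for {path_str}: expected {expected}, got {actual}")

# verify_model("models/base-model-v1.bin")  # call this before loading the weights
```

The same idea extends to tokenizers, adapters, plugins, and the other interconnected components the application pulls in — each one is part of the supply chain you are accountable for.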

“Hackers are economically motivated; they want to do the easiest hack for the highest reward,” Hyrum said. The barrier to entry is low for hacking generative AI because attackers can work in natural language — no coding required. As LLMs are integrated into apps and other systems, the security risk (and the reward for hackers) will multiply, making them an attractive target.

When building with LLMs, treat them like untrusted users

Hyrum suggests establishing trust boundaries with LLMs, essentially treating them as untrusted human users. “If you're building your own [LLM applications] and you wouldn't want somebody with hands-on keyboard access to get the data, then you shouldn't expose the large language model either,” he said. “The large language model is now a conduit between a user and the data, and we should just be more thoughtful about creating a trust boundary.” This trust boundary should also apply to LLM outputs, which can easily be manipulated by adversarial user input. LLM outputs should be sanitized before they interact with other systems to prevent both intentional and unintentional abuse.
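One way to make that trust boundary concrete is to treat every model response as untrusted input: parse it, validate it against an allowlist of actions the application is willing to perform, and escape anything destined for display. The sketch below is illustrative rather than a drop-in implementation; the ALLOWED_ACTIONS set, the JSON action format, and the handle_llm_output function are assumptions about a hypothetical application.

```python
import html
import json

# Hypothetical allowlist: the only downstream actions the application will ever
# execute on the model's behalf, no matter what the model outputs.
ALLOWED_ACTIONS = {"search_docs", "summarize_ticket"}

def handle_llm_output(raw_output: str) -> dict:
    """Treat the model's response as untrusted input crossing a trust boundary."""
    try:
        proposal = json.loads(raw_output)  # expect a structured action proposal
    except json.JSONDecodeError:
        # Free-form text is only ever displayed, never executed; escape it first.
        return {"action": "display", "text": html.escape(raw_output)}

    action = proposal.get("action") if isinstance(proposal, dict) else None
    if action not in ALLOWED_ACTIONS:
        # Drop anything outside the allowlist, whether it came from a
        # prompt-injected instruction or an ordinary model mistake.
        return {"action": "reject", "reason": f"action {action!r} is not permitted"}

    return {"action": action, "arguments": proposal.get("arguments", {})}
```

The point of the design is that the model can only ever propose actions; the application, on its side of the boundary, decides whether to carry them out.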

Ethical risks are now security risks

With generative AI, a new risk has emerged: the abuse and misuse of AI systems for unethical purposes. AI has forced security leaders to consider ethical and operational risks alongside more traditional security risks. But the duty goes beyond the security team. “The CEO [is also] responsible for this,” Hyrum said. “It’s forcing communication across the organization about the more holistic risk perspectives.” Hyrum predicted a shift in organizational structure within the next two to three years to better integrate and address these interconnected risks.

Following security best practices will help you stay ahead of regulations

As regulation around AI security, privacy, and ethics emerges, organizations should start preparing for when legislation hits — not just with security tools, but with new processes and a shift in mindset. “The [2023 executive order on AI safety and security standards] is not binding legislation, but a harbinger of things that might come,” Hyrum said. With this in mind, he noted that companies will need to be able to answer the following accountability questions:

  • Who on your team or in your organization is responsible for your AI supply chain? 
  • Who is measuring the risk of the AI components you're using in your software? 
  • What information are you making available to engineering and company leadership about these risks? What policies, people, processes, and technology will help you manage that? 

Hyrum said that the most successful teams building AI map out their process, following the whole machine-learning pipeline in their product development and then scoring themselves on each step. He suggests redoing that self-scoring every six months to catch issues and ensure processes are improving. “The hard work that you do in mapping this will result in you, most importantly, doing the right thing,” Hyrum said. “And, secondly, when legislation comes, you'll be in the right spot for it.”
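A lightweight way to operationalize that self-scoring is a scorecard over your pipeline stages, reviewed on a fixed cadence. The sketch below shows one possible shape for such a record; the stage names, owners, and 1–5 scale are illustrative assumptions, not a prescribed framework.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class StageScore:
    stage: str
    owner: str      # who is accountable for this stage
    score: int      # e.g., 1 (ad hoc) to 5 (documented, owned, and tested)
    notes: str = ""

@dataclass
class PipelineReview:
    reviewed_on: date
    scores: list[StageScore] = field(default_factory=list)

    def weakest_stages(self, threshold: int = 3) -> list[StageScore]:
        """Surface the stages to prioritize before the next six-month review."""
        return [s for s in self.scores if s.score < threshold]

# Illustrative usage: score each stage of your own pipeline, then revisit twice a year.
review = PipelineReview(
    reviewed_on=date(2023, 11, 29),
    scores=[
        StageScore("data collection", owner="data team", score=4),
        StageScore("evaluation and red teaming", owner="security team", score=2,
                   notes="no adversarial test suite yet"),
    ],
)
print([s.stage for s in review.weakest_stages()])  # ['evaluation and red teaming']
```

Keeping the record simple matters less than keeping the cadence: the value comes from comparing scores review over review and assigning an owner to every stage.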

Check out our full conversation with Dr. Hyrum Anderson.

Read more on AI

What will the security stack for generative AI applications look like?

Generative AI is blowing up. What does this mean for cybersecurity?

Whose responsibility is responsible AI?
