Nvidia, a leading maker of processors for AI workloads, is not only at the forefront of AI hardware but also develops and uses its own large language models (LLMs). As a key player in the generative AI (GenAI) revolution, Nvidia operates proprietary LLMs and a range of internal AI applications. These include the NeMo platform for building and deploying LLMs, as well as AI-based applications such as object simulation and reconstructing the DNA of extinct species.
At the upcoming Black Hat USA conference, Richard Harang, Nvidia’s principal security architect for AI/ML, will present a session titled “Practical LLM Security: Takeaways From a Year in the Trenches.” He will share lessons learned from red-teaming these systems and the evolving tactics attackers use against LLMs. Harang emphasizes that while LLMs pose unique risks due to the privileged access they can be given, existing security practices can be adapted to address these threats.
“We’ve learned a lot about securing LLMs and building security from first principles, rather than adding it as an afterthought,” Harang stated. “We have valuable practical experience to share.”
Recognizable Issues with a Twist
Businesses are increasingly integrating AI agents capable of taking privileged actions into their applications. Security and AI researchers have identified potential vulnerabilities in these environments, such as AI-generated code expanding attack surfaces and chatbots inadvertently disclosing sensitive information. However, Harang notes that attackers do not necessarily need new techniques to exploit these weaknesses, as they are often extensions of known threats.
“A lot of the issues with LLMs are similar to those seen in other systems,” Harang explained. “The challenge lies in understanding the unique attack surface of LLMs and securing them accordingly.”
Despite these challenges, GenAI applications require the same security attributes as other software: confidentiality, integrity, and availability. Software engineers must apply standard security practices, such as defining security and trust boundaries and analyzing how data flows through the system.
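As a rough illustration of what a trust boundary can look like in code (a sketch only, not Nvidia's tooling, and all names here are hypothetical), the snippet below wraps model output in an "untrusted" type and forces it through a single validation checkpoint before any privileged operation will accept it:

```python
# Hypothetical sketch: making a trust boundary explicit in an LLM-backed app.
# LLM output is treated as untrusted data and must be validated before it
# crosses into privileged code. Names and validation rules are illustrative.
from dataclasses import dataclass
import re


@dataclass(frozen=True)
class UntrustedText:
    """Raw model output; privileged code should never consume this directly."""
    value: str


@dataclass(frozen=True)
class ValidatedQuery:
    """Model output that has passed checks at the trust boundary."""
    value: str


def validate(text: UntrustedText) -> ValidatedQuery:
    """Trust-boundary check: allow only a narrow, expected shape of input."""
    if len(text.value) > 256:
        raise ValueError("model output too long for a lookup query")
    if not re.fullmatch(r"[\w\s\-\.,]+", text.value):
        raise ValueError("model output contains unexpected characters")
    return ValidatedQuery(text.value.strip())


def run_privileged_lookup(query: ValidatedQuery) -> str:
    """Privileged side of the boundary: only accepts validated input."""
    return f"looking up: {query.value}"


if __name__ == "__main__":
    raw = UntrustedText("latest CVE summaries for NeMo")  # stand-in for LLM output
    print(run_privileged_lookup(validate(raw)))
```

Splitting the types this way makes every data flow from model output into privileged code pass through one auditable checkpoint, which is exactly the kind of boundary analysis standard threat modeling calls for.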
The Role of Randomness and Agency
AI systems often inject randomness to enhance creativity, which makes their output less deterministic. That unpredictability can reduce the reliability of exploits compared to conventional information security settings. “The reliability of LLM exploits is generally lower than conventional exploits,” Harang noted.
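A minimal sketch of why that is, assuming the temperature-scaled sampling commonly used when serving LLMs (the toy vocabulary and logits below are invented for illustration): the same prompt can yield different tokens from run to run, so an attack that depends on one exact completion only lands some of the time.

```python
# Minimal sketch of temperature sampling over dummy logits, showing why
# LLM behavior -- and therefore exploit reliability -- is probabilistic.
import math
import random


def sample_token(logits: dict[str, float], temperature: float = 0.8) -> str:
    """Temperature-scaled softmax sampling over a toy vocabulary."""
    scaled = {tok: v / temperature for tok, v in logits.items()}
    m = max(scaled.values())
    weights = {tok: math.exp(v - m) for tok, v in scaled.items()}
    r = random.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases


if __name__ == "__main__":
    logits = {"SAFE_REPLY": 2.0, "LEAK_SECRET": 1.5, "REFUSE": 1.0}
    print([sample_token(logits) for _ in range(10)])  # mix varies per run
```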
One significant difference between AI environments and traditional IT systems is their capacity for autonomous action. Companies increasingly want AI applications that can not only automate tasks but also act on users’ behalf. This capability, known as agentic AI, introduces additional risk: if an attacker can manipulate an LLM into performing unintended actions, the consequences can be severe.
“We’ve observed instances where tool use led to unexpected LLM activity or information disclosure,” Harang said. “As AI capabilities evolve, the industry will continue to learn and adapt.”
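One common mitigation for that kind of unexpected tool use is a policy layer between the model and its tools, so every call the LLM requests is checked against an allowlist and argument rules before anything runs. The sketch below is a hedged illustration only, not a description of Nvidia's systems; the tool names and limits are hypothetical.

```python
# Hypothetical policy layer for an agentic LLM: tool calls proposed by the
# model are checked against an allowlist and simple argument guards before
# any privileged action executes. Tools and limits are illustrative only.
from typing import Any, Callable


def search_docs(query: str) -> str:
    return f"results for {query!r}"


def send_email(to: str, body: str) -> str:
    return f"email sent to {to}"


# Registry of tools the agent may invoke, each with a per-tool argument guard.
ALLOWED_TOOLS: dict[str, tuple[Callable[..., str], Callable[[dict[str, Any]], bool]]] = {
    "search_docs": (search_docs, lambda args: len(args.get("query", "")) < 200),
    # send_email is deliberately NOT registered: the model cannot reach it.
}


def dispatch(tool_name: str, args: dict[str, Any]) -> str:
    """Execute a model-requested tool call only if policy allows it."""
    entry = ALLOWED_TOOLS.get(tool_name)
    if entry is None:
        return f"blocked: tool {tool_name!r} is not allowlisted"
    tool, guard = entry
    if not guard(args):
        return f"blocked: arguments for {tool_name!r} failed policy checks"
    return tool(**args)


if __name__ == "__main__":
    # A benign request passes; a risky one is stopped at the boundary.
    print(dispatch("search_docs", {"query": "NeMo guardrails"}))
    print(dispatch("send_email", {"to": "attacker@example.com", "body": "secrets"}))
```

Keeping the allowlist and guards outside the model means the decision about what the agent may do is enforced in ordinary code, not left to the LLM's own judgment.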
A Manageable Challenge
Harang stresses that while the risks associated with GenAI are significant, they are manageable. He rejects alarmist views, advocating for practical approaches to harness GenAI’s benefits. Harang frequently uses GenAI to locate specific programming information and summarize academic papers.
“We’ve made significant strides in understanding and securing LLM-integrated applications,” he concluded. “Our knowledge has grown substantially over the past year.”
By continuously refining their strategies and learning from experience, Nvidia aims to navigate the complex landscape of AI security effectively.