
Dangers to Consider

Posted: Thu Feb 06, 2025 4:42 am
by rakhirhif8963
Large Language Models and Cybersecurity
08.11.2023
With generative AI (GenAI) potentially becoming a game changer for legitimate businesses and cybercriminal groups alike, it's important to understand the double-edged sword that large language models (LLMs) pose for cybersecurity. Experts interviewed by Information Age discuss the risks LLMs can pose to business security, how attackers are using the technology, and how security teams can effectively mitigate AI-enabled attacks.

The LLMs that underpin GenAI have recently seen significant investment and have attracted the attention of virtually every business and technical function within an organization. At the same time, cybersecurity teams must consider the growing use of LLMs by threat actors and be wary of insider threats when optimizing their strategies.

A CybSafe study into how generative AI is shaping employee behaviour found that employees are feeding AI tools sensitive company information they know they should not share, even with friends in a social setting outside the workplace. More than half (52%) of UK office workers have fed work-related information into generative AI, with 38% admitting to sharing data they would not casually share with friends at the pub. Sharing sensitive information with LLMs could help threat actors gain access to company systems, bypassing cybersecurity measures.

“There are two main risks associated with LLMs,” says Andrew Whaley, senior director of engineering at Promon. “First, GenAI could turn specialized and expensive skills into something anyone could do through automated ‘bots’. Second, and more worryingly, there’s a risk that these models could come to understand the static code obfuscation techniques that are common today. That understanding could lead to a tool that de-obfuscates protected applications, exposing their structure and making them vulnerable to manipulation.”
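To make Whaley's second concern concrete, here is a minimal Python sketch of the kind of static string obfuscation he is referring to (the key, strings, and function names are illustrative assumptions, not any product's actual scheme). Because the key is fixed and baked into the binary, the transformation produces the same ciphertext on every run, and that stable, deterministic pattern is exactly what an LLM-assisted tool could learn to reverse.

import base64

def xor_obfuscate(plaintext: str, key: int = 0x5A) -> str:
    """Obfuscate a string with a fixed single-byte XOR key (a common
    static technique), then Base64-encode the result."""
    return base64.b64encode(bytes(b ^ key for b in plaintext.encode())).decode()

def xor_deobfuscate(blob: str, key: int = 0x5A) -> str:
    """Reverse the transformation; the key never changes between runs,
    so the mapping from plaintext to ciphertext is fully deterministic."""
    return bytes(b ^ key for b in base64.b64decode(blob)).decode()

# The constant key and deterministic output are what make static
# obfuscation recoverable by pattern analysis.
secret = xor_obfuscate("api.internal.example.com")
print(secret)                   # same ciphertext on every run
print(xor_deobfuscate(secret))  # api.internal.example.com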

To combat this threat, he says, it is critical to develop innovative “dynamic” obfuscation frameworks that mutate code at runtime. “The dynamic nature will make it impossible to understand the code in a static context. Implementing such dynamic obfuscation techniques is essential to counteracting the potential cybersecurity risks associated with the misuse of GenAI,” Whaley explains.
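The article does not describe Promon's framework in any detail, but the underlying idea of runtime mutation can be sketched in miniature. In this toy Python example (the class name and rotation scheme are assumptions for illustration only), a secret's in-memory encoding is re-keyed with fresh random bytes on every access, so no single static snapshot of the process captures a stable representation to analyze.

import os

class MutatingSecret:
    """Hold a secret in a form that is re-encoded with a fresh random
    key on every access; the stored bytes differ between snapshots
    even though the revealed value stays the same."""

    def __init__(self, plaintext: str) -> None:
        data = plaintext.encode()
        self._key = os.urandom(len(data))
        self._blob = bytes(b ^ k for b, k in zip(data, self._key))

    def reveal(self) -> str:
        # Decode with the current key...
        plain = bytes(b ^ k for b, k in zip(self._blob, self._key))
        # ...then immediately mutate the stored form: new key, new blob.
        self._key = os.urandom(len(plain))
        self._blob = bytes(b ^ k for b, k in zip(plain, self._key))
        return plain.decode()

secret = MutatingSecret("db-password-123")
print(secret.reveal())  # db-password-123
print(secret.reveal())  # same value, but the in-memory encoding has changed

A real dynamic obfuscation product would mutate executable code paths as well as data, but the design principle is the same: remove the fixed, learnable mapping that static analysis (human- or LLM-driven) depends on.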