Introduction

In what is becoming a bit of a tradition at F5 Labs, we once again dust off our crystal ball and wave our collective hands wildly above it as we dive into our cybersecurity predictions for 2026. The pace of technological change continues to accelerate as the potential of AI is realized, and its benefits are being harnessed for good and ill alike.

Not to be overly dramatic, but the real-world implications of these advancements for the cybersecurity landscape are staggering, and they are reflected throughout our predictions for next year. Cybersecurity professionals will have a whole new vector to consider and will need to be prepared to defend at a scale never before seen.

With that out of the way, we can lighten the mood and take a look at how we did in our 2025 predictions. Keeping up with our past performance, we’ve done quite well again. Okay, we might have missed mind-controlled tech wearables, health-monitoring toilets, and neuromorphic brain hacking, but otherwise our record remains strong.

Looking Back at 2025

Our 2024 predictions scored an 80% success rate, and yes, we were marking our own homework, but that was nevertheless an impressive feat in a dynamic year of emerging tech and a diverse set of ambitious forecasts.

So, red pen out, how have we done this year?

AI-Powered Botnets - True

AI-powered botnets are now a proven reality. On the back of years of threat tracking by F5 Labs, we predicted the rise of LLM-coordinated attacks leveraging massive networks of compromised IoT devices. In 2025, this prediction materialized: autonomous AI swarms launched adaptive DDoS campaigns, API-based bots used reinforcement learning to evade defenses, and bot defense technologies matured to counter these evolving threats.

Putting the AI Into API - True

AI model APIs have become a prime attack surface. In 2025, we’ve seen prompt injection, data exfiltration, and model manipulation through exposed inference endpoints. Misconfigured or overly permissive AI APIs allowed attackers to bypass controls, extract sensitive training data, and even poison models.1 Securing these interfaces now demands strict authentication, input sanitization, and continuous anomaly monitoring.

Attackers Use AI to Discover New Vulnerabilities - Kinda

2025 has seen AI start to play its part in vulnerability discovery, but perhaps not at the scale many anticipated. LLM-based code checkers are helping developers identify and remediate insecure code, and LLM-in-the-loop fuzzers have proven effective for generating diverse inputs and improving coverage.2 However, evidence that AI is replacing human vulnerability researchers remains scarce. This feels more like a question of when rather than if, so watch this space.

AI Beats Quantum Cracking Crypto - True

AI-driven deep learning side-channel attacks are reshaping the cryptographic risk landscape. In 2025, these attacks demonstrated performance equal to, and sometimes better than, traditional methods.3 Open-source tooling has emerged that is accelerating adoption of these new techniques and signaling that AI-driven analysis is moving from theory to practice. This doesn’t diminish the looming quantum risk, but it does highlight the need to shift near-term focus towards prepping for crypto agility and resilience on multiple fronts.

Russia Disconnects - False (more like, Russia Disrupts)

Roskomnadzor’s authority has expanded to allow isolation or rerouting of internet traffic inside Russia starting March 2026. Senior officials insist there are no plans to sever global connectivity, yet this comes amid widespread reports of mobile and home internet disruptions4 and aggressive throttling or blocking of websites5. These developments signal a steady move toward a segmented, state-controlled digital ecosystem even if full disconnection remains officially denied.

State Sponsored Hacking Competitions Withhold Vulnerabilities - True

Withholding of vulnerability disclosures by nation states is not easy to prove, but what we do know is that participants in Chinese hacking and CTF competitions are legally required to hand over discovered vulnerabilities to the state rather than disclose them publicly. With multiple government agencies running these events, seven-figure prizes on offer, and little transparency or publication of findings, I’m happy to sign this one off as True.

APTs Will Make Attacks Look Like They Come from Hacktivists - True

In 2025, APT groups increasingly emulated hacktivist methods, leaking data via activist facades, deploying DDoS campaigns with fake grassroots personas, and sharing tools with hacktivist networks. As we observe the transition from the low-complexity (albeit headline-grabbing) DDoS attacks characteristic of hacktivist groups towards complex, large-scale cyber operations, you may start to wonder who’s actually behind this previously uncharacteristic activity. 2025 saw a renewed enthusiasm for the “cyber proxy”, whereby APTs coordinate with, supply, and benefit from hacktivist groups, merging political aims with criminal practices.6

Cloud Comes Home - True

2025 saw outages from every major cloud provider, with impacts on banking, travel, communications apps, social media, email, enterprise services, gaming, and AI assistants; I could go on.7 At a minimum, multi-cloud and resilience-driven infrastructure is now the name of the game for 76% of enterprises.8 Ballooning investment in private datacentre capacity reflects strategic diversification away from single-cloud dependency, reinforcing hybrid, sovereignty-driven architectures designed to withstand regional disruptions and enhance digital supply-chain resilience.

Roundup

Let’s call it 6.5/8, solid performance by any measure.

Cybersecurity Predictions for 2026

1. MCPs Will Change the Threat Model

Key themes: MCP (Model Context Protocol) becomes the universal connector for AI agents, attack surface expands dramatically, critical vulnerabilities already observed, threats expected to industrialize in 2026, difficulty of mitigating complex attacks like the “lethal trifecta”.

MCP is fast becoming the USB-C of AI agents, a standard socket that works almost everywhere, for better and for worse. It will be embedded across everyday tools, IDEs, browsers, collaboration suites and enterprise backends. As agents find and call capabilities through standard manifests and shared context, the threat model shifts from isolated apps to connected chains of decisions. The incidents that make the news will not hinge on a single flaw. They will come from normal actions that interact in ways nobody intended.

MCP formalizes how an agent states what it can do, how it invokes tools, and how it passes structured memory to the next step. That is very useful for developers, but it also makes life easier for attackers, who can influence content, coax an agent into calling an over-scoped tool, and rely on the agent to treat the tool’s output as correct: the so-called lethal trifecta. As MCP servers become the connective tissue for AI tooling inside large enterprises, they will accumulate OAuth tokens, filesystem reach, and workflow privileges. This will move the risk from human-driven APIs to machine-to-machine paths that legacy monitoring rarely inspects.
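To make that concrete, here is a minimal sketch of a policy gate that refuses over-scoped tool calls and fences off tool output as untrusted content. The allowed-scope table, tool names, and scope strings are our own illustrative inventions, not part of the MCP specification:

```python
# Minimal sketch of a policy gate for agent tool calls. The allowed-scope
# table, tool names, and scope strings are illustrative assumptions,
# not MCP standards.

ALLOWED_SCOPES = {
    "read_tickets": {"tickets:read"},
    "send_email":   {"email:send"},
}

def gate_tool_call(tool: str, requested: set[str], session_grants: set[str]) -> None:
    """Refuse calls where a tool asks for more than policy allows,
    or more than the current session was actually granted."""
    allowed = ALLOWED_SCOPES.get(tool)
    if allowed is None:
        raise PermissionError(f"unknown tool: {tool}")
    if excess := requested - allowed:
        raise PermissionError(f"{tool} is over-scoped: {sorted(excess)}")
    if missing := requested - session_grants:
        raise PermissionError(f"session lacks grants: {sorted(missing)}")

def fence_tool_output(text: str) -> str:
    """Tag tool output as untrusted data, never as instructions, so
    downstream prompts can treat it accordingly."""
    return f"<untrusted_tool_output>\n{text}\n</untrusted_tool_output>"

# Passes: the scopes match both policy and the session's grants.
gate_tool_call("read_tickets", {"tickets:read"}, {"tickets:read"})
```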

Expect a visible market around agent provenance and orchestration monitoring: signed capability manifests, scans that look a lot like container image checks, and EDR-style telemetry for agent-to-tool-to-data pivots. We have already seen early faults in 2025: remote code execution in poorly isolated tool runners, authentication bypass in capability discovery, tool poisoning that skews outputs, and prompt-driven data exfiltration. Expect these to be industrialized as MCP goes mainstream. The headline is simple: AI agents will become first-class subjects in an attacker’s playbook, and organizations will need to treat agent behavior as something that can be tracked, constrained, and investigated.
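What might a signed capability manifest look like in practice? A minimal sketch follows, using Ed25519 signatures from the widely used Python cryptography package; the manifest fields are assumptions for illustration rather than any published format:

```python
# Sketch: verify a capability manifest's signature before an agent is
# allowed to load the tool, analogous to container image signing.
# Requires the 'cryptography' package; the manifest fields are
# illustrative, not a standardized MCP format.
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)

def canonical(manifest: dict) -> bytes:
    # Stable serialization so signer and verifier see identical bytes.
    return json.dumps(manifest, sort_keys=True, separators=(",", ":")).encode()

def sign_manifest(manifest: dict, key: Ed25519PrivateKey) -> bytes:
    return key.sign(canonical(manifest))

def verify_manifest(manifest: dict, sig: bytes, pub: Ed25519PublicKey) -> bool:
    try:
        pub.verify(sig, canonical(manifest))
        return True
    except InvalidSignature:
        return False

key = Ed25519PrivateKey.generate()
manifest = {"tool": "read_tickets", "scopes": ["tickets:read"], "version": "1.2.0"}
sig = sign_manifest(manifest, key)
assert verify_manifest(manifest, sig, key.public_key())

manifest["scopes"].append("email:send")                       # tampering after signing...
assert not verify_manifest(manifest, sig, key.public_key())   # ...is detected
```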

2. A Slice of the PII

Key themes: Mandatory ID checks drive reliance on third-party verification, third-party networks become prime targets, normalization of Personally Identifiable Information (PII) uploads, opportunity for large-scale exploitation, erosion of hard-won privacy and cybersecurity practices.

Mandatory ID checks are shifting from niche to normal. In 2026, third-party verification hubs will increasingly sit between brands and their users, handling passports, driving licenses, face videos, and liveness detection at scale. These services outsource onboarding and age verification, yet they also centralize risk. Breach one of these networks and you do not just get raw data; you get validated identities with proofs that pass downstream checks with ease. We predict that these networks will become prime breach targets, with validated identities leaked at a previously unseen scale, and that attackers will exploit the normalization of ID uploads through “re‑verify” ClickFix tactics.

Under the hood, these systems chain document OCR, biometric matching and liveness detection, then issue attestations through SDKs and browser flows. As uploads become routine, attackers will exploit that familiarity by guiding people through fake re‑verification journeys that end in unintended access grants. They will lean on the same human tendencies as classic social engineering: authority cues from trusted brands, urgency around account lockouts, habits built by repeating the same steps, and trust transference to third-party portals, all compounded by consent fatigue. When the flow includes convincing liveness checks, CAPTCHAs and staged security steps, scrutiny drops. Technical weak points will look familiar too: replay of old proofs, misconfigured storage, over-permissive client keys, and liveness checks pressured by ever more convincing deepfakes.
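Replay of old proofs, the first of those weak points, has a well-understood countermeasure: bind each attestation to a single-use nonce and an expiry. A minimal sketch, with field names invented for illustration since every verification provider defines its own format:

```python
# Sketch: reject replayed verification proofs. A proof is accepted only
# if it carries a nonce we issued, has not been redeemed before, and has
# not expired. Field names are illustrative assumptions.
import secrets
import time

ISSUED: set[str] = set()     # nonces handed out, awaiting use
CONSUMED: set[str] = set()   # nonces already redeemed

def issue_nonce() -> str:
    nonce = secrets.token_urlsafe(16)
    ISSUED.add(nonce)
    return nonce

def accept_proof(proof: dict) -> bool:
    nonce = proof.get("nonce", "")
    if nonce not in ISSUED or nonce in CONSUMED:
        return False                          # unknown or replayed
    if proof.get("expires_at", 0) < time.time():
        return False                          # stale proof
    CONSUMED.add(nonce)
    return True

nonce = issue_nonce()
proof = {"nonce": nonce, "expires_at": time.time() + 300, "subject": "user-123"}
assert accept_proof(proof) is True
assert accept_proof(proof) is False           # the replay is rejected
```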

What we expect next year is a split in strategy. Privacy-preserving credentials and zero retention models will gain ground, turning “we never keep your images or documents, only short-lived proofs” into a competitive message. At the same time, some sectors will continue to retain artefacts for convenience, and that will fuel a black market for pre-verified personas that sail through downstream controls. With the expansion of mandatory ID verification requirements in law across Europe in 2026, and similar moves elsewhere, the hard-won privacy and security practices that quietly protected everyone from fraud, identity theft, PII exposure, and malicious actors in the past, risk being eroded by lawmakers who do not appear to grasp the technological implications of these new measures.

3. Motive-Based Security

Key themes: Identity delegation and trust, blurring human vs. bot boundaries, new challenges for existing bot defense models, future security controls will need to assess bot motives, not just identity.

We predict that 2026 will shift bot defense from proving humanity to judging purpose. AI browsers and agentic assistants acting on behalf of users will be part of normal digital life. The challenge for security teams is to decide what a session is trying to achieve and whether that goal aligns with policy, not to block assistants outright.

As delegated identity becomes common, sessions will carry real user tokens and run tasks users have authorized. That changes the signals defenders rely on. We expect “on behalf of” standards to appear, with explicit actor claims and scoped grants that define what an agent is allowed to do and for how long. Organizations will begin to pilot purpose‑bound sessions: a customer approves an assistant to check order status or update an address, and the platform ties tool access to that declared mission.
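Standards work already points in this direction: OAuth 2.0 Token Exchange (RFC 8693) defines an “act” (actor) claim naming the party acting on a user’s behalf. A minimal sketch of what enforcement could look like, where the “purpose” claim and the scope and action names are our own assumptions:

```python
# Sketch: authorize a delegated agent action from token claims. The
# "act" claim follows the shape in OAuth 2.0 Token Exchange (RFC 8693);
# the "purpose" claim, scopes, and action names are illustrative.
import time

def authorize_agent_action(claims: dict, action: str, required_scope: str) -> bool:
    if claims.get("exp", 0) < time.time():
        return False                                    # grant expired
    if not claims.get("act", {}).get("sub"):
        return False                                    # no declared actor
    if required_scope not in claims.get("scope", "").split():
        return False                                    # scope not granted
    return action in claims.get("purpose", [])          # mission must match

claims = {
    "sub": "customer-42",                   # the human who delegated
    "act": {"sub": "assistant.example"},    # the agent acting for them
    "scope": "orders:read profile:write",
    "purpose": ["check_order_status", "update_address"],
    "exp": time.time() + 600,
}
assert authorize_agent_action(claims, "check_order_status", "orders:read")
assert not authorize_agent_action(claims, "bulk_export", "orders:read")
```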

Motive will surface through behavior rather than labels. Sequences, cadence and semantics reveal whether a flow is exploring a catalogue, attempting bulk extraction, or queuing high‑value purchases at speed. The same assistant framework can serve both helpful and harmful goals; the difference is outcome. Expect SOCs to track motive drift, fraud teams to fuse economic loss indicators with intent models, and policies that constrain what delegated agents can accomplish even when identity and device look perfect.
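As a toy illustration of motive inferred from behavior, consider a classifier over session features; the features and thresholds below are invented for the sketch, where a production system would learn them from labeled traffic:

```python
# Sketch: classify session intent from behavioral features rather than
# identity signals. Features and thresholds are illustrative only.
from dataclasses import dataclass

@dataclass
class SessionStats:
    requests_per_min: float   # cadence
    distinct_items: int       # breadth of catalogue touched
    cart_adds: int            # purchase-like actions
    repeat_ratio: float       # 0..1, how repetitive the action sequence is

def classify_motive(s: SessionStats) -> str:
    # High cadence + huge breadth + zero buying intent looks like scraping.
    if s.requests_per_min > 120 and s.distinct_items > 500 and s.cart_adds == 0:
        return "bulk_extraction"
    # Fast, repetitive, purchase-heavy flows look like scalping automation.
    if s.requests_per_min > 60 and s.cart_adds > 5 and s.repeat_ratio > 0.8:
        return "high_speed_purchasing"
    return "benign_browsing"

print(classify_motive(SessionStats(200, 1500, 0, 0.9)))  # bulk_extraction
```

The same assistant framework could produce any of these traces; it is the outcome, not the identity, that separates them.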

Defenses will evolve to match. Bot defense and application gateways will add visibility into agent activity, infer intent from context and apply friction or additional proof when motives move into higher‑risk territory. The aim is enablement with guardrails: let legitimate assistants work, keep automation inside agreed bounds, and judge sessions by what they are trying to do.

4. Q-Day: Doomsday or Y2K?

Key themes: Quantum hype vs reality, investor pressure and credibility gap, 2026 as a turning point (either quantum sceptics are proven right or wrong), end of the debate.

2026 will be the credibility checkpoint for quantum computing. The current split between headline claims of near‑term breakthroughs and academic skepticism cannot continue at this pitch. Either we see tangible progress toward fault tolerance that scales, or the market corrects and some startups with over‑ambitious promises begin to fold. In either case, the argument shifts from hype to evidence.

Security will move regardless of who wins the debate. “Harvest now, decrypt later” remains a real concern for data with long confidentiality lifetimes, so organizations will press ahead with post‑quantum pilots. Expect mainstream trials of PQ key exchange in major TLS stacks, hybrid modes in production between large cloud properties, and early PQ signatures appearing in niche certificate chains and code‑signing ecosystems. Crypto agility becomes a board‑level metric rather than a slogan, with vendor roadmaps and contract language reflecting the need to switch algorithms without breaking everything else.
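Crypto agility, in code, mostly means never hard-wiring an algorithm. A minimal sketch of the registry pattern follows: Ed25519 is real (via the Python cryptography package), while the post-quantum entry is a commented placeholder to be filled in as library support lands:

```python
# Sketch of crypto agility: callers ask for an algorithm by name, so
# swapping in a post-quantum scheme becomes a configuration change
# rather than a code change. The ML-DSA entry is a labeled placeholder.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class Ed25519Signer:
    def __init__(self) -> None:
        self._key = Ed25519PrivateKey.generate()

    def sign(self, data: bytes) -> bytes:
        return self._key.sign(data)

    def verify(self, sig: bytes, data: bytes) -> None:
        self._key.public_key().verify(sig, data)   # raises on failure

REGISTRY = {
    "ed25519": Ed25519Signer,
    # "ml-dsa-65": MlDsaSigner,   # hypothetical PQ backend, added when available
}

def get_signer(algorithm: str):
    try:
        return REGISTRY[algorithm]()
    except KeyError:
        raise ValueError(f"no backend registered for {algorithm}")

signer = get_signer("ed25519")     # tomorrow: get_signer("ml-dsa-65")
sig = signer.sign(b"payload")
signer.verify(sig, b"payload")
```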

The near‑term risk is less about overnight cryptographic collapse and more about implementation drag. AI‑assisted side‑channel analysis improved markedly in 2025, and we anticipate more attention on timing leaks, cache behavior and brittle fallbacks in hybrid stacks. Integration will be the story: performance surprises, compatibility hiccups and the odd “migration‑induced outage” as teams retool protocols, keys and certificates at scale. Gateways, browsers, CDNs and application servers will add algorithm agility, better telemetry and controlled rollout features. PQC standards will tighten, test harnesses will mature, and operational playbooks will become part of everyday practice. If Q‑Day ultimately feels closer to Y2K than doomsday, it will be because the industry treated 2026 as the year to execute calmly, prove what works and keep moving as the science and the tooling evolve.

5. The Overreliance Era

Key themes: Blind trust in AI outputs, risk from cascading AI decisions, erosion of traditional QA, rise in AI supply chain attacks (e.g., model poisoning), Overreliance bubbling to the top of the OWASP LLM Top 10.

We predict that Overreliance on AI outputs will become one of the most significant security concerns in 2026. As GenAI is woven into everyday workflows, models and agents will do more than suggest next steps; they will take them. That speeds up operations until a confident but wrong answer sets off a chain reaction: an alert is downplayed, a correlated signal is missed, a “safe” configuration change weakens a control, and automation tidies up the evidence because the system believes its own judgement. The issue is not that AI is inherently unsafe, it is that unqualified trust in its outputs turns routine errors into incidents.

The technical roots are straightforward. Many deployments lack calibrated confidence, abstain behaviors, and clearly bounded autonomy, so agents act even when uncertainty is high. As assistants plug into CI/CD, IT service management, and security tooling, their decisions propagate at machine speed. The AI supply chain adds pressure: for example, fine‑tuned models informed by tainted data can bias recommendations in subtle ways, promoting unsafe libraries or skipping validation paths while appearing perfectly reasonable. Without provenance, reproducibility, and runtime checks, these behaviors look like ordinary drift rather than a fault that needs investigation.
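Bounded autonomy and abstain behavior are simple to express, even if rarely implemented. A minimal sketch, where the action names and the confidence threshold are assumptions for illustration:

```python
# Sketch: bounded autonomy for an AI agent. An action runs automatically
# only if it is on a pre-approved list AND the model's confidence clears
# a floor; everything else abstains and escalates to a human queue.
# Action names and the threshold are illustrative assumptions.
AUTO_APPROVED = {"restart_service", "rotate_log"}
CONFIDENCE_FLOOR = 0.90

def dispatch(action: str, confidence: float, execute, escalate) -> str:
    if action in AUTO_APPROVED and confidence >= CONFIDENCE_FLOOR:
        execute(action)
        return "executed"
    escalate(action, confidence)   # the tripwire: policy outranks the model
    return "escalated"

audit_log = []
status = dispatch(
    "disable_waf_rule",            # not auto-approved, however confident
    confidence=0.99,
    execute=lambda a: audit_log.append(("ran", a)),
    escalate=lambda a, c: audit_log.append(("review", a, c)),
)
assert status == "escalated"
```

The tripwire sits outside the model’s judgement: however confident the agent is, an out-of-bounds action still routes to human review.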

What we expect next year is a visible shift in practice. Incident reports will start to tag “AI‑originated change” and SOCs will introduce autonomy reviews that sit outside model judgement, with tripwires that enforce policy even when a model insists otherwise. Governance will harden around models and data: tracking sources, training runs and signed artefacts so teams can verify what is running and why it behaves as it does. Standards will reflect the trend too. OWASP’s LLM guidance already highlights Overreliance, and we anticipate it featuring prominently as organizations share lessons from real deployments.

6. ClickFix and Chill

Key themes: Expansion in “low-tech” ClickFix social engineering attacks defeating high-tech defenses, human-test confusion as a vector, copy-and-paste culture fuels risk, bypassing traditional security controls.

Low-tech solutions to high-tech defenses will continue in 2026. Attackers will take advantage of user confusion around new types of human-verification tests and the growing culture of copy-and-paste IT. The tactic is to convince users to paste malicious code directly into command lines on their own devices under the guise of gaining access or fixing a problem quickly. These attacks will often appear as timed verification and urgent troubleshooting steps shared in collaboration tools or chat threads, framed as official guidance to resolve account lockouts, compliance checks, or system errors.

The psychology behind ClickFix is rooted in urgency and familiarity. As security workflows become more complex with layered defenses like MFA and zero trust prompts, users increasingly look for shortcuts. Attackers exploit this fatigue by mimicking helpdesk language and embedding “quick fixes” that feel routine. Each step looks harmless, but together they bypass controls and grant attackers the access they need.

Expect technical enablers such as fake CAPTCHAs and liveness checks that appear legitimate, pre-filled scripts disguised as repair commands, and crafted sequences that exploit self-service portals. Defenders will need to move beyond payload detection and focus on behavioral signals like repeated paste actions, scripted fixes in chat, and sudden privilege escalations. Security awareness must evolve from “don’t click links” to “don’t copy commands blindly”. In 2026, the weakest link may not be the technology but the human habit of fixing fast without thinking.
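Those behavioral signals can be captured with straightforward telemetry rules. Here is a minimal sketch of a detector that pairs a paste event with content patterns common to ClickFix lures; the pattern list is a small illustrative sample, not a complete detection set:

```python
# Sketch: flag ClickFix-style pasted commands from endpoint command-line
# telemetry. The patterns are an illustrative sample only.
import re

CLICKFIX_PATTERNS = [
    re.compile(r"powershell[^\n]*-enc(odedcommand)?", re.I),  # encoded PowerShell
    re.compile(r"curl[^\n|]*\|\s*(ba)?sh", re.I),             # pipe-to-shell installs
    re.compile(r"mshta\s+https?://", re.I),                   # remote HTA execution
    re.compile(r"certutil[^\n]*-urlcache", re.I),             # LOLBin downloader
]

def looks_like_clickfix(cmdline: str, was_pasted: bool) -> bool:
    """Combine the behavioral signal (pasted into a run box or terminal)
    with content patterns typical of 'fix it yourself' lures."""
    return was_pasted and any(p.search(cmdline) for p in CLICKFIX_PATTERNS)

print(looks_like_clickfix(
    "powershell -nop -w hidden -EncodedCommand SQBFAFgA", was_pasted=True))  # True
```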

Conclusion

2025 confirmed what we have been saying for years: innovation does not arrive quietly. AI-driven automation, identity verification mandates, and quantum computing hype have all reshaped the threat landscape. Our predictions for 2026 point to a profound shift: complexity is accelerating, and the attack surface is no longer defined by single points of failure but by chains of interconnected decisions.

Several themes stand out. First, the rise of machine-to-machine trust models means attackers will increasingly exploit automation rather than bypass it. Second, human behavior remains a critical weak point, whether through copy-and-paste shortcuts or fatigue with layered security checks. Third, regulatory and market pressures are creating centralized identity hubs and cryptographic transitions that will test resilience at scale.

Yet the story is not all negative. The same forces driving these challenges also create opportunities for better security. Provenance tracking, intent-based controls, privacy-preserving identity, and quantum agility are becoming practical responses. Organizations that adopt these measures early will not only reduce exposure but gain operational advantage. The future is not something to fear; it is something to shape. 2026 will reward those who treat security as a design principle, not an afterthought. If we learn from these predictions and act decisively, the next wave of technology can be a foundation for resilience and not a catalyst for chaos.

Authors & Contributors

Adam Metcalfe-Pearce (Author)

Threat Researcher, F5

David Warburton (Contributor)

Director, F5 Labs, F5

Ken Arora (Contributor)

Distinguished Engineer, Office of the CTO, F5

Darien Kindlund (Contributor)

Director, AI/ML Engineering, F5

Malcolm Heath (Contributor)

Principal Threat Researcher, F5

Keiron Shepherd (Contributor)

Senior Solution Architect, F5