The future of eCommerce is agentic AI

Industry Trends | December 02, 2025

Retailers are developing agentic AI tools tailored to specific tasks to drive real business impact across the customer experience and operations for their online platforms.

As personal shopping agents emerge, organizations are evolving their infrastructure to support machine-led shopping, which has serious security implications.

Retailers need to adapt risk strategies and security defenses to properly govern autonomous agents.

Defenders are drowning in data, alerts, and logs. They need actionable context to reduce complexity and respond efficiently by prioritizing the most critical threats before an indicator of compromise appears in the environment.

In this blog, we dive into how autonomous agents can go rogue, and why security and risk teams need to be careful about which actions agents are granted.

The reality of retail

For retailers, the potential for agentic AI is extraordinary. A leading eCommerce provider is already deploying complex personal shoppers that understand customer buying signals and dynamic store environments in real time.

While the business value and competitive advantage of streamlined customer engagement are compelling drivers, the security implications are significant. Unlike conventional AI assistants, agentic AI is designed to perceive, reason, and act independently. That autonomy comes with questions around traceability, accountability, and governance.

The eCommerce sector already has one of the highest proportions of advanced attacks across both web and mobile APIs, with reseller bots representing one in five add-to-cart transactions. Adversaries will continue to employ bots to scale cyberattacks against backend large language models (LLMs). Hacking organizations that have already disrupted major online services, such as click-to-collect at major retailers, will undoubtedly retrofit their tactics, techniques, and procedures (TTPs) to target agentic workflows.

Retailers need a new approach to ensure safe deployment of agentic AI. Failing to adapt risk strategies and security defenses to account for the unique architecture and behavior of autonomous agents is an enterprise liability.

Leave traditional tools in the shed

Ah, the good ol’ days! Protect the perimeter. Deploy security controls such as network and web application firewalls. Conduct static and dynamic testing. That approach was viable in client-server architectures, where north-south traffic flows reigned and traditional web app security could stave off risk. But rapid application modernization, the rise of API-based systems, and highly interconnected AI ecosystems have created an expansive risk surface and an exponential amount of machine-to-machine (east-west) traffic.

Then many businesses introduced natural language processing (NLP) interfaces. While security teams were still scrambling to catch up, businesses began rolling out agentic AI.

Existing paradigms such as defense-in-depth are still applicable, as novel AI attacks on the front end often lead to classic exploits like remote code execution (RCE) on the backend.

However, established security best practices such as ethical disclosure, existing frameworks such as common vulnerabilities and exposures (CVE), and regulatory oversight are struggling to keep pace in the agentic AI arms race.

Who’s down with MCP?

Model Context Protocol (MCP) to the rescue! The standard, proposed by Anthropic, and the Agent2Agent (A2A) protocol, proposed by Google, help address model-to-resource and agent-to-agent communication, respectively, but neither solves the complete set of interoperability requirements for agentic AI.

MCP and A2A are great steps forward for standardization and interoperability, but what about security? Do your existing security controls have MCP fluency?
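"MCP fluency" here means a control can parse MCP's JSON-RPC 2.0 framing and reason about tool invocations, not just forward opaque traffic. As a minimal sketch of the idea (the tool names and allowlist policy below are hypothetical, not any vendor's implementation), a gateway-style filter might inspect each message and block calls to unapproved tools:

```python
import json

# Hypothetical allowlist for this deployment; tool names are illustrative.
APPROVED_TOOLS = {"search_catalog", "get_order_status"}

def inspect_mcp_message(raw: str) -> bool:
    """Return True if an MCP JSON-RPC message should be allowed through.

    MCP messages use JSON-RPC 2.0; tool invocations carry the method
    "tools/call" with the tool name in params.
    """
    try:
        msg = json.loads(raw)
    except json.JSONDecodeError:
        return False  # malformed traffic is dropped, not guessed at

    if msg.get("method") != "tools/call":
        return True  # non-invocation traffic (e.g., tools/list) passes

    tool = msg.get("params", {}).get("name")
    return tool in APPROVED_TOOLS

request = json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "delete_customer", "arguments": {}},
})
print(inspect_mcp_message(request))  # → False (tool not on the allowlist)
```

A production control would also need session awareness, argument inspection, and policy per agent identity; the point is simply that protocol-level visibility is a prerequisite for any of that.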

The explosion of tools and applications is leading to shadow AI, and AI agents cannot reliably detect emerging attack prompts, making them vulnerable by design and susceptible to sensitive data leakage.

Security and risk teams must retrofit existing controls and deploy new ones to get a handle on agentic risk. In particular, web application firewalls and API security solutions must support MCP. Bot management tools must also maintain resilience as automation toolkits are enhanced with AI and attackers move quickly to weaponize their attacks. For example, to prevent scraping of LLM training data and of content in retrieval-augmented generation (RAG) pipelines, bot management tools need to discover new attacker techniques using ML models that detect malicious patterns, and use encryption and obfuscation to prevent bypass.
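To make the scraping-detection idea concrete, here is a deliberately simple heuristic sketch (thresholds and window size are invented for illustration; real bot management products use far richer signals and trained models): a client that issues many requests across many distinct catalog paths in a short window looks more like a bulk scraper than a shopper.

```python
from collections import defaultdict, deque

WINDOW_SECONDS = 10
MAX_REQUESTS = 20       # illustrative threshold, not a product default
MAX_UNIQUE_PATHS = 15   # broad crawling within the window suggests scraping

class ScraperDetector:
    """Flag clients whose request pattern looks like bulk content scraping."""

    def __init__(self):
        # client_id -> deque of (timestamp, path) within the sliding window
        self.history = defaultdict(deque)

    def observe(self, client_id: str, path: str, now: float) -> bool:
        """Record one request; return True if the client now looks like a scraper."""
        window = self.history[client_id]
        window.append((now, path))
        # Expire entries older than the sliding window.
        while window and now - window[0][0] > WINDOW_SECONDS:
            window.popleft()
        unique_paths = {p for _, p in window}
        return len(window) > MAX_REQUESTS or len(unique_paths) > MAX_UNIQUE_PATHS

detector = ScraperDetector()
flagged = False
for i in range(30):  # 30 requests to distinct product pages in 3 seconds
    flagged = detector.observe("client-a", f"/product/{i}", now=100.0 + i * 0.1)
print(flagged)  # → True
```

An ML-based system replaces the fixed thresholds with learned decision boundaries over many such features, but the detection primitive, per-client behavioral windows, is the same.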

Defenders also need to employ vulnerability testing for AI models and apps in conjunction with runtime inspection. These enhancements are imperative for traceability and providing the insights necessary for human-in-the-loop (HITL) oversight.
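One concrete form HITL oversight can take is an approval gate: low-risk agent actions proceed autonomously, while actions above a risk tier are held for human review. The tiers and action names below are hypothetical, and the `approve` callable stands in for a real review queue or ticketing integration:

```python
# Hypothetical risk tiers; action names are illustrative, not from any product.
HIGH_RISK_ACTIONS = {"issue_refund", "change_shipping_address", "export_customer_data"}

def execute_with_oversight(action: str, approve) -> str:
    """Route high-risk agent actions through a human-in-the-loop approval step.

    `approve` is any callable taking the action name and returning True/False;
    in practice it would enqueue the action for a reviewer and await a decision.
    """
    if action in HIGH_RISK_ACTIONS and not approve(action):
        return "blocked"
    return "executed"

print(execute_with_oversight("search_catalog", lambda a: False))  # → executed
print(execute_with_oversight("issue_refund", lambda a: False))    # → blocked
```

Runtime inspection supplies the traceability this gate depends on: a reviewer can only make a sound call if the agent's prior actions and inputs are logged and attributable.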

The north star: See. Set. Synthesize. Save.

The recipe for agentic security comes down to reducing time to detection, speeding up remediation, and offsetting the burden on security teams.

The choice of AI models can compromise your entire AI system, so models need to be hardened before deployment and at runtime. F5 AI Red Team, coupled with F5 AI Guardrails, provides constant testing and continuous behavioral baselines that can detect potentially malicious activity.

Mitigating only specific, publicly disclosed attacks leaves the same core weakness open to repeat exploitation. A proactive approach focused on discovering fundamental weaknesses through continuous testing provides superior protection.

Red teaming can simulate adversarial attacks to pressure-test AI systems, ensuring risks are addressed before deployment. When combined with live monitoring and runtime protection systems, security controls can continuously detect and automatically respond to anomalous behavior and attacks like data exfiltration.

With regulations like the EU AI Act becoming mandatory and frameworks from NIST setting expectations, AI security postures will need to be demonstrable.

By seeing what could go wrong, setting appropriate guardrails, and synthesizing patterns to automate incident response, retailers will save time, and ultimately, money.

The future is now

Risks to agentic AI cannot be fully addressed by traditional tools, which leave gaps in protection and governance.

To address this, eCommerce providers need to secure AI workflows for all stages of AI maturity. This includes proactive attack testing, runtime inspection and enforcement, and centralized observability and compliance to meet emerging regulatory standards.

By bringing red teaming and runtime guardrail capabilities to the F5 Application Delivery and Security Platform (ADSP), retailers can secure agentic AI rollouts continuously, from pilot to production.

For more, go to F5 solutions for eCommerce.

About the Author

Mani Gadde, Senior Manager, Industry Marketing
