Our understanding of technology and how it works is undergoing a fundamental transformation on multiple fronts. To see how organizations are navigating these changes, F5 partnered with SlashData, a leading technology-focused market research company, to survey hundreds of IT, security, and software development leaders and practitioners across diverse industries and global regions.
Participants included users of NGINX open source projects and F5 NGINX commercial products. What we learned validated the sense of urgency expressed by organizations seeking to make immediate changes in containerization, security, developer platforms, and AI. Below are the key findings from our 2025 NGINX Annual Survey.
Emerging trends: The agentic, non-CPU future
F5 NGINX has emerged as the default front door of choice for AI infrastructure, with leading AI hardware and software providers adopting it as their primary or recommended reverse proxy and delivery controller for AI applications. Survey findings revealed strong momentum among NGINX users—and the broader application delivery community—toward embracing an AI-driven future.
AI agent use: When asked about engagement with agentic AI tasks (see Figure 1), respondents indicated that “Configuring NGINX” had the highest current adoption at 25%, followed closely by “network traffic optimization and load balancing” at 24%. “Infrastructure deployment and scaling”—along with “security vulnerability remediation”—are currently being used by 23% of respondents.
Combined engagement: When combining responses from respondents currently using and experimenting with agentic AI, “log analysis and proactive troubleshooting” leads at 48%. “Configuring NGINX” and “infrastructure deployment and scaling” tied at 46% each, while “network traffic optimization” follows at 45%.
Strongest future interest: “Drift detection and correction” generated the strongest future interest at 33%, followed by “monitoring NGINX deployments” at 32% and “incident alerting and triage” at 31%.
Specialized hardware: 25% of survey respondents run workloads on GPUs, TPUs, or FPGAs—notable for infrastructure traditionally focused on general compute (see Figure 2). We expect this share to keep growing due to several key factors: improvements in smaller AI models, the ease of running those models on cloud-native infrastructure such as Docker and cloud hyperscalers, and the offload of application delivery processes to specialized silicon (for example, encryption and SSL offload to NPUs and FPGAs, and AI inference deployed to GPUs and TPUs).
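To make the SSL offload pattern concrete, here is a minimal sketch of an NGINX configuration that terminates TLS at the proxy and forwards plaintext traffic to inference backends. The upstream addresses, hostname, and certificate paths are illustrative placeholders, not drawn from the survey:

```nginx
# Terminate TLS at the proxy so backend GPU/TPU inference
# servers never handle encryption themselves (SSL offload).
upstream inference_backend {
    # Placeholder addresses for illustration only.
    server 10.0.0.11:8000;
    server 10.0.0.12:8000;
}

server {
    listen 443 ssl;
    server_name ai.example.com;

    # Placeholder certificate paths.
    ssl_certificate     /etc/nginx/certs/ai.example.com.crt;
    ssl_certificate_key /etc/nginx/certs/ai.example.com.key;

    location / {
        # Traffic to the inference servers is plain HTTP;
        # encryption has already been offloaded at NGINX.
        proxy_pass http://inference_backend;
        proxy_set_header Host $host;
    }
}
```

With encryption handled at the front door, the accelerator-backed servers spend their cycles on inference rather than TLS handshakes.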
Top barriers: Survey respondents listed security concerns (26%) and lack of trust in accuracy (24%) as the primary barriers to AI integration. Integration complexity, compliance/regulatory restrictions, and limited understanding of agent capabilities all landed at 17%.
Cloud native is ubiquitous but remains a work in progress
The cloud-native revolution continues to gain momentum well over a decade after it kicked off with containers and Kubernetes. For key elements of cloud native, even if adoption is nearly universal, full penetration remains distant (see Figure 3).
Specifically, our survey found:
- Multi-environment reality is really here: 66% of organizations use at least one cloud option, but on-premises still leads deployment locations at 39% vs. public cloud at 38% and hybrid at 36%. This reflects the complex, distributed nature of modern infrastructure rather than a simple "cloud migration" story.
- Container adoption still has room to run: Only 42% of respondents run workloads on containers, and microservices adoption sits at just 31%. Virtual machines dominate at 60%, suggesting many organizations are still in earlier stages of cloud-native transformation.
- Kubernetes fragmentation: While self-managed Kubernetes leads container orchestration at 24%, Red Hat OpenShift accounts for 21% (self-managed and fully managed combined), and managed services from Amazon Web Services (21%), Microsoft Azure (17%), and Google Cloud Platform (17%) show significant traction.
The API and microservices security management gap is huge
Over the past decade, many technology organizations have moved to adopt APIs as a core connection mechanism for both internal and external operations. In cloud-native architectures, API-first is a core design principle. Correspondingly, 86% of respondents deploy API gateways to manage their API infrastructures (see Figure 4).
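As a rough sketch of what an API gateway deployment looks like in practice, the NGINX configuration below routes versioned API paths to separate backend services. The service names, ports, and paths are hypothetical, chosen only to illustrate the pattern:

```nginx
# A minimal API gateway sketch: route published API paths to
# separate backend services. Names and ports are placeholders.
upstream users_service   { server 127.0.0.1:8081; }
upstream billing_service { server 127.0.0.1:8082; }

server {
    listen 80;
    server_name api.example.com;

    location /api/v1/users/ {
        proxy_pass http://users_service;
    }

    location /api/v1/billing/ {
        proxy_pass http://billing_service;
    }

    # Reject anything outside the published API surface.
    location / {
        return 404;
    }
}
```

Centralizing routing this way is what gives the gateway its leverage: one place to apply authentication, rate limits, and observability across every API it fronts.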
Despite this widespread shift to APIs and the strong presence of API gateways, the majority of organizations still have immature API security practices (see Figure 5). While 86% of organizations use APIs, only 34% have implemented API security. This is a massive exposure in modern application infrastructure.
Fewer than half of respondents are focused on API traffic analysis (43%) and observability (38%), two other core elements of API security. This gap likely indicates challenges in observing and managing APIs writ large. In fact, 23% of respondents use different API management approaches across teams, a fragmentation that likely stems from difficulties getting the entire organization on the classic “golden path” for API management.
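One baseline API security control that closes part of this gap is per-client rate limiting at the gateway. The sketch below shows the idea in NGINX; the zone name, limits, and backend address are illustrative assumptions, not recommended values:

```nginx
# Per-client rate limiting: track requests by client IP in a
# shared zone. The rate and zone size here are examples only.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 80;

    location /api/ {
        # Allow short bursts, then answer abusive clients
        # with 429 Too Many Requests instead of proxying.
        limit_req zone=api_limit burst=20 nodelay;
        limit_req_status 429;
        proxy_pass http://127.0.0.1:8080;
    }
}
```

Controls like this, applied uniformly at the gateway, are far easier to audit than protections scattered across individual services.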
Platform engineering has evolved, but is still immature
Platform engineering as a buzzword has crossed over into the mainstream. In fact, 65% of respondents have begun implementing platform engineering capabilities and responsibilities (see Figure 6). This result spans large organizations with dedicated platform teams down to single team members serving as platform engineering leads. Approximately 27% have small, dedicated platform engineering teams, and 21% of organizations have individual platform engineering roles within development/operations teams. Another 13% of organizations have large, dedicated platform engineering teams. So, clearly, the platform engineering value proposition has spread and taken hold.
That said, across all team sizes, survey responses to questions about platform challenges reveal early-stage struggles. Only 20% of respondents reported that they had no significant challenges. The remainder indicated that a wide variety of platform engineering challenges were actively impacting their organizations, including issues with security and compliance (18%), documentation maintenance (16%), keeping technology current (16%), and resource constraints (14%).
The survey did find clear differentiation between larger and smaller teams regarding priorities for service delivery and value proposition. Not surprisingly, larger teams focused on more sophisticated areas while smaller teams resembled traditional DevOps in focus. Large, dedicated platform engineering teams are more likely to provide sophisticated services such as Database as a Service (54%), observability (52%), API management (51%), configuration and management of firewalls (54%), and CI/CD pipeline tooling (50%).
Keeping up (with cloud, AI, and APIs) is hard to do
Companies are deploying technology faster than they can manage it. The survey shows that organizations are eager to adopt containers, APIs, and platform engineering, yet they’re currently unprepared to secure and operate what they're building. The most critical failure is API security. Nearly every organization runs APIs, but two-thirds lack basic protection. This isn't a future risk; it's an active vulnerability sitting in production right now.
Meanwhile, real progress is happening in specific areas. AI agents are on the verge of handling actual infrastructure tasks at scale, not just demos. GPUs and specialized processors are becoming the standard. Platform engineering teams exist in most organizations, even if they're still figuring out their mandate.
What matters now is execution discipline. Our recommendations? Don’t add new capabilities until you can adequately manage what you already have. Standardize your API security and management practices. Build the observability you need to understand your systems. Give platform teams clear responsibilities and the resources to deliver.
The technology exists to build modern, efficient infrastructure. Now is the time to do the unglamorous but necessary work required to make it secure and sustainable.
To learn more about F5 NGINX products, visit our webpage.