Our understanding of technology and how it works is undergoing a fundamental transformation on multiple fronts. To see how organizations are navigating these changes, F5 partnered with SlashData, a leading technology-focused market research company, to survey hundreds of IT, security, and software development leaders and practitioners across diverse industries and global regions.
Participants included users of NGINX open source projects (including NGINX Open Source) and of F5 NGINX commercial products. What we learned validated the sense of urgency expressed by organizations seeking to make immediate changes in containerization, security, developer platforms, and AI. Below are the key findings from our 2025 NGINX Annual Survey.
F5 NGINX has emerged as the front door of choice for AI infrastructure, with leading AI hardware and software providers adopting it as their primary or recommended reverse proxy and delivery controller for AI applications. Survey findings revealed strong momentum among NGINX users, and the broader application delivery community, toward embracing an AI-driven future.
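To make the "front door" role concrete, a reverse proxy in front of an AI inference service can be sketched in a few NGINX directives. The hostnames, port, certificate paths, and timeout values below are illustrative assumptions, not survey findings:

```nginx
# Upstream pool of AI inference servers (names and port are hypothetical).
upstream inference_backend {
    least_conn;                       # spread long-lived inference requests evenly
    server inference-1.internal:8000;
    server inference-2.internal:8000;
    keepalive 32;                     # reuse connections to the backends
}

server {
    listen 443 ssl;
    server_name ai.example.com;

    ssl_certificate     /etc/nginx/certs/ai.example.com.pem;
    ssl_certificate_key /etc/nginx/certs/ai.example.com.key;

    location /v1/ {
        proxy_pass http://inference_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for upstream keepalive
        proxy_read_timeout 300s;          # model responses can be slow
        proxy_buffering off;              # stream tokens back as they arrive
    }
}
```

The notable deviations from a generic web proxy are the long read timeout and disabled response buffering, both of which matter for token-streaming inference workloads.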
AI agent use: When asked about engagement with agentic AI tasks (see Figure 1), respondents indicated that “Configuring NGINX” had the highest current adoption at 25%, followed closely by “network traffic optimization and load balancing” at 24%. “Infrastructure deployment and scaling” and “security vulnerability remediation” are each currently used by 23% of respondents.
Combined engagement: When respondents currently using agentic AI are combined with those experimenting with it, “log analysis and proactive troubleshooting” leads at 48%. “Configuring NGINX” and “infrastructure deployment and scaling” tie at 46% each, while “network traffic optimization” follows at 45%.
Strongest future interest: “Drift detection and correction” generated the strongest future interest at 33%, followed by “monitoring NGINX deployments” at 32% and “incident alerting and triage” at 31%.
Specialized hardware: 25% of survey respondents run workloads on GPUs, TPUs, or FPGAs, a notable figure for infrastructure traditionally focused on general compute (see Figure 2). We expect this share to keep growing for several reasons: improvements in smaller AI models, the ease of running those models on cloud-native infrastructure such as Docker and cloud hyperscalers, and the offload of application delivery work to specialized hardware (for example, encryption and SSL offload to NPUs and FPGAs, and AI inference on GPUs and TPUs).
Top barriers: Survey respondents listed security concerns (26%) and lack of trust in accuracy (24%) as the primary barriers to AI integration. Integration complexity, compliance/regulatory restrictions, and limited understanding of agent capabilities all landed at 17%.
The cloud-native revolution continues to gain momentum well over a decade after it kicked off with containers and Kubernetes. For key elements of cloud native, adoption is nearly universal, yet full penetration remains distant (see Figure 3).
Specifically, our survey found:
Over the past decade, many technology organizations have moved to adopt APIs as a core connection mechanism for both internal and external operations. In cloud-native architectures, API-first is a core design principle. Correspondingly, 86% of respondents deploy API gateways to manage their API infrastructures (see Figure 4).
Despite this widespread shift to APIs and the strong presence of API gateways, the majority of organizations still have immature API security practices (see Figure 5). While 86% of organizations use APIs, only 34% have implemented API security. This is a massive exposure in modern application infrastructure.
Less than half of respondents are focused on API traffic analysis (43%) and observability (38%), two other core elements of API security. This gap likely indicates challenges in observing and managing APIs writ large. In fact, 23% of respondents use different API management approaches across teams, a fragmentation that likely stems from difficulties getting the entire organization on the classic “golden path” for API management.
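The gateway-level controls discussed above (rate limiting, authentication, and traffic observability) can be sketched with standard NGINX directives. This assumes NGINX is built with the auth_request module; the zone size, rate, hostnames, and log fields are illustrative assumptions:

```nginx
# Rate-limit zone keyed on client IP: 10 MB of shared state, 20 requests/second.
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=20r/s;

# Structured access log for API traffic analysis (field selection is illustrative).
log_format api_json escape=json
    '{"time":"$time_iso8601","uri":"$request_uri",'
    '"status":"$status","latency":"$request_time"}';

server {
    listen 443 ssl;
    server_name api.example.com;
    ssl_certificate     /etc/nginx/certs/api.example.com.pem;
    ssl_certificate_key /etc/nginx/certs/api.example.com.key;

    access_log /var/log/nginx/api_access.log api_json;

    location /api/ {
        limit_req zone=api_limit burst=40 nodelay;  # throttle abusive clients

        # Delegate token validation to an internal auth service.
        auth_request /_auth;

        proxy_pass http://api_backend;   # upstream defined elsewhere
    }

    location = /_auth {
        internal;
        proxy_pass http://auth-service.internal/validate;
        proxy_pass_request_body off;     # auth service only needs headers
        proxy_set_header Content-Length "";
    }
}
```

Centralizing these controls at the gateway is one way to replace the fragmented per-team approaches the survey describes with a single enforced path for API traffic.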
Platform engineering as a buzzword has crossed over into the mainstream. In fact, 65% of respondents have begun implementing platform engineering capabilities and responsibilities (see Figure 6). This result spans large organizations with dedicated platform teams down to single team members serving as platform engineering leads. Approximately 27% have small, dedicated platform engineering teams, 21% have individual platform engineering roles within development/operations teams, and another 13% have large, dedicated platform engineering teams. Clearly, the platform engineering value proposition has spread and taken hold.
That said, across all team sizes, survey responses to questions about platform challenges reveal early-stage struggles. Only 20% of respondents reported that they had no significant challenges. The remainder indicated that a wide variety of platform engineering challenges were actively impacting their organizations, including issues with security and compliance (18%), documentation maintenance (16%), keeping technology current (16%), and resource constraints (14%).
The survey did find clear differentiation between larger and smaller teams regarding priorities for service delivery and value proposition. Not surprisingly, larger teams focused on more sophisticated areas while smaller teams resembled traditional DevOps in focus. Large, dedicated platform engineering teams are more likely to provide sophisticated services such as Database as a Service (54%), configuration and management of firewalls (54%), observability (52%), API management (51%), and CI/CD pipeline tooling (50%).
Companies are deploying technology faster than they can manage it. The survey shows that organizations are eager to adopt containers, APIs, and platform engineering, yet they’re currently unprepared to secure and operate what they’re building. The most critical failure is API security. Nearly every organization runs APIs, but two-thirds lack basic protection. This isn’t a future risk; it’s an active vulnerability sitting in production right now.
Meanwhile, real progress is happening in specific areas. AI agents are on the verge of handling actual infrastructure tasks at scale, not just demos. GPUs and specialized processors are becoming the standard. Platform engineering teams exist in most organizations, even if they're still figuring out their mandate.
What matters now is execution discipline. Our recommendations? Don’t add new capabilities until you can adequately manage what you already have. Standardize your API security and management practices. Build the observability you need to understand your systems. Give platform teams clear responsibilities and the resources to deliver.
The technology exists to build modern, efficient infrastructure. Now is the time to do the unglamorous but necessary work required to make it secure and sustainable.
To learn more about F5 NGINX products, visit our webpage.