Service providers, as the initial point of connectivity for many AI applications and devices, are in the perfect position to offer edge AI technologies and monetize their investment if they start building, optimizing, and securing their edge AI infrastructure today. AI applications are everywhere and growing at an exponential rate. Edge AI technologies are critical for AI applications that need the responsiveness and accuracy businesses require to stay competitive and relevant.
The opportunity for service providers
According to a recent report from STL Partners, businesses are expected to increase their edge AI spending by 285% by 2030, reaching over $157 billion. All of these AI applications and projects will need to access edge AI infrastructure.
This surge in edge AI application development and expansion will generate over $2.5 billion of revenue for service providers that offer edge AI services to their customers. Service providers that build their edge AI infrastructure early will be the ones that gain the most value from this opportunity.
The need for edge AI environments
The connected car is an ideal use case for edge AI: localized, responsive information is necessary for cars to operate safely and follow optimized driving patterns. The number of AI-connected cars is increasing rapidly, and auto manufacturers are just starting to take advantage of edge AI technologies. According to the STL Partners report, connected cars will process 50% of their AI workloads through edge AI.
The evolution of the modern smart city is another example where edge AI will play an important role. Smart cities utilize real-time traffic monitoring, video surveillance, and resource management to optimize their infrastructure and security. Reduced latency and localized information will enhance their ability to operate efficiently and securely. By 2030, 54% of the typical smart city’s AI workload will be processed at the edge, according to the STL Partners report, moving away from on-device and on-premises processing.
Developing an edge AI infrastructure
Today, many AI applications use the cloud, but the cloud is impractical for many of them. The cloud cannot support real-time, mission-critical AI applications and incurs significant backhaul costs. Many AI applications, such as self-driving cars and video surveillance, require near-real-time data to work effectively. Exposing proprietary data to the cloud is also a concern: leaking personal information or proprietary intellectual property is not an option, and security must be top of mind when designing an edge AI architecture.
Instead of the cloud, AI application owners are looking for edge AI resources that improve the reliability, latency, and security of their AI communications. Service providers will be among the first places they look to access edge AI capabilities.
How F5 can help
To gain the confidence and trust of customers utilizing their edge AI resources, service providers need a robust, secure, and optimized edge AI architecture. With the F5 Application Delivery and Security Platform (ADSP), which secures and optimizes AI resources, F5 is ready to be your AI partner as you design and build out your edge AI infrastructure.
To learn more, read about F5’s AI solutions.