Sentient Democratizes AI with F5 Distributed Cloud Services

Sentient, a Singapore-based AI and data platform provider, is democratizing access to AI for businesses of all sizes. F5 Distributed Cloud Services helped streamline Sentient’s dispersed operations, lowering operating costs while enabling real-time AI processing at the edge.

Business Challenge

Sentient’s journey as an AI innovator has been marked by remarkable success. In pursuit of its vision of empowering diverse industries with transformative AI solutions, Sentient seeks to provide real-time insights that drive effective decision making. The company’s Sentient Marketplace is a hub of innovative AI services that allows businesses to easily integrate AI into their existing workflows, enabling customers across ASEAN countries and Japan to harness the power of AI.

However, rapid growth and an expanding customer base brought several complex challenges, chief among them the complexity of multi-cloud environments and the limitations of traditional cloud-native processing. Operating across multiple cloud providers brings inherent challenges: each offers its own proprietary Kubernetes distribution (for example, GKE or AKS), which can prove daunting for organizations that need to leverage these platforms across their respective cloud environments.

Further compounding the challenges was the cumbersome deployment process for AI models at customer sites. The manual approach of downloading models, setting up an on-premises Kubernetes environment at each edge site, and applying updates by hand was not only time-consuming but also error-prone, resulting in delays and inconsistent model deployments. Additionally, synchronizing new models from the central Sentient Marketplace to customer sites was a largely manual process, leading to operational inefficiencies across customer sites.

As Sentient’s AI applications gained traction with customers, the company confronted the challenges of operating and deploying these services across distributed environments: on-premises, public cloud, and private cloud. The result was a longer time-to-value cycle for its offerings, increased operational complexity, and inconsistent workflow orchestration and security policies. Moreover, Sentient could not find a solution that would work across any cloud for the secure deployment and delivery of its AI applications.


Recognizing the need for a more streamlined and efficient approach, Sentient partnered with F5, using F5 Distributed Cloud Services to offer turnkey, enterprise-grade AI “as a service” solutions to customers across a variety of verticals, unlocking the power of AI for their businesses.

F5 Distributed Cloud App Stack is the pivotal service Sentient adopted on the platform to enable real-time data processing at the edge. Distributed Cloud App Stack brings localized inference data closer to the application, reducing dependencies on internet-based or mobile links for inference data retrieval.

Distributed Cloud App Stack unified Sentient’s application environment, eliminating the need to manage multiple Kubernetes distributions and streamlining how the company deploys and manages key AI services. This simplification yielded significant operational improvements and allowed Sentient to deploy AI models easily across any cloud provider with increased efficiency.

“Thanks to F5 Distributed Cloud App Stack, how we deliver AI infrastructure to our customers has been transformed,” says Christopher Yeo, CEO of Sentient. “Processing data at the edge has unlocked real-time insights, empowering our customers to make instant, informed decisions—no matter the data volume or where they are. This shift has revolutionized our business, placing us at the forefront of AI innovation.”

Moving forward, Sentient is also looking to explore F5 Distributed Cloud Services to secure its LLM solutions. “Our experience with F5 has given us the confidence to explore F5 Distributed Cloud Web Application and API Protection (WAAP) to holistically secure our LLM services and mitigate LLM threats such as denial of service and sensitive information disclosure,” says Yeo. With F5, Sentient can speed time-to-value for enterprises seeking innovative AI/ML solutions.


Unified PaaS platform

F5 Distributed Cloud Services eliminated the need to manage multiple cloud environments, simplifying operations and improving efficiency. This allowed Sentient to deploy AI models across multiple cloud providers with increased agility and scalability.

Real-time insights at the edge

The solution streamlined Sentient’s operations, reducing latency and enabling real-time AI processing at the edge. Delivering inference at the edge eliminates the network and bandwidth constraints imposed by geographical location and ensures that inference results reach applications in real time. This shift in model deployment enabled Sentient to deliver high-performing AI applications to its customers with faster time to value.

Distributed Cloud App Stack includes built-in GPU capabilities, reducing the need for Sentient to deploy additional GPU resources at every inference site.

Reduced costs and enhanced security

By consolidating operations onto the Distributed Cloud Platform, Sentient was able to optimize resource allocation and reduce overall operational costs. Native integration and support for application and API security on the platform ensured that all inference apps were protected.

The collaboration also delivered significant cost savings. Managing multiple cloud platforms had required dedicated teams and incurred substantial resource costs. With F5 Distributed Cloud Services, Sentient consolidated operations and cut costs by optimizing resources and streamlining application management, freeing up teams for strategic initiatives.

Yeo says, “Here’s the thing about F5 Distributed Cloud Services—it’s not just a powerhouse for AI; it’s our secret weapon for data security. It’s all about trust. With F5, Sentient isn’t just innovating, it’s also ensuring that data privacy and integrity are rock solid.”


Results

  • Consistent workflows for orchestrating AI services
  • Simplified operations for thousands of edge sites
  • Secure delivery of training data for inference purposes
  • Faster time to market for AI deployments

Challenges

  • Increased complexity across multiple clouds
  • Inconsistent workflows for delivering AI data
  • Difficulty securing distributed AI infrastructure