Advanced Kubernetes Deployment Strategies

NGINX | March 24, 2022

This blog is the fifth in our five‑part series about Kubernetes networking for Microservices March 2022.

Also be sure to download our free eBook, Managing Kubernetes Traffic with NGINX: A Practical Guide, for detailed guidance on implementing Kubernetes networking with NGINX.

Once you get Kubernetes into production, you have to keep it there! In Unit 4, we address how Kubernetes networking can increase uptime and improve customer experiences.

Three activities guide you progressively from a high‑level overview to practical application. We suggest you complete all three to get the best experience.

Step 1: Watch the Livestream (1 Hour)

Each Microservices March livestream provides a high‑level overview of the topic, featuring subject matter experts from learnk8s and NGINX. If you miss the live airing on March 28 – don’t worry! You can catch it on demand.

In this episode, we cover how to implement zero‑downtime deployments using tactics such as:

  • Traffic splitting
  • Blue-green deployments
  • Tracing
  • Mapping the flow in real time
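For a concrete taste of the first two tactics, here is a minimal sketch of a weighted (canary‑style) traffic split using the NGINX Ingress Controller’s VirtualServer resource. The host, upstream, and Service names are hypothetical, and the sketch assumes the controller and its custom resource definitions are already installed.

# Hypothetical example: send 90% of requests to v1 and 10% to v2.
apiVersion: k8s.nginx.org/v1
kind: VirtualServer
metadata:
  name: api
spec:
  host: api.example.com
  upstreams:
    - name: api-v1
      service: api-v1-svc   # existing Service for the current version
      port: 80
    - name: api-v2
      service: api-v2-svc   # existing Service for the new version
      port: 80
  routes:
    - path: /
      splits:
        - weight: 90
          action:
            pass: api-v1
        - weight: 10
          action:
            pass: api-v2

A blue‑green cutover follows the same pattern taken to its extreme: once the new version checks out, shift the weights to 0 and 100 (or replace the splits with a single pass action) so all traffic moves over at once.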


Step 2: Deepen Your Knowledge (1–2 Hours)

We expect you’ll have more questions after the livestream – that’s why we curated a collection of relevant reading and videos. This Unit’s deep dive covers two topics: traffic management to boost resilience and techniques for improving visibility.

Blog | How to Improve Resilience in Kubernetes with Advanced Traffic Management
Improve the resilience of Kubernetes apps with the traffic control and splitting methods discussed in this blog – rate limiting, circuit breaking, debug routing, A/B testing, and canary and blue‑green deployments – and learn how NGINX products make them easier to implement.
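As one hedged example of how these techniques look with NGINX Ingress Controller, the snippet below sketches a rate‑limiting Policy; the name and limits are hypothetical, and your controller version may expose the resource under a different API version.

# Hypothetical example: cap each client IP at 10 requests per second.
apiVersion: k8s.nginx.org/v1
kind: Policy
metadata:
  name: rate-limit-per-client
spec:
  rateLimit:
    rate: 10r/s                  # allowed request rate
    key: ${binary_remote_addr}   # track the limit per client IP address
    zoneSize: 10M                # shared memory used to track clients

The policy takes effect once it is referenced from the policies field of a VirtualServer or VirtualServerRoute, so you can apply different limits to different routes or tenants.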

Video | How to Do Traffic Splitting in Kubernetes
When it’s time to move from an old service to the new version, you don’t want to move all your traffic at once in case there are any issues with the new service. That’s why traffic splitting (including circuit breaking and canary and blue‑green deployments) is a valuable tool for ensuring resilience. In this video, you learn about best practices and use cases for north‑south and east‑west traffic splits, and see two traffic splitting demos.

Blog | How to Improve Visibility in Kubernetes
There are two types of visibility data that provide crucial insights into application and Kubernetes performance: live data and historical data. In this blog, we discuss how you can use this data – gleaned from an Ingress controller or service mesh – to troubleshoot common Kubernetes problems.

Video | How to Improve Visibility in Kubernetes with Prometheus, Grafana, and NGINX
In this video, our microservices experts demonstrate how to improve visibility in Kubernetes by leveraging live monitoring of key load‑balancing and performance metrics, exporting metrics to Prometheus, and using Grafana to create a view of cumulative performance.
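If you want to try this yourself, the sketch below shows a typical Prometheus scrape job for the controller’s metrics endpoint. It assumes the NGINX Ingress Controller was started with Prometheus metrics enabled and that its pods carry the usual prometheus.io/scrape and prometheus.io/port annotations (the controller’s metrics port is commonly 9113); treat it as a starting point rather than a drop‑in config.

# Hypothetical Prometheus job that discovers annotated Ingress Controller pods.
scrape_configs:
  - job_name: nginx-ingress
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods that ask to be scraped.
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # Scrape the port named in the prometheus.io/port annotation.
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        target_label: __address__
        replacement: $1:$2

Once the metrics are flowing, a Grafana dashboard backed by this Prometheus data source gives you the cumulative view of load‑balancing and performance metrics described in the video.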

Webinar | Strengthen Security and Traffic Visibility on Amazon EKS with NGINX
Zipwhip (acquired by Twilio) planned to launch a new SaaS app, but their legacy infrastructure couldn’t provide the required stability and agility. In this webinar, we sit down with their principal architect to learn about the strategic and technical steps they took to adopt Kubernetes, as well as the outcomes they achieved using Amazon EKS and NGINX Ingress Controller.

Bonus Research

If you’re keen to deepen your knowledge of security and service mesh – and have more than 1–2 hours to spend – then we suggest two additional resources to get you started.

eBook | 97 Things Every SRE Should Know
This O’Reilly eBook is a curated set of insights, tips, and tricks for Site Reliability Engineers (SREs), including concepts every SRE needs to understand, how to build an effective SRE practice, and how to interact with stakeholder teams.

Webinar | Control Kubernetes Ingress and Egress Together with NGINX
While Kubernetes ingress traffic gets most of the attention, how you handle egress traffic is just as important – and it’s a critical part of a Zero‑Trust Architecture. Watch this webinar to learn how to simplify traffic management by controlling ingress and egress in a single configuration.

Step 3: Get Hands On (1 Hour)

Even with all the best webinars and research, there’s nothing quite like getting your hands on the tech. The labs walk you through common scenarios to reinforce your learning.

In our fourth self‑paced lab, Improve Kubernetes Uptime and Resilience with a Canary Deployment, you use NGINX Service Mesh to split traffic between two versions of a backend service and then gradually transition traffic from version 1 to version 2.
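Under the hood, NGINX Service Mesh implements the SMI traffic‑split API, so the canary step boils down to a resource along these lines. The Service names are hypothetical and the exact apiVersion depends on the mesh version you have installed, so treat this as a sketch rather than the lab’s literal manifest.

# Hypothetical SMI TrafficSplit: send 10% of in-mesh traffic to version 2.
apiVersion: split.smi-spec.io/v1alpha3
kind: TrafficSplit
metadata:
  name: backend-canary
spec:
  service: backend-svc        # root Service that clients address
  backends:
    - service: backend-v1     # Service for the current version
      weight: 90
    - service: backend-v2     # Service for the canary version
      weight: 10

Completing the transition is a matter of re‑applying the resource with progressively larger weights for backend-v2 until it receives 100% of the traffic.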

To access the lab, you need to register for Microservices March 2022. If you’re already registered, the email you received with the Unit 4 Learning Guide includes access instructions. Alternatively, you can try out the lab in your own environment, using NGINX Tutorial: Improve Uptime and Resilience with a Canary Deployment as a guide.


Why Register for Microservices March?

While some of the activities (the livestreams and blogs) are freely available, we need to collect just a little personal information to get you set up with the full experience. Registration gives you:

  • Access to four self‑paced labs where you can get hands‑on with the tech via common scenarios
  • Membership in the Microservices March Slack channel for asking questions of the experts and networking with fellow participants
  • Weekly learning guides to help you stay on top of the agenda
  • Calendar invites for the livestreams

For detailed guidance on implementing Kubernetes networking with NGINX, download our eBook, Managing Kubernetes Traffic with NGINX: A Practical Guide.



About the Author

Jenn Gile
Head of Product Marketing, NGINX

More blogs by Jenn Gile

Related Blog Posts

Automating Certificate Management in a Kubernetes Environment
NGINX | 10/05/2022

Simplify cert management by providing unique, automatically renewed and updated certificates to your endpoints.

Secure Your API Gateway with NGINX App Protect WAF
NGINX | 05/26/2022

As monoliths move to microservices, applications are developed faster than ever. Speed is necessary to stay competitive and APIs sit at the front of these rapid modernization efforts. But the popularity of APIs for application modernization has significant implications for app security.

How Do I Choose? API Gateway vs. Ingress Controller vs. Service Mesh
NGINX | 12/09/2021

When you need an API gateway in Kubernetes, how do you choose among API gateway vs. Ingress controller vs. service mesh? We guide you through the decision, with sample scenarios for north-south and east-west API traffic, plus use cases where an API gateway is the right tool.

Deploying NGINX as an API Gateway, Part 2: Protecting Backend Services
NGINX | 01/20/2021

In the second post in our API gateway series, Liam shows you how to batten down the hatches on your API services. You can use rate limiting, access restrictions, request size limits, and request body validation to frustrate illegitimate or overly burdensome requests.

New Joomla Exploit CVE-2015-8562
NGINX | 12/15/2015

Read about the new zero‑day exploit in Joomla and see the NGINX configuration for applying a fix in NGINX or NGINX Plus.

Why Do I See “Welcome to nginx!” on My Favorite Website?
NGINX | 01/01/2014

The ‘Welcome to nginx!’ page is presented when NGINX web server software has been installed on a computer but has not yet been configured.
