
Deploying Apps Through a Few Strategies

A service mesh like Istio was developed to help address networking concerns that are "beyond the scope" of Kubernetes. Istio, or something like it, can help manage network traffic for strategies like canary deployments.

Recreate with Downtime

First, destroy all existing instances of the app.
Then, release the new version.
Problem: the app goes down during the process.
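
In Kubernetes this maps to the Recreate strategy on a Deployment. A minimal sketch, reusing the webapp names that appear later in this note (names and image are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 3
  strategy:
    type: Recreate           # kill every old pod before any new pod comes up
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp-box
          image: web-api:2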

Rolling Update without Downtime

Take down one instance of the app at a time and replace it with the new version.
The app never goes down, so this is better than recreate. This is the default in k8s :)
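
In a Deployment this is the RollingUpdate strategy, tunable with maxUnavailable and maxSurge. A minimal sketch with illustrative values:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1      # at most 1 pod below the desired count during the rollout
      maxSurge: 1            # at most 1 pod above the desired count during the rollout
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp-box
          image: web-api:2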

Deploy Then Test Then Redirect Traffic With Blue-Green

The new version of the app gets deployed alongside the old one.
Routing helps here: tests are run against the new version, and once they pass, traffic is redirected to the new app.
Service meshes can be helpful here.

How does this work in K8s?

Understand the Current State

  • a deployment of pods of the app is present
  • a service routing traffic to the pods is also present

[Diagram: a Service routing traffic to a Deployment's pods]

Add Labels to the Deployment and the Service

Here, something like "version: 1". Note the wrapper boxes in the diagram below are just there to visually "tie" the labels to the objects.
The label on the deployment's pod template should also be used as the selector in the service definition file. This step can also be done when creating the objects initially: e.g., including a version: 1 label/selector on both the deployment and the service to begin with.

[Diagram: Prod Service (version: 1) selecting the Deployment (version: 1)]

Create A New Deployment With The New App Version

Deploy a new Deployment running the new version of the app.
Make that deployment accessible through a new service so that some folks can test against it, perhaps internal dogfooding or something.
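
A minimal sketch of what that QA/test service could look like, assuming the version: "2" label used in the defs below (the service name and port are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: webapp-service-qa    # separate, internal-facing service for testing; name is illustrative
spec:
  selector:
    version: "2"             # routes only to pods from the new (v2) deployment
  ports:
    - port: 80               # illustrative port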

[Diagram: Prod Service (version: 1) → Prod Deployment (version: 1); QA/Test Service (version: 2) → QA/Test Deployment (version: 2)]

Adjust the Label Selector on the Production Service

Update the Production service's selector to match the new version's label.
This will leave...

  • the qa service still talking to the qa/latest instance of the app
  • the prod service talking to the latest instance of the app
  • the previous deployment of the app "dangling" with no incoming traffic

[Diagram: Prod Service now selecting version: 2, routing to the (was) QA/Test Deployment; the (was) Prod Deployment (version: 1) left with no incoming traffic; the QA/Test Service (version: 2) still routing to the version: 2 Deployment]

See This In Code

App Deployment Def V1

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment
  labels:
    app: webapp
    type: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      version: "1"
  template:
    metadata:
      name: webapp-pod
      labels:
        version: "1"
    spec:
      containers:
        - name: webapp-box
          image: web-api:1

App Service Def V1

apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    version: "1"             # label values are strings, so quote the number
  ports:
    - port: 80               # a Service needs at least one port; the value here is illustrative

App Deployment Def V2

This would be the new version of the deployment.
Here, colors can be used to tell the deployments apart from one another (hence the -green suffix in the name below); ROYGBIV might be useful for matching colors to versions in "order".
The differences here from the first version are:

  • the deployment name
  • the deployment selector label value
  • the pod label
  • the container image version

apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment-green
  labels:
    app: webapp
    type: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      version: "2"
  template:
    metadata:
      name: webapp-pod
      labels:
        version: "2"
    spec:
      containers:
        - name: webapp-box
          image: web-api:2

Change Label On Service Def

Same file as v1 above, just a new spec.selector.version value.

apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    version: "2"             # cut production traffic over to the v2 pods
  ports:
    - port: 80               # same illustrative port as the v1 service def
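
Applying this updated definition (e.g. kubectl apply -f against the service file) is the actual cutover. Afterwards, kubectl get endpoints webapp-service should list only the version 2 pods behind the service, while the version 1 deployment keeps running with no incoming traffic until it is cleaned up.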

Canary

  • Canary, like blue-green, deploys both the current and the "future" instance of the app
  • Canary, unlike blue-green, routes traffic to both the "current" and the "future" version of the app, and the "future" version is referred to as the canary version
  • Canary routes a small amount of traffic to the new version for a time, while retaining traffic to the current version as well

A point of knowledge here: Services distribute traffic "evenly" across the pods they select: with 4 pods, each gets 25% of the traffic; with 5 pods, each gets 20%, etc.

  • spin up the current version's deployment, the current service, and a new deployment with the new canary version of the pods/apps
  • Add a common label to the pods of both deployments, something like midCanary: true or similar, and point the service's selector at that shared label
  • here, if both deployments have the same number of pods, the service will route 50/50 to each deployment
    • One way to send only a small slice of traffic to the canary instead: lean on the "natural" way services spread traffic evenly across pods by running something like 4 pods in the "current" deployment and 1 pod in the "canary" deployment, so the canary gets ~20% (see the sketch after this list)
    • Another way: use a service mesh like Istio (mentioned at the top) to split traffic by explicit weights, independent of pod counts
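
A minimal sketch of that label-based split, assuming the webapp names used earlier in this note, an illustrative midCanary: "true" shared label, and an illustrative port:

apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  selector:
    midCanary: "true"        # shared label, so the service sees pods from BOTH deployments
  ports:
    - port: 80               # illustrative port
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment-current
spec:
  replicas: 4                # 4 of the 5 matching pods -> ~80% of traffic
  selector:
    matchLabels:
      version: "1"
  template:
    metadata:
      labels:
        version: "1"
        midCanary: "true"
    spec:
      containers:
        - name: webapp-box
          image: web-api:1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp-deployment-canary
spec:
  replicas: 1                # 1 of the 5 matching pods -> ~20% of traffic
  selector:
    matchLabels:
      version: "2"
  template:
    metadata:
      labels:
        version: "2"
        midCanary: "true"
    spec:
      containers:
        - name: webapp-box
          image: web-api:2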