An Example Of A Growing Tech Stack

An App

An app, maybe an online e-commerce platform.
Build it into a Docker image.
Deploy it in a K8s Pod, managed by a K8s Deployment object.
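A minimal Deployment definition for this step might look like the sketch below (the names, image tag, and port are all assumptions for illustration):

```yaml
# Hypothetical Deployment for the e-commerce web app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: my-registry/web-app:1.0   # the Docker image built above (name assumed)
          ports:
            - containerPort: 8080          # port the app listens on (assumed)
```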

A DB And A Service To Allow the Two to Talk

The app needs a database, so a db gets built.
A new Pod gets deployed on the same node.
The app Pod needs to talk to the db, so a new K8s Service gets created to allow this pod-to-pod communication.

(Diagram: inside one K8s Node, the Web-App Pod talks to the Database Pod through a ClusterIP Service for intra-node communication.)
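A ClusterIP Service definition for that pod-to-pod link might look like this sketch (the service name, label, and port are illustrative assumptions):

```yaml
# Hypothetical ClusterIP Service in front of the Database Pod.
apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  type: ClusterIP        # the default type; reachable only from inside the cluster
  selector:
    app: database        # matches the Database Pod's label (assumed)
  ports:
    - port: 5432         # port other pods connect to (assumed Postgres-style port)
      targetPort: 5432   # port the database container listens on (assumed)
```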

Make it Available with a NodePort Service

Another Service is made, a NodePort Service, that opens the app to the world via a URL and port, at something like http://<the-k8s-node-ip>:<the-nodePort-port>.

(Diagram: End-User -> NodePort Service (port 38080) -> Web-App Pod -> ClusterIP Service -> Database Pod, all inside one K8s Node.)
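A NodePort Service definition for this step might look like the sketch below (names and internal ports are assumptions; 38080 comes from the example):

```yaml
# Hypothetical NodePort Service exposing the web app on the node.
apiVersion: v1
kind: Service
metadata:
  name: web-app-nodeport
spec:
  type: NodePort
  selector:
    app: web-app         # matches the Web-App Pod's label (assumed)
  ports:
    - port: 8080         # cluster-internal Service port (assumed)
      targetPort: 8080   # container port (assumed)
      nodePort: 38080    # port opened on the node itself; note the default
                         # NodePort range is 30000-32767, so 38080 assumes an
                         # extended --service-node-port-range on the API server
```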

Address App Scaling Needs With ReplicaSets

Once traffic gets busy enough, build a ReplicaSet to scale the app Pods. The NodePort Service will split traffic between the replicated Pods.

(Diagram: End-User -> NodePort Service (port 38080) -> ReplicaSet of three Web-App Pods -> ClusterIP Service -> Database Pod, all inside one K8s Node.)
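A ReplicaSet definition for this step might look like the sketch below (names, labels, and the image are assumptions). Because the NodePort Service selects Pods by label, it automatically spreads traffic across all three replicas:

```yaml
# Hypothetical ReplicaSet keeping three copies of the Web-App Pod running.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-app-rs
spec:
  replicas: 3            # three Web-App Pods, as in the diagram
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app     # same label the NodePort Service selects on
    spec:
      containers:
        - name: web-app
          image: my-registry/web-app:1.0   # assumed image name
```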

Allow For a Friendly URL with DNS Config

Configure the DNS server in use to point my-demo-app.com at <the-k8s-node-ip>. Now, users can access the app at http://my-demo-app.com:<the-nodePort-port>.

(Diagram: End-User -> DNS Server (my-demo-app.com -> k8s-node-ip) -> NodePort Service (port 38080) -> ReplicaSet of three Web-App Pods -> ClusterIP Service -> Database Pod.)
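In e.g. BIND zone-file terms, the DNS change amounts to a single A record. The IP below is a documentation-range placeholder standing in for <the-k8s-node-ip>:

```
; Hypothetical zone-file entry for the demo domain.
my-demo-app.com.    IN  A   203.0.113.10   ; <the-k8s-node-ip>
```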

Remove the Need For the Port in the URL With a Proxy Server

Configure a proxy server to sit between the world and the cluster, so that users can reach the URL without the port. The proxy server forwards port 80 (open to the world) to the NodePort Service's port.

(Diagram: End-User -> Proxy-Server (forwards port 80 to 38080) -> NodePort Service (port 38080) -> ReplicaSet of three Web-App Pods -> ClusterIP Service -> Database Pod; DNS maps my-demo-app.com to the k8s-node-ip.)
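With nginx as the proxy, the forwarding rule might look like the sketch below (the server name and upstream IP are placeholder assumptions):

```nginx
# Hypothetical nginx proxy: port 80 open to the world, forwarded to the NodePort.
server {
    listen 80;                               # port open to the world
    server_name my-demo-app.com;

    location / {
        # forward to the NodePort Service on the K8s node
        proxy_pass http://203.0.113.10:38080;   # <the-k8s-node-ip> placeholder
    }
}
```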

Host K8s In A Cloud Platform

Take GCP as an example. A few things get updated:

  • the NodePort Service can be converted to the LoadBalancer Service type, which ...
    • keeps the NodePort behavior (the node stays reachable through a port)
    • makes K8s send a request to GCP to provision a network load-balancer for the Service; GCP deploys a load-balancer that can split traffic across multiple nodes (if the cluster ever grows to more than one)
    • the GCP load-balancer comes with an external IP
  • the DNS needs to be updated to point my-demo-app.com at the GCP load-balancer's IP

(Diagram, hosted on Google Cloud Platform: End-User -> DNS (my-demo-app.com -> GCP-provided LoadBalancer IP) -> GCP LoadBalancer (IP provided, port proxying included, forwards port 80 to 38080) -> LoadBalancer Service (port 38080) -> WebApp ReplicaSet of three Pods -> ClusterIP Service -> Database Pod.)
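Converting the NodePort Service is essentially a one-line change to the `type` field, sketched below (names and internal ports remain illustrative assumptions):

```yaml
# Hypothetical LoadBalancer Service replacing the NodePort Service.
apiVersion: v1
kind: Service
metadata:
  name: web-app-lb
spec:
  type: LoadBalancer     # asks the cloud provider (GCP here) to provision an external load-balancer
  selector:
    app: web-app
  ports:
    - port: 80           # external port on the provisioned load-balancer
      targetPort: 8080   # container port (assumed)
      nodePort: 38080    # still opened on the node, as in the diagram (assumes an extended NodePort range)
```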

The Product Grows And A New Application Gets Introduced

Following the mock product example, the org wants to host instructional videos about the products it sells. The effort involved becomes its own workload, with unique people, unique goals, and unique needs. This develops into its own:

  • url (demo-app.com/how-to)
  • application codebase (vid-app)
  • containers + pods + replica set
  • K8s Service Object
  • accessible port on the K8s Node
  • a new load-balancer on the GCP Platform

(Diagram, hosted on Google Cloud Platform: End-User -> DNS (my-demo-app.com -> GCP-provided LoadBalancer IPs) -> two paid GCP LoadBalancers, one per route, each with its own IP and port proxying: the root route forwards port 80 to the root LoadBalancer Service (port 38080) backing the Webapp ReplicaSet, and the how-to route forwards port 80 to the How-To LoadBalancer Service (port 38282) backing the Watch-App ReplicaSet; the ClusterIP Service still links to the Database Pod.)

Also: want more protection and security? Configure HTTPS. Many options there.

This is where Ingress Helps

  • cloud-native load-balancers cost $$, one per route
  • route-based load-balancers become redundant "config"
  • firewall config gets duplicated per service
  • HTTPS config gets duplicated per service

K8s offers Ingress to handle this with yet another object and definition file. Ingress provides a single externally accessible URL, configurable to route to different Services within a cluster, and allows SSL config as well. Ingress itself still needs to be exposed, published as a NodePort or a cloud-native load-balancer. Load-balancing, auth, SSL, and URL-based routing all happen in an ingress controller.
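A sketch of an Ingress definition that routes both workloads through one URL (the Service names and ports are assumptions carried over from the example; an ingress controller must already be installed, and `ingressClassName` is omitted for brevity):

```yaml
# Hypothetical Ingress routing two paths on one host to two Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app-ingress
spec:
  rules:
    - host: my-demo-app.com
      http:
        paths:
          - path: /how-to
            pathType: Prefix
            backend:
              service:
                name: vid-app-service    # assumed name of the video-app Service
                port:
                  number: 8080           # assumed Service port
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app-service    # assumed name of the web-app Service
                port:
                  number: 8080           # assumed Service port
```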