
An Example Of A Growing Tech Stack

An App

An app, maybe an online e-commerce platform.
Build it into a Docker image.
Deploy it in a K8s Pod, managed by a K8s Deployment object.
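A minimal sketch of what that Deployment definition file might look like (the name, image, and port are all placeholders, assuming a web app listening on 8080):

```yaml
# Hypothetical Deployment for the e-commerce web app.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: my-registry/web-app:1.0   # the Docker image built above
          ports:
            - containerPort: 8080
```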

A DB And A Service To Allow the Two to Talk

The app needs a database, so a database gets built.
A new Pod gets deployed on the same node.
The app Pod needs to talk to the database Pod, so a new K8s ClusterIP Service gets created to allow this pod-to-pod communication.

(Diagram: a K8s Node containing the Web-App Pod and the Database Pod, linked by a ClusterIP Service for in-node communication.)
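A sketch of that ClusterIP Service (the name, label, and port are assumptions, e.g. a Postgres-style database on 5432):

```yaml
# Hypothetical ClusterIP Service so the web-app Pod can reach the database Pod.
apiVersion: v1
kind: Service
metadata:
  name: db-service
spec:
  type: ClusterIP          # the default type; reachable only inside the cluster
  selector:
    app: database          # matches the label on the Database Pod
  ports:
    - port: 5432           # port the Service exposes
      targetPort: 5432     # port the database container listens on
```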

Make it Available with a NodePort Service

Another Service gets made, a NodePort Service, that opens the app to the world via a URL + port, at something like http://<node-ip>:<node-port>.

(Diagram: the End-User reaches www.k8s-node-ip:38080; the NodePort Service on port 38080 forwards into the Web-App Pod, which talks to the Database Pod via the ClusterIP Service.)
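A sketch of the NodePort Service (names and ports are placeholders; note that 38080, the port used throughout these notes, sits outside the default 30000-32767 NodePort range, so the cluster would need its `--service-node-port-range` widened):

```yaml
# Hypothetical NodePort Service exposing the web app on a port of the node.
apiVersion: v1
kind: Service
metadata:
  name: web-app-nodeport
spec:
  type: NodePort
  selector:
    app: web-app
  ports:
    - port: 8080           # Service port inside the cluster
      targetPort: 8080     # container port
      nodePort: 38080      # outside the default 30000-32767 range; see note above
```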

Address App Scaling Needs With ReplicaSets

Once traffic gets busy enough, build a ReplicaSet to scale the app Pods. The NodePort Service will split traffic between the replicated Pods.

(Diagram: same as above, but the Web-App Pod is now three replicas inside a ReplicaSet; the NodePort Service on port 38080 splits the End-User's traffic across them.)
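A sketch of the ReplicaSet (names and image are placeholders; in practice the Deployment from earlier would manage a ReplicaSet for you):

```yaml
# Hypothetical ReplicaSet scaling the web app to three Pods.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-app-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app       # same label the NodePort Service selects on,
                           # so traffic is split across all three Pods
    spec:
      containers:
        - name: web-app
          image: my-registry/web-app:1.0
```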

Allow For a Friendly URL with DNS Config

Configure the DNS server in use to point my-demo-app.com to <the-k8s-node-ip>. Now, users can access the app at http://my-demo-app.com:38080.

(Diagram: a DNS server maps demo-app to the k8s-node-ip, so the End-User now reaches www.demo-app.com:38080; the rest of the stack is unchanged.)

Remove the Need For the Port in the URL with A Proxy Server

Configure a proxy server to sit between the world and the K8s node, so that the world can access the URL without the port.
The proxy server will forward port 80 (open to the world) to the NodePort Service's port (38080).

(Diagram: a Proxy-Server forwards port 80 to 38080, so the End-User reaches www.demo-app.com with no port in the URL; DNS still maps demo-app to the k8s-node-ip.)

Host K8s In A Cloud Platform

Take GCP as an example. A few things get updated:

  • the NodePort Service can be converted to the LoadBalancer Service type, which ...
    • keeps the NodePort behavior (the app stays available through a port on the node)
    • makes K8s send a request to GCP to provision a network load balancer for the Service: GCP deploys a load balancer that splits traffic across multiple nodes (if the cluster ever grows to more than one)
    • comes with a GCP-provided external IP
  • The DNS record needs to be updated to point the app URL to the GCP-provided IP
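The conversion itself is small; a sketch of the updated Service (names and ports remain placeholders):

```yaml
# Hypothetical LoadBalancer Service; on GCP this asks the platform to
# provision an external network load balancer with its own IP.
apiVersion: v1
kind: Service
metadata:
  name: web-app-lb
spec:
  type: LoadBalancer     # replaces NodePort; the NodePort behavior is kept
  selector:
    app: web-app
  ports:
    - port: 80           # external port on the provisioned load balancer
      targetPort: 8080   # container port
```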

(Diagram: inside Google Cloud Platform, DNS maps demo-app to the GCP-provided LoadBalancer IP; the GCP LoadBalancer provides the IP, handles the port proxying, and forwards requests from port 80 to 38080, where the LoadBalancer Service feeds the WebApp ReplicaSet's three Pods; the Database Pod remains behind the ClusterIP Service.)

The Product Grows And A New Application Gets Introduced

Following the mock product example, the org wants to host instructional videos about the products it sells. The effort involved here becomes its own workload, with unique people, unique goals, and unique needs.
This develops into its own:

  • url (demo-app.com/how-to)
  • application codebase (vid-app)
  • containers + pods + replica set
  • K8s Service Object
  • accessible port on the K8s Node
  • a new load-balancer on the GCP Platform

(Diagram: GCP now runs two paid load balancers, one per route: the Root route's GCP LoadBalancer forwards port 80 to the Root LoadBalancer Service on 38080, in front of the Webapp-ReplicaSet's Pods; the How-To route's GCP LoadBalancer forwards to the How-To LoadBalancer Service on 38282, in front of the Watch-App-ReplicaSet's Pods; each GCP LoadBalancer provides its own IP and port proxying, and each costs $$; DNS maps demo-app to the GCP-provided IPs; the Database Pod remains behind the ClusterIP Service.)

Also:
Want more protection & security? Configure HTTPS.
There are many options there, each adding its own config.

This Is Where Ingress Helps

All of the above piles up:

  • cloud-native load balancers cost $$
  • route-based load balancers become redundant "config"
  • firewall config multiplies
  • HTTPS config multiplies

K8s offers Ingress to create yet another object with a definition file.
An Ingress provides a single externally accessible URL, configurable to route to different Services within a cluster.
Ingress allows SSL config as well.
Ingress still needs to be exposed: published via a NodePort or a cloud-native load balancer.
Load balancing, auth, SSL, and URL-based routing all happen in an Ingress controller.
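A sketch of an Ingress covering both routes of the mock product (Service names and the TLS secret are assumptions; a running ingress controller, e.g. nginx, is also assumed):

```yaml
# Hypothetical Ingress routing two paths of one host to two Services.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app-ingress
spec:
  tls:
    - hosts:
        - my-demo-app.com
      secretName: demo-app-tls       # assumed pre-created TLS secret for HTTPS
  rules:
    - host: my-demo-app.com
      http:
        paths:
          - path: /                  # the root e-commerce app
            pathType: Prefix
            backend:
              service:
                name: web-app-service
                port:
                  number: 8080
          - path: /how-to            # the instructional-video app
            pathType: Prefix
            backend:
              service:
                name: vid-app-service
                port:
                  number: 8080
```

One Ingress replaces the per-route cloud load balancers: a single exposed entry point handles the URL-based routing and SSL that previously cost a paid GCP LoadBalancer per route.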