- An Example Of A Growing Tech Stack
- The Product Grows And A new Application Gets Introduced
- This is where Ingress Helps
An Example Of A Growing Tech Stack
An App
An app, maybe an online e-commerce platform.
Build it into a Docker image.
Deploy it in a K8s Pod, managed by a K8s Deployment object.
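A minimal Deployment definition for this step might look like the following sketch (names, image, and port are illustrative placeholders):

```yaml
# demo-app-deployment.yaml -- names and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: demo-app:1.0    # the Docker image built above
          ports:
            - containerPort: 8080
```

Apply it with `kubectl apply -f demo-app-deployment.yaml`.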
A DB And A Service To Allow the Two to Talk
The app needs a db, so a db gets built.
A new Pod gets deployed on the same node.
The app Pod needs to talk to the db, so a new K8s Service gets created to allow this pod-to-pod communication.
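That Service could be sketched like this (names and ports are assumptions; the default ClusterIP type is enough for in-cluster traffic):

```yaml
# db-service.yaml -- hypothetical names and ports
apiVersion: v1
kind: Service
metadata:
  name: demo-db
spec:
  selector:
    app: demo-db          # matches the labels on the db Pod
  ports:
    - port: 3306          # port the app connects to (MySQL default, as an example)
      targetPort: 3306    # port the db container listens on
```

The app Pod can then reach the database at `demo-db:3306` from inside the cluster.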
Make it Available with a NodePort Service
Another service is made, a NodePort Service, that opens the app to the world via a URL + port, at something like http://&lt;the-k8s-node-ip&gt;:&lt;node-port&gt;
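A sketch of that NodePort Service (ports are placeholders; `nodePort` must fall in the default 30000-32767 range):

```yaml
# demo-app-nodeport.yaml -- illustrative ports
apiVersion: v1
kind: Service
metadata:
  name: demo-app-nodeport
spec:
  type: NodePort
  selector:
    app: demo-app
  ports:
    - port: 80            # service port inside the cluster
      targetPort: 8080    # container port
      nodePort: 30080     # port opened on every node
```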
Address App Scaling Needs With ReplicaSets
Once traffic gets busy enough, build a ReplicaSet to scale the app Pods. The NodePort Service will split traffic between the replicated Pods.
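A standalone ReplicaSet could be sketched like this (names/image assumed); note that if the app is already in a Deployment, which manages a ReplicaSet under the hood, `kubectl scale deployment demo-app --replicas=3` achieves the same thing:

```yaml
# demo-app-rs.yaml -- hypothetical standalone ReplicaSet
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: demo-app-rs
spec:
  replicas: 3              # number of app Pods to keep running
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: demo-app:1.0
```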
Allow For Friendly URL with DNS Config
Configure the DNS server in use to resolve my-demo-app.com to &lt;the-k8s-node-ip&gt;. Now, users can access the app at http://my-demo-app.com:&lt;node-port&gt;
Remove the Need For the Port in the URL with A Proxy Server
Configure a proxy server to sit between the world and the cluster, so that the world can access the URL without the port.
The proxy server will forward port 80 (open to the world) to the NodePort service port.
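As a sketch, an nginx proxy in front of the node could do this forwarding (the node IP and nodePort are placeholders):

```nginx
# /etc/nginx/conf.d/demo-app.conf -- IP and nodePort are placeholders
server {
    listen 80;                     # open to the world
    server_name my-demo-app.com;

    location / {
        proxy_pass http://<the-k8s-node-ip>:30080;  # the NodePort service port
        proxy_set_header Host $host;
    }
}
```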
Host K8s In A Cloud Platform
Take GCP as an example. A few things get updated:
- the NodePort service can be converted to a LoadBalancer service type, which still does everything the NodePort service did (makes the app available through a port on the node)
- K8s sends a request to GCP to provision a network load-balancer for the service: GCP deploys a load-balancer to split traffic across multiple nodes (if the cluster ever grows to more than one)
- the GCP load-balancer comes with an external IP
- the DNS needs to be updated to point my-demo-app.com at the GCP IP address
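The conversion itself is a one-field change to the service definition; a sketch:

```yaml
# demo-app-lb.yaml -- names and ports as assumed above
apiVersion: v1
kind: Service
metadata:
  name: demo-app
spec:
  type: LoadBalancer    # was: NodePort; prompts GCP to provision a load-balancer
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 8080
```

`kubectl get service demo-app` shows the external IP once GCP has provisioned the load-balancer.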
The Product Grows And A new Application Gets Introduced
Following the mock product example, the Org wants to host instructional videos about the products it sells. The efforts involved here become their own workload, with unique people, unique goals, and unique needs.
This develops into its own:
- url (my-demo-app.com/how-to)
- application codebase (vid-app)
- containers + pods + replica set
- K8s Service Object
- accessible port on the K8s Node
- a new load-balancer on the GCP Platform
Also: want more protection and security? Configure HTTPS. There are many options there.
This is where Ingress Helps
- cloud-native load-balancers cost $$, one per service
- route-based load-balancing becomes redundant "config", repeated per service
- so does firewall config
- and HTTPS config
K8s offers Ingress: yet another object with a definition file.
Ingress provides a single externally accessible URL, configurable to route to different services within a cluster.
Ingress allows SSL/TLS config as well.
Ingress still needs to be exposed - published as a NodePort or a cloud-native load-balancer.
Load-balancing, auth, SSL, and URL-based routing all happen in an Ingress controller.
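An Ingress routing the two apps could be sketched like this (service names, ports, and the TLS secret name are assumptions; this only takes effect once an Ingress controller is running in the cluster):

```yaml
# demo-ingress.yaml -- service names, ports, and secret name are placeholders
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-app-ingress
spec:
  tls:
    - hosts:
        - my-demo-app.com
      secretName: demo-app-tls   # TLS cert stored as a K8s Secret
  rules:
    - host: my-demo-app.com
      http:
        paths:
          - path: /how-to        # routes to the video app
            pathType: Prefix
            backend:
              service:
                name: vid-app
                port:
                  number: 80
          - path: /              # everything else routes to the e-commerce app
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80
```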