Kubernetes: End-to-End

A visual mental model for how Kubernetes takes desired state (YAML) and turns it into running Pods, networking, and self-healing workloads across nodes.

1. You: YAML + kubectl

You describe the cluster’s desired state. Kubernetes continuously reconciles it.

# Typical demo apply order
kubectl apply -f mongo-config.yaml
kubectl apply -f mongo-secret.yaml
kubectl apply -f mongo.yaml
kubectl apply -f webapp.yml

Key idea

K8s is declarative: you say what the state should be, and Kubernetes continuously works to make the actual state match.
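
To make that concrete, here is a minimal, generic Deployment sketch of "desired state" (illustrative only; not one of the demo manifests):

# Minimal desired-state sketch (illustrative, not one of the demo files)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3                  # declare: three Pods should exist
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: nginx:1.25      # illustrative image
# Delete a Pod and the controllers recreate it until actual state matches this spec again.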

2. Control Plane

The “brain” that stores state, decides placements, and drives reconciliation.

API Server: entry point for kubectl, UI, and automation (/apis, authz).
etcd: the source of truth for cluster state (snapshots).
Scheduler: selects a Node for each new Pod (resources, constraints; sketch below).
Controller Manager: reconciles desired vs actual (deployments, nodes).

Mental model

API Server is the front door. etcd is memory. Controllers + scheduler are the decision engines.
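
The "resources" and "constraints" the Scheduler weighs come straight from the Pod spec; a minimal sketch with illustrative values:

# Sketch: fields the Scheduler reads when picking a Node (illustrative values)
apiVersion: v1
kind: Pod
metadata:
  name: placement-demo
spec:
  nodeSelector:
    disktype: ssd              # constraint: only Nodes labeled disktype=ssd qualify
  containers:
  - name: app
    image: nginx:1.25
    resources:
      requests:
        cpu: "250m"            # Scheduler looks for a Node with this much unreserved CPU
        memory: "128Mi"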

3. Worker Nodes

Where workloads actually run.

kubelet (node agent): talks to the API server and runs the Pods assigned to its node
Container runtime: runs the containers inside those Pods
CNI networking: gives Pods IPs + routing

Key idea

Kubernetes makes many machines feel like one cluster computer.

Networking: Pod IPs vs Service IPs
Pod (ephemeral IP)

Pods can be recreated; their IPs change.

Treat Pods as cattle, not pets.

Service (stable virtual IP)

A stable endpoint + load balancer in front of a set of Pods (see the sketch after this section).

Traffic: service → matching Pods (labels).

Ingress (HTTP routing)

Routes external HTTP(S) to Services (often via an Ingress Controller).

Browser → Ingress → Service → Pods
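
A Service finds its Pods purely by label selector, and an Ingress routes HTTP to the Service by name. A sketch with illustrative names (not the demo objects):

# Sketch: Service selecting Pods by label (illustrative names)
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web                   # any Pod labeled app=web receives traffic
  ports:
  - port: 80                   # stable Service port
    targetPort: 8080           # container port on the matched Pods
---
# Sketch: Ingress routing external HTTP to that Service (requires an Ingress Controller)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - host: web.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-service
            port:
              number: 80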

Core objects you’ll use daily
Deployment: desired replicas + rolling updates for stateless apps (replicas, rollout).
ReplicaSet: ensures the right number of Pods exist (created by a Deployment).
Pod: the smallest unit; one or more containers sharing network/storage (one IP per Pod).
ConfigMap: non-secret configuration as env vars / files (e.g. DB_URL).
Secret: sensitive config (base64 in manifests; real secrecy is RBAC + storage), e.g. USER / PWD.
Volume: data persistence lives outside the Pod (PV/PVC mount; see the sketch below).
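
The Volume row's PV/PVC pattern in sketch form (names and size are illustrative): the claim exists independently of the Pod, so data survives Pod recreation.

# Sketch: PVC + mount; persistence lives outside the Pod (illustrative names/size)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: data-demo
spec:
  containers:
  - name: db
    image: mongo:5.0
    volumeMounts:
    - name: data
      mountPath: /data/db      # Mongo's data directory survives Pod recreation
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-pvc
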
Example stack (from your notes)

MongoDB + WebApp, configured via ConfigMap/Secret and exposed externally for local testing.

Manifests: mongo-config.yaml, mongo-secret.yaml, mongo.yaml, webapp.yml

This is the exact “MongoDB + webapp” Minikube demo layout.

Apply order (important)
kubectl apply -f mongo-config.yaml
kubectl apply -f mongo-secret.yaml
kubectl apply -f mongo.yaml
kubectl apply -f webapp.yml

Reason: Deployments reference ConfigMap/Secret; they must exist first.
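
In this layout, mongo.yaml typically bundles two objects: the MongoDB Deployment and its ClusterIP Service. A hedged sketch of that shape (object names and labels are assumptions; image and port come from the demo description):

# Sketch of mongo.yaml's likely shape (names assumed; the real demo file may differ)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongo
  template:
    metadata:
      labels:
        app: mongo
    spec:
      containers:
      - name: mongodb
        image: mongo:5.0
        ports:
        - containerPort: 27017
        # MONGO_INITDB_ROOT_* env vars come from the Secret (see the config flow section below)
---
apiVersion: v1
kind: Service
metadata:
  name: mongo-service          # the stable internal endpoint the webapp talks to
spec:
  selector:
    app: mongo
  ports:
  - port: 27017
    targetPort: 27017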

End-to-end traffic path (demo)
Browser (user): external request
→ NodePort Service (webapp-service:30100): exposes the webapp externally (dev/demo; sketch below)
→ WebApp Pod (nanajanashia/k8s-demo-app): reads env from ConfigMap/Secret
→ ClusterIP Service (mongo-service:27017): stable internal DB endpoint
→ MongoDB Pod (mongo:5.0): auth via Secret env vars
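
The NodePort hop in sketch form. The service name and port 30100 come from the demo; the service/container port of 3000 is inferred from the port-forward example further down and the Pod label is assumed:

# Sketch: NodePort Service exposing the webapp for dev/demo (label and targetPort assumed)
apiVersion: v1
kind: Service
metadata:
  name: webapp-service
spec:
  type: NodePort
  selector:
    app: webapp                # assumed label on the webapp Pods
  ports:
  - port: 3000                 # Service port (matches the port-forward example below)
    targetPort: 3000           # assumed container port
    nodePort: 30100            # external port on every Node (dev/demo only)
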
Config & credentials flow (2 steps)
1) Create first: ConfigMap + Secret

These must exist before Pods start, because Deployments reference them.

kubectl apply -f mongo-config.yaml
kubectl apply -f mongo-secret.yaml
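
A hedged sketch of what these two manifests plausibly contain (key names and encoded values are assumptions; DB_URL pointing at mongo-service follows from the traffic path above):

# Sketch: mongo-config.yaml (key name and value assumed)
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongo-config
data:
  DB_URL: mongo-service         # hostname of the ClusterIP Service in front of MongoDB
---
# Sketch: mongo-secret.yaml (keys/values assumed; values are base64-encoded)
apiVersion: v1
kind: Secret
metadata:
  name: mongo-secret
type: Opaque
data:
  USER_NAME: bW9uZ291c2Vy       # base64 of "mongouser" (illustrative)
  USER_PWD: bW9uZ29wYXNz        # base64 of "mongopass" (illustrative)
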
2) Inject into workloads
ConfigMap DB_URL → WebApp
Secret USER_NAME / USER_PWD → WebApp
Secret MONGO_INITDB_ROOT_* → MongoDB

Most commonly injected as environment variables via valueFrom.
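
A sketch of the valueFrom pattern inside the webapp Deployment's Pod template (ConfigMap/Secret object and key names follow the sketches above and are assumptions; the container port is inferred from the port-forward example below):

# Sketch: env injection via valueFrom (names/keys assumed)
      containers:
      - name: webapp
        image: nanajanashia/k8s-demo-app
        ports:
        - containerPort: 3000
        env:
        - name: USER_NAME                 # webapp's Mongo username, from the Secret
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: USER_NAME
        - name: USER_PWD                  # webapp's Mongo password, from the Secret
          valueFrom:
            secretKeyRef:
              name: mongo-secret
              key: USER_PWD
        - name: DB_URL                    # hostname of the mongo ClusterIP Service, from the ConfigMap
          valueFrom:
            configMapKeyRef:
              name: mongo-config
              key: DB_URL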

Local access (Minikube on macOS Docker driver)
Note: minikube ip + NodePort (e.g., http://<ip>:30100) often won’t work on macOS with the Docker driver — use one of the options below.
Option A: Minikube URL
minikube service webapp-service --url
Gives a reachable 127.0.0.1:<port> tunnel.
Option B: Port-forward
kubectl port-forward svc/webapp-service 3000:3000
Stable local port; great for debugging.
The daily loop (observe → change → observe)
Apply
kubectl apply -f ...

Create/update desired state.

Inspect
kubectl get all
kubectl describe ...

See what exists and why.

Logs
kubectl logs -f pod/...

Debug runtime behavior.

Access
port-forward
minikube service

Reach your Service from your laptop.