Lesson 2 of 28
Module 1 · Task — Stand up a cluster and deploy podinfo (via Claude)
The task
Create a local Kubernetes cluster with kind, deploy podinfo, expose it with a Service, and confirm you can reach it from your laptop — driving the work through Claude Code rather than copy-pasting YAML from a tutorial.
Acceptance test: `curl http://localhost:9898/healthz` returns HTTP 200 with a JSON body like `{"status":"OK"}`. You can re-run this any time to prove the deployment is still healthy.
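If you want the acceptance test as a re-runnable script rather than a bare curl, here is a minimal sketch. The `check_health` helper is our invention, not something the course provides, and it only assumes the healthy JSON body contains a `"status"` key:

```shell
# Hypothetical helper (our name, not the course's): decide pass/fail from an
# HTTP status code and a response body, so the check can be scripted and reused.
check_health() {
  status="$1"; body="$2"
  if [ "$status" = "200" ] && printf '%s' "$body" | grep -q '"status"'; then
    echo "PASS"
  else
    echo "FAIL"
  fi
}

# Wiring it to the real endpoint (assumes the cluster from this lesson is up):
# body=$(curl -s http://localhost:9898/healthz)
# code=$(curl -s -o /dev/null -w '%{http_code}' http://localhost:9898/healthz)
# check_health "$code" "$body"
```

The curl lines are commented out because they need the cluster running; the helper itself works anywhere.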
Ground rule (applies to every task lesson in this course): you drive through Claude, but you read every file Claude writes before you run it. If a field is opaque, ask Claude to explain it. Skipping the read step defeats the course — the skill you build in the next lesson is valuable only if you understand the domain underneath it. AI acceleration is never a license to disengage.
Setup
Open a fresh working directory and a new Claude Code session. Confirm your preflight:
- `docker ps` should not error (Docker daemon is running).
- `kubectl version --client` and `kind version` both print versions.
If either kubectl or kind is missing, ask Claude to give you the install command for your OS — and read the commands before running them. Installers that pipe shell scripts from the internet are a footgun; Claude's first suggestion is usually fine, but you verify.
Drive it through Claude
Work through the following prompts one at a time. After each, read what Claude produced, then ask it to explain anything opaque before you run anything.
Create the cluster config. Send Claude:

"Create a `kind` cluster config file `kind-config.yaml` that forwards host port 9898 to container port 30080 on the control-plane node. Then give me the exact `kind create cluster` command to use it, cluster name `devops-ready`."

Open `kind-config.yaml`. Ask Claude: why do we need `extraPortMappings`? What happens if I remove it? Only run the cluster-create command after you can explain the answer in your own words.

Create the namespace. Send:

"Create a namespace called `demo` with kubectl."

Read the command, run it.
Write the Deployment + Service. Send:

"Write a single YAML file `podinfo.yaml` with a Deployment (2 replicas, image `ghcr.io/stefanprodan/podinfo:6.6.2`, readiness + liveness probes hitting `/readyz` and `/healthz` on port 9898) and a NodePort Service exposing port 9898 via `nodePort: 30080`. Both in the `demo` namespace. Labels: `app: podinfo`."

Read the generated YAML top to bottom. Ask Claude: why does the Service `selector` have to match the Pod `labels`? What breaks if the labels drift? Then `kubectl apply -f podinfo.yaml`.

Wait and verify. Send:

"Watch the pods until both are Ready, then run the acceptance test and show me the output."

Claude will typically run `kubectl -n demo get pods -w`, wait, then `curl http://localhost:9898/healthz`.
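For reference, the two files this session produces will look something like the sketches below. These show a plausible shape only, assuming the prompts above — read the versions Claude actually generates rather than pasting these.

```yaml
# kind-config.yaml — sketch of the expected shape, not Claude's verbatim output
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # the Service's nodePort inside the kind node
        hostPort: 9898         # the port you curl from your laptop
        protocol: TCP
```

```yaml
# podinfo.yaml — sketch; note how the Service selector mirrors the Pod labels
apiVersion: apps/v1
kind: Deployment
metadata:
  name: podinfo
  namespace: demo
  labels:
    app: podinfo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: podinfo
  template:
    metadata:
      labels:
        app: podinfo
    spec:
      containers:
        - name: podinfo
          image: ghcr.io/stefanprodan/podinfo:6.6.2
          ports:
            - containerPort: 9898
          readinessProbe:
            httpGet:
              path: /readyz
              port: 9898
          livenessProbe:
            httpGet:
              path: /healthz
              port: 9898
---
apiVersion: v1
kind: Service
metadata:
  name: podinfo
  namespace: demo
spec:
  type: NodePort
  selector:
    app: podinfo            # must match the Pod template labels above
  ports:
    - port: 9898
      targetPort: 9898      # the container port traffic is forwarded to
      nodePort: 30080       # matches containerPort in kind-config.yaml
```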
Break it on purpose
Before moving on, deliberately break the setup so you see the shape of a realistic failure. This is the material for the skill's boundary statement in the next lesson.
- Edit `podinfo.yaml`: change the Service `targetPort` from `9898` to `9090` (a port the container isn't listening on). Save.
- Predict in writing (one sentence, literally write it down): what will `curl http://localhost:9898/healthz` return?
- Run `kubectl apply -f podinfo.yaml`, then `curl http://localhost:9898/healthz`. Compare to your prediction.
- Run `kubectl -n demo describe svc podinfo` and `kubectl -n demo get endpoints podinfo`. Notice what the endpoints look like when the selector matches but the port is wrong — this failure is silent at the Service level and only surfaces when traffic hits it.
- Revert (`targetPort: 9898`), reapply, confirm `curl` works again.
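For reference, the deliberate break is this one-line change in the Service's `ports` block (a fragment; everything else in `podinfo.yaml` stays as generated):

```yaml
ports:
  - port: 9898        # the port the Service listens on (unchanged)
    targetPort: 9090  # broken on purpose: the container listens on 9898
    nodePort: 30080   # unchanged
```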
The point isn't the bug — it's recognising the failure mode. The next lesson's skill will be expected to catch this.
Acceptance test
```
curl http://localhost:9898/healthz
```

Returns `{"status":"OK"}` with HTTP 200.
If not, ask Claude to walk through: `kubectl -n demo get pods`, `kubectl -n demo describe pod -l app=podinfo`, `kubectl -n demo get svc podinfo`, `kubectl -n demo get endpoints podinfo`. Module 6 covers this debugging loop in depth.
A note on identity — we're cutting a corner here
Your Deployment is running as the namespace's default ServiceAccount, which has no RBAC bindings and therefore no API access. That's fine for today — podinfo doesn't talk to the Kubernetes API — but in real cloud-platform work the pattern is:
- Create a dedicated ServiceAccount per workload.
- Bind it to a narrowly-scoped Role (or ClusterRole) via a RoleBinding.
- Never use `default` for anything that reads or writes cluster state.
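As a sketch of that pattern, assuming a hypothetical workload that needs to read ConfigMaps — the names `podinfo-sa` and `configmap-reader`, and the permission itself, are illustrative; this lesson's podinfo needs none of it:

```yaml
# Hypothetical least-privilege identity for a workload that reads ConfigMaps.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: podinfo-sa
  namespace: demo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: configmap-reader
  namespace: demo
rules:
  - apiGroups: [""]            # "" is the core API group; ConfigMaps live here
    resources: ["configmaps"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: podinfo-reads-configmaps
  namespace: demo
subjects:
  - kind: ServiceAccount
    name: podinfo-sa
    namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: configmap-reader
```

The Deployment would then set `serviceAccountName: podinfo-sa` in its Pod spec instead of silently inheriting `default`.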
Module 7 covers this in depth — including IAM on GCP/AWS and Workload Identity Federation for pods that need cloud credentials. For now, notice that we sidestepped the entire identity question, and make a mental note: any Deployment that calls `kubectl`, talks to a cloud API, or touches secrets needs a real ServiceAccount + least-privilege binding, not `default`.
Tear down (optional)
`kind delete cluster --name devops-ready` when you're done for the day. You'll reuse this cluster in modules 2–5, so leaving it running is also fine.
What to keep for the next lesson
Keep the Claude transcript open. Also keep your Break it on purpose notes — the prediction, what actually happened, which command surfaced the mismatch. In the next lesson you'll codify this session as `.claude/skills/kind-cluster-bootstrap/SKILL.md` and explicitly tell the skill about the `targetPort` mismatch, so a fresh Claude session runs the skill and fails loudly on that class of bug instead of deploying a silently broken Service.