Lesson 6 of 28
Module 2 · Task — Write a Helm chart for podinfo (via Claude)
The task
Write a Helm chart that deploys podinfo (same image as module 1), parameterising the image tag, replica count, and service port. Install it into your kind cluster. Prove that overriding values at the CLI actually changes the running resources. Drive the work through Claude Code; read the templates before you apply them.
Acceptance test: `helm upgrade podinfo ./podinfo-chart -n demo --set replicaCount=3 --set image.tag=6.7.0 --wait` completes, then `kubectl -n demo get pods -l app.kubernetes.io/name=podinfo-chart` shows 3 pods running image tag `6.7.0`.
Same ground rule as module 1: read every template Claude writes before you helm install. Templates are leverage — a bug in a template is a bug in every deployment.
Setup
From the same working directory as module 1 (or a sibling; it doesn't matter, Helm only needs the chart dir). Preflight:

- The `kind` cluster from module 1 is up, or recreate it with the `kind create cluster ...` command Claude wrote for you.
- `helm version` prints a version. If not, ask Claude for the install command for your OS.
- The `demo` namespace exists.
Drive it through Claude
1. Scaffold the chart. Send Claude:

   "Scaffold a Helm chart called `podinfo-chart` using `helm create`. Keep the default layout. Show me the directory tree."

   Inspect the tree. Most of the scaffold is useful; the autoscaling and ingress templates you won't touch today.
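For orientation, the default `helm create` scaffold looks roughly like this (a sketch; the exact file list varies slightly between Helm versions):

```
podinfo-chart/
├── .helmignore
├── Chart.yaml
├── values.yaml
├── charts/
└── templates/
    ├── NOTES.txt
    ├── _helpers.tpl
    ├── deployment.yaml
    ├── hpa.yaml
    ├── ingress.yaml
    ├── service.yaml
    ├── serviceaccount.yaml
    └── tests/
        └── test-connection.yaml
```

The files you'll actually edit in this lesson are `values.yaml`, `templates/service.yaml`, and `templates/deployment.yaml`.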
2. Adjust `values.yaml`. Send:

   "Edit `podinfo-chart/values.yaml` so the `image` block uses `ghcr.io/stefanprodan/podinfo` tag `6.6.2`, `replicaCount: 1`, and the `service` block is `type: NodePort`, `port: 9898`, with a new field `nodePort: 30080`. Set `ingress.enabled: false` and `autoscaling.enabled: false`."

   Open `values.yaml`. Ask Claude: why is a NodePort Service appropriate for `kind` but never for production? What would I use instead on GKE or EKS? You don't need to implement the alternative; just understand the trade-off before you ship the chart.

3. Wire `nodePort` into the Service template. Send:

   "In `podinfo-chart/templates/service.yaml`, add `nodePort: {{ .Values.service.nodePort }}` and set `targetPort: 9898` (numeric, not named) on the single port entry. Keep `port: {{ .Values.service.port }}` as-is."

   Read `templates/service.yaml`. Ask: why is `targetPort` hardcoded to `9898` instead of parameterised? What goes wrong if `targetPort` and the container's `containerPort` drift? (Remember the break-on-purpose failure from module 1.)

4. Wire the container port + probes. Send:

   "In `podinfo-chart/templates/deployment.yaml`, set `containerPort: 9898` and change `livenessProbe` to hit `/healthz` on port `http`, `readinessProbe` to hit `/readyz` on port `http`."

5. Clear the raw-YAML podinfo from module 1. Send:

   "Delete the raw Deployment and Service named `podinfo` in the `demo` namespace so the Helm install doesn't collide."

   Claude will run `kubectl delete -n demo deployment podinfo` and `kubectl delete -n demo service podinfo`. Confirm with `kubectl -n demo get all`.

6. Install and verify. Send:

   "Run `helm install podinfo ./podinfo-chart -n demo --wait`, then curl `http://localhost:9898/healthz` to confirm."
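After the values and template edits above, the pieces you changed should look roughly like this (a sketch showing only the blocks you touched; labels, selectors, and helpers stay as `helm create` scaffolded them):

```yaml
# values.yaml (only the blocks you changed)
replicaCount: 1
image:
  repository: ghcr.io/stefanprodan/podinfo
  tag: "6.6.2"
service:
  type: NodePort
  port: 9898
  nodePort: 30080
ingress:
  enabled: false
autoscaling:
  enabled: false
---
# templates/service.yaml (ports section only)
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: 9898                          # numeric on purpose (see the service-template step)
      nodePort: {{ .Values.service.nodePort }}
      protocol: TCP
      name: http
```

`helm template podinfo ./podinfo-chart` renders the chart locally, so you can check the substitutions before anything touches the cluster.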
Break it on purpose
Helm's power is that one bad override ships to every environment. Probe that now.
- Upgrade with a mismatched `nodePort`: `helm upgrade podinfo ./podinfo-chart -n demo --set service.nodePort=31080 --wait`
- Predict: what happens when you `curl http://localhost:9898/healthz` now? Write the answer in one sentence.
- Run the curl. Compare.
- Inspect what changed: `kubectl -n demo get svc podinfo-chart -o yaml | grep nodePort`. The Service is happily listening on 31080; the `kind` cluster config only forwards host port 9898 to node port 30080, so traffic to localhost:9898 goes nowhere and the failure is silent until you trace it.
- Revert: `helm upgrade podinfo ./podinfo-chart -n demo --set service.nodePort=30080 --wait`.
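The silence comes from the host side of the mapping. A `kind` config like the one module 1 used pins exactly one host-to-node port mapping; a sketch of what such a config looks like (your module 1 file may differ in detail):

```yaml
# kind cluster config (sketch): only this one mapping exists,
# so a Service on any other nodePort is unreachable from localhost.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
    extraPortMappings:
      - containerPort: 30080   # the Service's nodePort inside the cluster
        hostPort: 9898         # what curl http://localhost:9898 actually hits
```

Nothing in Helm knows about this file, which is why the override lints, renders, and deploys cleanly while breaking the path traffic takes.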
The failure mode you're internalising: a value override can break the deployment without breaking the chart. Charts lint fine. Templates render fine. Pods run fine. Traffic still doesn't land. The skill you build in the next lesson will need to know about this class of failure.
Acceptance test
```shell
helm upgrade podinfo ./podinfo-chart -n demo \
  --set replicaCount=3 \
  --set image.tag=6.7.0 \
  --wait

kubectl -n demo get pods -l app.kubernetes.io/name=podinfo-chart
```
Expect 3 pods, all Running. Confirm the image tag:

```shell
kubectl -n demo get deployment podinfo-chart -o jsonpath='{.spec.template.spec.containers[0].image}'
```

That should print `ghcr.io/stefanprodan/podinfo:6.7.0`. If you see something else, the upgrade didn't take.
Troubleshooting
- `Error: UPGRADE FAILED: another operation ... is in progress` — a previous helm command crashed mid-apply. Ask Claude to run `helm rollback podinfo 0 -n demo` or `helm uninstall podinfo -n demo` and retry.
- Pods Running but `curl` fails — `helm get manifest podinfo -n demo` shows the rendered YAML. Check `nodePort` and `targetPort`. If `nodePort` is blank, the template isn't picking up `.Values.service.nodePort`; re-inspect `service.yaml`.
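A quick way to eyeball the rendered ports is to grep the manifest for the three fields that must agree. In practice you'd pipe in `helm get manifest podinfo -n demo`; the here-doc below is a stand-in sample so the sketch is self-contained:

```shell
# Grep a rendered manifest for the port fields that must stay in sync.
# (The sample file stands in for `helm get manifest podinfo -n demo`.)
cat <<'EOF' > /tmp/rendered.yaml
apiVersion: v1
kind: Service
metadata:
  name: podinfo-chart
spec:
  type: NodePort
  ports:
    - port: 9898
      targetPort: 9898
      nodePort: 30080
EOF

# Prints each matching line with its line number.
grep -nE 'port:|targetPort:|nodePort:' /tmp/rendered.yaml
```

If `nodePort` is missing from the output, the template never rendered it, which is exactly the blank-value failure described above.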
What to keep for the next lesson
Keep the chart directory, the Claude transcript, and your "Break it on purpose" notes on the nodePort mismatch. In the next lesson you'll codify `.claude/skills/helm-chart-scaffold/` and explicitly teach it that overrides can silently break a deploy even when every command succeeds.