❯phase 00·before you build
for anyone whose job got eaten by ai. what an llm is, how to talk to one, and what this course is going to ask of you.
+ch 00·before you build
if you're here because ai took your job, this chapter is the one. it names the situation honestly, installs the mental model the rest of the course assumes, and gets you typing your first line of code without pretending the last six months didn't happen.
- 01·the situation, named honestly
- 02·what an llm actually is
- 03·how to talk to one
- 04·where this course goes (and where you fit)
❯phase 01·foundations
variables, functions, lists, dicts, loops, conditionals, tracebacks, mutation
+ch 01·variables
when ai writes python, the first thing it does is name things. learn to read those names on sight, and to write a few yourself.
- 01·naming things you'll point ai at
- 02·the four types you'll see daily
- 03·print, repr, and the f-string
+ch 02·functions
ai writes functions constantly, and silently forgets the `return` line about a third of the time. learn to spot the missing return on sight.
- 01·functions and the missing return
- 02·arguments, defaults, and the silent wrong-order bug
- 03·closures and the @ symbol — what ai is doing when it stacks decorators
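a minimal sketch of the missing-return bug this chapter teaches you to spot (the function names are made up for illustration):

```python
# the missing-return bug: the value is computed, then silently dropped,
# so the caller gets None instead of the total.

def total_broken(prices):
    total = sum(prices)        # computed... and never returned

def total_fixed(prices):
    return sum(prices)         # the one-line fix

print(total_broken([1, 2, 3]))  # None, the classic silent failure
print(total_fixed([1, 2, 3]))   # 6
```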
+ch 03·lists and dicts
every json response you've ever copied out of chatgpt or a rest api is some mix of two things: lists and dicts. read them on sight.
- 01·lists, dicts, and the shape of an api response
- 02·the one-liner python writes when it's showing off
- 03·nested dicts and lists — digging through the json ai just dumped
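a hedged sketch of the digging pattern, using a made-up payload shaped like a typical api response:

```python
# dicts nested in lists nested in dicts: the shape of most api responses.
response = {
    "model": "example-model",
    "usage": {"input_tokens": 12, "output_tokens": 48},
    "content": [{"type": "text", "text": "hello"}],
}

# each bracket peels one layer; square brackets crash on a missing key
text = response["content"][0]["text"]

# .get() with a default survives a missing key instead of crashing
tokens = response.get("usage", {}).get("output_tokens", 0)
```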
+ch 04·loops
ai writes a loop every time you say *for each*. half the time it's wrong by one. read it before you trust it.
- 01·loops — read it, predict it, then trust it
- 02·while, break, and the infinite loop ai ships
- 03·enumerate and zip — the loops ai writes when it's not lazy
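the off-by-one and the idiomatic forms, sketched with an illustrative list:

```python
items = ["a", "b", "c"]

# the off-by-one ai ships: range(1, len(items)) quietly skips the first item
skipped = [items[i] for i in range(1, len(items))]   # ["b", "c"], "a" is gone

# enumerate when you need the index, zip to walk two lists in lockstep
indexed = [(i, x) for i, x in enumerate(items)]
pairs = list(zip(items, [1, 2, 3]))
```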
+ch 05·conditionals
`if` looks simple. the traps inside it — empty values, `==` vs `is`, the difference between `0` and `None` — are where ai quietly ships wrong code.
- 01·conditionals and the truthiness traps
- 02·elif chains and the match statement cursor reaches for
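the `0` vs `None` trap in four lines (function names invented for the sketch):

```python
# the truthiness trap: 0, "", [], and None are all falsy, so `if count:`
# treats a legitimate zero like missing data.
def describe_broken(count):
    if count:
        return f"{count} items"
    return "no data"             # 0 wrongly lands here

def describe_fixed(count):
    if count is not None:        # only None means "missing"
        return f"{count} items"
    return "no data"
```

describe_broken(0) answers "no data"; describe_fixed(0) correctly answers "0 items".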
+ch 06·tracebacks
when python crashes, it tells you exactly what happened and where. most non-engineers panic at the wall of text. you're going to learn to read it.
- 01·tracebacks — read the wall of text
- 02·diagnose any crash in one read
- 03·print and breakpoint — finding the bad value before ai does
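a small sketch of the read-it-bottom-up rule, capturing a traceback instead of crashing (the `lookup` helper is made up):

```python
# reading a traceback bottom-up: the last line names the error and its
# message, the frames above it say where it happened.
import traceback

def lookup(d, key):
    return d[key]                # the KeyError is raised here

try:
    lookup({"a": 1}, "b")
except KeyError:
    tb = traceback.format_exc()

last_line = tb.strip().splitlines()[-1]
print(last_line)                 # KeyError: 'b'
```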
+ch 07·mutation
when a list inside a function changes the list outside the function, that's mutation. ai does this constantly without flagging it, and it's the bug class that takes the longest to find.
- 01·mutation and the action-at-a-distance bug
- 02·shallow copy, deep copy, and the nested-dict bug
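the action-at-a-distance bug in miniature (illustrative names, not from the course):

```python
# the function mutates the list the caller still holds.
def add_tax_broken(prices):
    for i, p in enumerate(prices):
        prices[i] = p * 1.2      # rewrites the caller's list in place
    return prices

def add_tax_fixed(prices):
    return [p * 1.2 for p in prices]   # new list, caller's copy untouched

original = [10.0]
add_tax_broken(original)
print(original)                  # [12.0], changed behind your back

original = [10.0]
taxed = add_tax_fixed(original)
print(original)                  # [10.0], still intact
```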
❯phase 02·real python
modules, error handling, files & i/o, classes, http
+ch 08·modules, imports, and why your venv hates you
half of `pip install x` failures are environment confusion, not python bugs. learn what `import` actually does, what a virtual env is for, and why your script can't find the package you just installed.
- 01·imports, pip, and the venv that can't see your package
- 02·aliases, multi-imports, and the `np.` you'll see everywhere
+ch 09·error handling
ai loves a happy path. the moment a file isn't there or an api blinks, the script blows up. `try/except` is how you keep the program alive long enough to log what went wrong.
- 01·try/except — catching what ai didn't
- 02·catching the right error — not 'anything that goes wrong'
- 03·raising errors — making your code fail loudly on purpose
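the narrow-except pattern in one function (filename invented for the sketch):

```python
# catching the right error: a narrow except handles the failure you expect
# (a missing file) while real bugs still crash loudly.
def read_config(path):
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:    # not a bare `except:` that swallows everything
        return ""                # sensible fallback for the expected failure
```

read_config("no-such-file.txt") returns "" instead of killing the program; any other error still surfaces.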
+ch 10·files and i/o
reading a csv, writing a log, parsing a json dump. the first thing ai does in any real project is touch a file. learn the few patterns it reaches for and the one it forgets.
- 01·open() — and the with-block ai keeps forgetting
- 02·pathlib — the file api ai should reach for first
- 03·csv and jsonl — the two formats ai moves data in
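the three patterns together, sketched with a made-up scratch filename:

```python
# the with-block ai keeps forgetting closes the file even if a write raises;
# pathlib keeps the path handling readable; jsonl is one json object per line.
import json
from pathlib import Path

path = Path("scratch_rows.jsonl")
rows = [{"id": 1}, {"id": 2}]

with path.open("w") as f:                 # closed automatically on exit
    for row in rows:
        f.write(json.dumps(row) + "\n")

loaded = [json.loads(line) for line in path.read_text().splitlines()]
path.unlink()                             # clean up the scratch file
```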
+ch 11·classes
ai ships classes constantly: sqlalchemy models, fastapi schemas, custom exceptions. you don't need to design them. you need to read one without flinching.
- 01·class, __init__, self — the three keywords ai uses every time
- 02·instance vs class attributes — and the bug ai ships every time
- 03·@dataclass — the class shape ai ships in every modern python project
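the @dataclass shape in five lines (the `User` model is a made-up example):

```python
# @dataclass writes __init__ and __repr__ for you, which is why ai reaches
# for it in most modern python projects.
from dataclasses import dataclass

@dataclass
class User:
    name: str
    active: bool = True          # per-field default, generated into __init__

u = User("ada")
print(u)                         # User(name='ada', active=True)
```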
+ch 12·http and apis
every ai script eventually calls an api. learn the shape of `httpx.get`, what a status code means, and how to pull a value out of the json that comes back.
- 01·get, status, json — the call ai makes 100 times a day
- 02·status codes and error handling — what ai's api calls do when the wire blinks
- 03·parsing nested api responses — without crashing on a missing key
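the missing-key-safe parsing pattern, sketched offline on a made-up payload shape:

```python
# parsing a nested response without crashing: chain .get() with defaults so a
# missing key degrades to None instead of raising a KeyError.
payload = {"data": {"items": [{"price": {"amount": 9.99}}]}}

def first_amount(resp):
    items = resp.get("data", {}).get("items", [])
    if not items:
        return None
    return items[0].get("price", {}).get("amount")

print(first_amount(payload))     # 9.99
print(first_amount({}))          # None, not a crash
```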
❯phase 03·llm apis
calling models, structured output, mcp, agent loops
+ch 13·llm apis
every ai feature you ship eventually calls a model api. learn the messages pattern, how to read the response, and the four lines ai writes every single time.
- 01·messages, roles, and the response — the call ai ships every time
- 02·reading the response — content blocks, stop_reason, usage
- 03·the model picker — when sonnet is wrong and haiku is right
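the messages-in, blocks-out shape, sketched offline with a canned response so it runs without an api key (field values are illustrative):

```python
# the messages pattern shared across providers: role/content dicts go in,
# a response with content blocks, stop_reason, and usage comes out.
messages = [
    {"role": "user", "content": "summarize this in one line"},
]

response = {
    "content": [{"type": "text", "text": "a one-line summary."}],
    "stop_reason": "end_turn",
    "usage": {"input_tokens": 11, "output_tokens": 7},
}

reply = response["content"][0]["text"]
done = response["stop_reason"] == "end_turn"
total = response["usage"]["input_tokens"] + response["usage"]["output_tokens"]
```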
+ch 14·structured output
free-form text breaks every pipeline. learn the schema-first pattern ai uses to get reliable json back, validate it with pydantic, and catch the model's lies before they hit prod.
- 01·schemas, pydantic, and validation — making the model return real data
- 02·why schemas eat prompts — the boundary contract pattern
+ch 15·mcp
mcp is the new standard for plugging tools and data sources into ai agents. learn what an mcp server actually is, how claude code lists tools, and why this is replacing one-off integrations everywhere.
- 01·servers, tools, and the protocol — how ai agents plug into your stack
- 02·writing a tiny mcp server — registries, dispatch, and the response shape
- 03·why mcp won — the protocol wars of 2024-25
+ch 16·agent loops
an agent isn't magic. it's a while loop. learn the actual cycle claude code, cursor, and every other agent uses: model returns tool_use, you run the tool, you send the result back, repeat until end_turn.
- 01·stop_reason, tool_use, tool_result — the loop every agent runs
- 02·multi-step tools — when one tool isn't enough
- 03·routing — pick the path before doing the work
- 04·evaluator-optimizer — write a draft, let a judge critique it, revise
- 05·why every framework is the same thirty lines — and what that means for buy-vs-build
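the cycle the chapter describes, as a runnable sketch: `fake_llm` is a stand-in for the real model api, and the tool registry holds one made-up tool.

```python
# the whole agent loop, minus the real model: run tools until end_turn.
def fake_llm(history):
    if not any(m["role"] == "tool" for m in history):
        return {"stop_reason": "tool_use", "tool": "add", "args": {"a": 2, "b": 3}}
    return {"stop_reason": "end_turn", "text": "the answer is 5"}

TOOLS = {"add": lambda a, b: a + b}

history = [{"role": "user", "content": "what is 2 + 3?"}]
while True:
    reply = fake_llm(history)
    if reply["stop_reason"] == "end_turn":
        answer = reply["text"]
        break
    result = TOOLS[reply["tool"]](**reply["args"])            # run the tool
    history.append({"role": "tool", "content": str(result)})  # send result back
```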
❯phase 04·shipping discipline
git, secrets, prompting, traces, evals, retrieval, tradeoffs
+ch 17·git and github cli
cursor and claude code commit on your behalf. reading those commits — and undoing the bad ones — is your job. learn the four-state model, the commands you'll run every day, and what `gh` does that `git` can't.
- 01·working tree, staging, commit — the model ai breaks first
- 02·three git disasters ai shipped — and what got rotated
+ch 18·secrets
ai ships keys to github all the time. learn the .env pattern, why os.getenv is non-negotiable, what to do when a key leaks, and the gitignore lines you need on day one.
- 01·.env, os.getenv, and the leak recovery you'll do at least once
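the os.getenv pattern in four lines; the variable is seeded in-code so the sketch runs, where normally it comes from your .env and the key value is a placeholder:

```python
# the secret lives in the environment, never in the source file.
import os

os.environ.setdefault("API_KEY", "sk-demo-not-a-real-key")

key = os.getenv("API_KEY")
if key is None:                  # fail fast instead of shipping None into an sdk
    raise RuntimeError("API_KEY not set; check your .env")
```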
+ch 19·prompting cursor and claude code effectively
the difference between a one-shot ai session and a four-hour debugging spiral is almost always the first prompt. learn the structure that gets you usable code.
- 01·the six-knob prompt for shipping code
- 02·few-shot and reasoning — examples that work, and the cot trap on reasoning models
- 03·claude.md, agents.md, .cursor/rules — the system prompt your agent reads every session
- 04·what aged: the 2023 prompting tricks that became 2026 traps
+ch 20·reading agent traces and telemetry
when an agent fails, the trace tells you exactly where. learn to read tool calls, tool results, and stop reasons — the json breadcrumbs every agent leaves behind.
- 01·what an agent leaves behind
- 02·trace-driven debugging — turn a 4-hour panic into a 20-minute investigation
+ch 21·eval-driven ai development
if you can't test it, you can't ship it. learn the simple-but-strict eval patterns that separate ai features that work from ones that just feel like they do.
- 01·assertions on ai output, not vibes
- 02·llm-as-judge — when the judge is another model
- 03·how evals went from research curiosity to the only thing that ships — a five-year history
+ch 22·context and retrieval
rag without the overengineering. chunking, embeddings, vector search, and the small set of patterns that make a model answer from your data instead of its training set.
- 01·chunking that respects structure — don't shred your own documents
- 02·embedding that fits the budget — pick a model that matches your corpus
- 03·retrieval that finds the right thing — top-k, thresholds, and the rerank step everyone skips
- 04·rag vs long context vs fine-tune — the decision that's killed more ai startups than any model swap
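a minimal sketch of structure-respecting chunking: split on paragraph boundaries first, then pack paragraphs into a size budget instead of cutting mid-sentence (the budget and sample text are invented):

```python
def chunk(text, max_chars=80):
    chunks, current = [], ""
    for para in text.split("\n\n"):           # paragraph boundaries first
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)            # budget hit, start a new chunk
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks

doc = "first paragraph here.\n\nsecond one.\n\n" + "x" * 70
pieces = chunk(doc)                           # 2 chunks, no paragraph shredded
```

a real chunker would also split a single paragraph that exceeds the budget; this sketch keeps paragraphs whole to show the boundary-first idea.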
+ch 23·production tradeoffs
the three numbers every shipped llm feature lives or dies by. token math, caching, streaming, batching, and the small set of decisions that move the product more than a model swap ever will.
- 01·prompt caching correctly — the variable input goes last
- 02·read the token bill — what your llm feature actually costs
- 03·the 2023-2026 model price war — and why your cost model is already obsolete
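the token math in ten lines; the per-token prices below are placeholders, not any provider's current rates, so plug in your own:

```python
# back-of-envelope cost: tokens in and out, priced per million.
IN_PER_MTOK = 3.00       # $ per million input tokens (assumed)
OUT_PER_MTOK = 15.00     # $ per million output tokens (assumed)

def call_cost(input_tokens, output_tokens):
    return (input_tokens * IN_PER_MTOK + output_tokens * OUT_PER_MTOK) / 1_000_000

# 10k requests a day, 2k-token prompt, 300-token answer
daily = 10_000 * call_cost(2_000, 300)
print(f"${daily:.2f}/day")       # at these assumed rates, the input side dominates
```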
+ch 24·debugging broken ai output
when the model lies to your customer. the methodology for narrowing down what went wrong, the four most-common breakage classes, and the discipline that separates 'we shipped a fix' from 'we blamed the model and shrugged'.
- 01·read the trace, not the chat — find the broken turn before reading the user's complaint
- 02·the four breakage classes — sort any llm failure before you touch the prompt
- 03·five postmortems — three public, two composite — and the one fix that would have caught all of them
❯phase 05·capstone
ship a working cli agent in 12 steps. ~100 lines of python.
+ch 25·capstone
wire it all together. context, retrieval, the prompt, the call, the trace, the eval, the cost. less a tutorial demo, more the smallest end-to-end llm feature you could ship to a real user.
- 01·why most beginner agents die in production — and how to pick one that ships
- 02·wire it all together — a cli agent in 12 steps
- 03·wire the real model — swap fake_llm for the anthropic sdk shape
- 04·validate tool inputs — when the model invents arguments
- 05·add evals and traces — measure the agent, don't trust it
- 06·wire an mcp tool — load tools from a server, not a registry
❯phase 06·applied builds
agent harnesses, ai image gen, ai video gen, programmatic design, harness engineering
+ch 26·agent harnesses
claude code, cursor, aider, codex cli — they're all the same four layers wrapped around the same model api. learn what those layers are, what each adds, and what you'd build yourself if you had to.
- 01·what a harness is — the four layers under every coding agent
- 02·architecting an ai-native workflow — a 5-step playbook in code
- 03·five industries walked through — what ai-native looks like in the wild
+ch 27·ai image generation
the 2026 image model landscape, the prompts that work, and the pipeline that turns one good idea into a hundred ready-to-ship images. nano-banana, flux, midjourney, ideogram — when each wins and what they cost.
- 01·the image model landscape — six families and what each is for
- 02·prompting for real output — composition, control, and the eight knobs that matter
- 03·the image pipeline — turning one idea into a hundred shipped assets
+ch 28·ai video generation
video is the hardest content type to generate, the most expensive, and the most strategically interesting. learn the 2026 model lineup, the camera-control patterns that separate slop from craft, and the cost math that decides whether your idea is viable.
- 01·the 2026 video model lineup — what each one is actually good at
- 02·camera control and motion — what separates slop from craft
- 03·the cost math — when ai video is viable and when it bankrupts you
+ch 29·programmatic design
ai generates raw assets; code stitches them into something shippable. hyperframes, remotion, claude design — when each tool wins, how they combine, and the data-driven workflows that turn one template into a hundred videos.
- 01·why programmatic video — when ai gen alone isn't enough
- 02·hyperframes and remotion — the two tools that own this space
- 03·the ai-native design pipeline — concept to shipped mp4 in one workflow
+ch 30·harness engineering
every coding agent is a model plus a harness. the model is bought; the harness is engineered. learn the craft: how to ratchet rules from failures, fight context rot, design long-horizon loops, wire hooks as enforcement, and read the haas shift that's reshaping what you build vs buy.
- 01·the harness-engineering mindset — agent equals model plus harness
- 02·the ratchet — every mistake becomes a rule
- 03·context engineering — fighting the rot
- 04·long-horizon execution — loops, planning, splits, hooks
- 05·the haas shift — harness-as-a-service and what to build vs buy