Why programmatic video — when AI gen alone isn't enough — step 2 of 7
Three jobs only code does well
AI gen models will keep getting better at single-shot synthesis. They will not, in any version, become good at the three jobs below. These are the jobs that code has always owned and will keep owning, because they're not about taste — they're about structure.
Job 1 — Parametric variation
The canonical example: a YouTube channel ships a "your-year-in-review" video to every subscriber, personalized with their watch history. One template, millions of renders. Spotify Wrapped does this. Netflix's "your top 10 of the year" does this. So does every Shopify store running TikTok ads on autopilot.
The shape is always the same:
template = load_template("ad.html")
for product in catalog:
    video = render(template, product=product)
    upload(video, channel=product.platform)
The model can't do this. You'd have to call the model 100 times with 100 different prompts, and each output would have inconsistent fonts, inconsistent pacing, inconsistent brand. The variation is the point — but the frame around the variation needs to be invariant. That frame is code.
Job 2 — Data-driven rendering
The NFL highlight reel that updates after every Sunday's games. The trading-platform "your portfolio this week" video. The weekly weather forecast in your local-news app. These videos pull from a live database, render a composition against the current data, and ship before the data is stale.
You can't generate them with a prompt because the inputs change every render. The model has no concept of "yesterday's box score." Code reads from the DB, formats the numbers, lays them out on a template, and renders.
The Remotion docs and the HyperFrames data-in-motion.md reference are entire pattern libraries for this — sparklines, animated counters, leaderboards, choropleth maps. All driven by JSON or a database query, none generated by a model.
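The shape of that pipeline, as a minimal sketch: the endpoint URL, the BoxScore fields, and the injected renderComposition function are illustrative assumptions, not any specific library's API.

// Sketch only: the data source, field names, and renderComposition are
// hypothetical; the point is the pull-format-render shape, not the API.
type BoxScore = { home: string; away: string; homePts: number; awayPts: number };

type RenderFn = (compositionId: string, props: unknown) => Promise<string>;

export async function renderWeeklyHighlights(renderComposition: RenderFn): Promise<string> {
  // 1. Read the live data. A prompt-only workflow has no equivalent of this step.
  const res = await fetch("https://example.com/api/box-scores?week=latest");
  const scores: BoxScore[] = await res.json();

  // 2. Format the numbers in code so the template stays dumb and reusable.
  const props = {
    games: scores.map((g) => ({
      title: `${g.away} @ ${g.home}`,
      score: `${g.awayPts}-${g.homePts}`,
    })),
  };

  // 3. Same composition every week; only the props change.
  return renderComposition("weekly-highlights", props);
}

The composition never changes between renders; only the props do, which is what keeps the output on-brand while the data underneath keeps moving.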
Job 3 — Reproducibility under review
This one is boring, but it's why agencies actually adopt programmatic. When a client asks "can you change the CTA color from red to orange and re-render all 47 versions of the ad?", you need:
- The exact same composition, byte-identical except for the one change.
- A deterministic render — no Math.random(), no model nondeterminism, no "well, the AI decided to do it slightly differently this time."
- Re-running last month's render with the same inputs and getting the same MP4.
HyperFrames is explicit about this: the rules forbid Math.random(), Date.now(), and time-based logic for exactly this reason. Use a seeded PRNG. The render must be deterministic so review cycles converge instead of diverging.
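What "seeded" means in practice, as a generic sketch: this is a standard mulberry32-style generator, not the helper any particular library ships.

// Generic seeded PRNG (mulberry32-style). Same seed, same sequence, on every
// machine and every render: the property Math.random() cannot give you.
function seededRandom(seed: number): () => number {
  let state = seed >>> 0;
  return () => {
    state = (state + 0x6d2b79f5) >>> 0;
    let t = state;
    t = Math.imul(t ^ (t >>> 15), t | 1);
    t ^= t + Math.imul(t ^ (t >>> 7), t | 61);
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

const jitter = seededRandom(42);
const offsets = Array.from({ length: 5 }, () => jitter()); // identical on every re-render

Any visual randomness (particle jitter, shuffle order, noise) flows from the seed, so a re-render is a pure function of its inputs and last month's MP4 can be reproduced on demand.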
Try doing review cycles with a generative model. Every "re-roll" produces a different result. The client never gets the same video twice, and you never converge on "approved."
The one-line test
If you can rephrase the brief as "do X to a template with these inputs," it's a code job. If you can rephrase it as "imagine a beautiful scene where X," it's a model job. If it's both ("imagine a beautiful scene of X, then use that shot inside our standard template"), it's hybrid — and the pipeline matters more than any single tool.
The rest of this chapter is about building that pipeline.