promptdojo_
Checkpoint

One last thing before we move on. Same surface as a write step — but the lesson doesn't complete until this passes.

Final drill. Synthesize this lesson into one function: audit_prompts(techniques) that takes a list of technique dicts and returns a dict with TWO keys:

  • verdicts: a dict mapping technique name → verdict string
  • counts: a dict mapping each verdict to how many techniques got that verdict. Include all four verdicts as keys even if the count is zero.

Each technique dict has three fields: name, year_introduced, model_native_in (which may be None).

Use the same verdict rules as step 07:

  • model_native_in is None → "still useful"
  • model_native_in <= 2025 and name in reasoning set → "redundant"
  • model_native_in <= 2025 and name NOT in reasoning set → "counterproductive"
  • otherwise → "context-dependent"

Reasoning set: {"chain-of-thought", "think-step-by-step", "self-critique"}.

Five techniques demoed. Expected output:

verdicts: {'chain-of-thought': 'redundant', 'few-shot-examples': 'still useful', 'structured-output-schema': 'counterproductive', 'self-critique': 'redundant', 'multi-tool-orchestration': 'context-dependent'}
counts: {'still useful': 1, 'redundant': 2, 'counterproductive': 1, 'context-dependent': 1}
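If you get stuck, here's one possible shape of a solution, following the verdict rules and reasoning set above. The `year_introduced` and `model_native_in` values in the demo list are illustrative assumptions, chosen to reproduce the verdicts shown in the expected output:

```python
# Sketch of audit_prompts using the step-07 verdict rules.
REASONING_SET = {"chain-of-thought", "think-step-by-step", "self-critique"}

def audit_prompts(techniques):
    verdicts = {}
    # Seed all four verdicts at zero so every key is present,
    # even when a verdict never occurs.
    counts = {
        "still useful": 0,
        "redundant": 0,
        "counterproductive": 0,
        "context-dependent": 0,
    }
    for t in techniques:
        native = t["model_native_in"]
        if native is None:
            verdict = "still useful"
        elif native <= 2025 and t["name"] in REASONING_SET:
            verdict = "redundant"
        elif native <= 2025:
            verdict = "counterproductive"
        else:
            verdict = "context-dependent"
        verdicts[t["name"]] = verdict
        counts[verdict] += 1
    return {"verdicts": verdicts, "counts": counts}

# Five demo techniques (years here are made-up assumptions that
# happen to produce the expected verdicts above).
demo = [
    {"name": "chain-of-thought", "year_introduced": 2022, "model_native_in": 2024},
    {"name": "few-shot-examples", "year_introduced": 2020, "model_native_in": None},
    {"name": "structured-output-schema", "year_introduced": 2023, "model_native_in": 2024},
    {"name": "self-critique", "year_introduced": 2022, "model_native_in": 2025},
    {"name": "multi-tool-orchestration", "year_introduced": 2023, "model_native_in": 2026},
]

result = audit_prompts(demo)
```

Note the order of the branches: the `None` check must come first, since comparing `None <= 2025` would raise a TypeError.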
