promptdojo_

Write score_technique(name, year_introduced, model_native_in) that returns a dict with two fields:

  • verdict: one of "still useful", "redundant", "counterproductive", "context-dependent"
  • reason: a short string explaining the verdict

Rules:

  • If model_native_in is None, the technique is still useful. The model never absorbed it, so you still have to provide it. Reason: "model has not absorbed this natively".
  • If model_native_in <= 2025 AND the technique is reasoning-related, the technique is redundant. The model now does this internally. Reason: "model does this internally since {year}".
  • If model_native_in <= 2025 AND the technique is NOT reasoning-related, the technique is counterproductive. It fights the native API. Reason: "native API exists since {year}; prompt-level version conflicts".
  • Otherwise (model_native_in > 2025), the technique is context-dependent. Some models have it, some don't. Reason: "depends on model class".

Reasoning-related techniques (use this exact set): {"chain-of-thought", "think-step-by-step", "self-critique"}.

Two techniques demoed. Expected output:

chain-of-thought: {'verdict': 'redundant', 'reason': 'model does this internally since 2024'}
structured-output-schema: {'verdict': 'counterproductive', 'reason': 'native API exists since 2024; prompt-level version conflicts'}
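A minimal sketch that satisfies the rules and reproduces the demo output. Note the rules never consult `year_introduced`, so the demo values below (2022, 2023) are placeholders, not facts from the spec:

```python
REASONING_TECHNIQUES = {"chain-of-thought", "think-step-by-step", "self-critique"}

def score_technique(name, year_introduced, model_native_in):
    # Never absorbed natively: the prompt-level technique still matters.
    if model_native_in is None:
        return {"verdict": "still useful",
                "reason": "model has not absorbed this natively"}
    # Absorbed by 2025: redundant if reasoning-related, else it fights the native API.
    if model_native_in <= 2025:
        if name in REASONING_TECHNIQUES:
            return {"verdict": "redundant",
                    "reason": f"model does this internally since {model_native_in}"}
        return {"verdict": "counterproductive",
                "reason": f"native API exists since {model_native_in}; "
                          "prompt-level version conflicts"}
    # Absorbed only after 2025: varies by model class.
    return {"verdict": "context-dependent", "reason": "depends on model class"}

# Demo: the two techniques from the expected output, both native since 2024.
for name, year, native in [("chain-of-thought", 2022, 2024),
                           ("structured-output-schema", 2023, 2024)]:
    print(f"{name}: {score_technique(name, year, native)}")
```

Branch order matters: the `None` check must come first, because `None <= 2025` raises a `TypeError` in Python 3.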
