promptdojo_
Checkpoint

One last thing before we move on. Same surface as a write step — but the lesson doesn't complete until this passes.

Final drill of the capstone. Build the agent loop running with MCP-sourced tools. Write run_mcp_agent(question, tools_list_response, mcp_call, fake_model, max_iters) that:

  • Bridges tools_list_response via bridge_mcp_tools(...) (provided) into tools, schemas.
  • Loops up to max_iters times calling fake_model(messages, list(tools.keys())).
  • The model returns a dict like {"stop_reason": "...", "content": [...]} where each content block is either {"type": "text", "text": "..."} or {"type": "tool_use", "id": "...", "name": "...", "input": {...}}.
  • On end_turn: collect text blocks, return {"ok": True, "answer": <joined text>, "iters": <iter num>, "tool_calls": <count of tool_use blocks across all turns>, "tool_errors": <count of TOOL_ERROR results>}.
  • On tool_use: dispatch each tool_use block through tools[block["name"]](**block["input"]). Count any result that starts with "TOOL_ERROR:" as a tool error. Append the assistant turn and the tool-result user turn the standard way.
  • On cap: return {"ok": False, "error": "capped", "iters": max_iters}.
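The loop above can be sketched as follows. This is a minimal, hypothetical reading of the spec: `bridge_mcp_tools` here is a stand-in stub for the provided helper, and the space-joined answer text and the exact shape of the `tool_result` user turn are assumptions, not part of the drill's harness.

```python
def bridge_mcp_tools(tools_list_response, mcp_call):
    """Hypothetical stand-in for the provided bridge: wraps each MCP tool
    entry as a plain callable that forwards keyword args to mcp_call."""
    tools, schemas = {}, {}
    for entry in tools_list_response.get("tools", []):
        name = entry["name"]
        schemas[name] = entry.get("inputSchema", {})
        # Bind `name` per-iteration so each closure calls its own tool.
        tools[name] = (lambda n: lambda **kw: mcp_call(n, kw))(name)
    return tools, schemas


def run_mcp_agent(question, tools_list_response, mcp_call, fake_model, max_iters):
    tools, schemas = bridge_mcp_tools(tools_list_response, mcp_call)
    messages = [{"role": "user", "content": question}]
    tool_calls = 0
    tool_errors = 0

    for i in range(1, max_iters + 1):
        response = fake_model(messages, list(tools.keys()))

        if response["stop_reason"] == "end_turn":
            # Collect text blocks; joining with spaces is an assumption.
            answer = " ".join(
                b["text"] for b in response["content"] if b["type"] == "text"
            )
            return {"ok": True, "answer": answer, "iters": i,
                    "tool_calls": tool_calls, "tool_errors": tool_errors}

        # tool_use turn: record the assistant turn, then dispatch each block.
        messages.append({"role": "assistant", "content": response["content"]})
        results = []
        for block in response["content"]:
            if block["type"] != "tool_use":
                continue
            tool_calls += 1
            result = tools[block["name"]](**block["input"])
            if isinstance(result, str) and result.startswith("TOOL_ERROR:"):
                tool_errors += 1
            results.append({"type": "tool_result",
                            "tool_use_id": block["id"], "content": result})
        messages.append({"role": "user", "content": results})

    return {"ok": False, "error": "capped", "iters": max_iters}
```

A scripted `fake_model` that returns one tool_use turn and then an end_turn would produce iters=2 and tool_calls=1, matching the first expected line.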

Two cases run for you. Expected output:

ok=True iters=2 tool_calls=1 tool_errors=0 answer=Found Tokyo's best ramen.
ok=True iters=2 tool_calls=1 tool_errors=1 answer=I couldn't search; tool was down.
