BasicAgent

System Prompting: Techniques + Templates

System prompting techniques for reliable outputs—role, constraints, output contracts, and how to test prompts without guesswork.

A system prompt is the top-level instruction that sets rules for behavior and output. If you want reliable results (especially in apps), system prompting is the easiest leverage point.

Direct answer

A strong system prompt does four things:

  1. Defines the role (scope + tone)
  2. States the task objective (what “success” means)
  3. Sets constraints (what to do when uncertain, what not to do)
  4. Locks the output format (so results are machine-usable)

A simple framework (role → task → constraints → output)

  • Role: what the assistant is (and is not).
  • Task: the job to complete.
  • Constraints: guardrails for ambiguity, safety, and failure modes.
  • Output contract: the exact format you want back.

Drop-in system prompt template

SYSTEM:
You are {role}. Stay within scope and be concise.

RULES:
- If information is missing, say what’s missing. Do not guess.
- Ask at most 1 clarifying question if needed.
- If you can’t comply, explain why in 1 sentence and offer a safe alternative.

OUTPUT:
Return exactly one of:
- Bullets (max {n})
- A table
- JSON matching this schema: {schema}

Practical guardrails that improve reliability

  • Uncertainty rule: “Don’t guess. List missing inputs.”
  • Format lock: “Output JSON only” (or “Output bullets only”).
  • Max length: “Max 120 words” or “Max 10 bullets.”
  • One-question rule: stops multi-turn wandering.
  • Refusal boundary: specify what’s out of scope.

How to test system prompts (fast)

  1. Keep a small set of test inputs (easy + hard + adversarial).
  2. Save prompt versions.
  3. Inspect the last request payload (“prompt trace”) when results surprise you.

If you want a lightweight sandbox for this, start here: /prompt/