BasicAgent
LLM Prompting: prompt fundamentals, system prompting patterns, and practical guidelines for production workflows.
Executive summary
This repo implements LLM prompting as a structured, multi-layer system: role personas, instructional directives, and a configurable chain-of-thought depth. Prompts are built from templates and enforced with a JSON output contract that includes confidence and location metadata. These practices support auditability and consistent extraction. Use the framework below to standardize prompt construction across workflows.
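As a rough sketch of how these layers compose, the snippet below joins a role persona, instruction prompt, reasoning-depth directive, and output contract into a single prompt. The function name, wording, and schema hint are illustrative assumptions, not the repo's actual templates.

```python
# Illustrative sketch of layered prompt composition.
# Function name, wording, and schema hint are assumptions, not the repo's templates.

def build_prompt(role_persona: str, instruction_prompt: str, cot_depth: int, fields: list) -> str:
    schema_hint = (
        '{"extracted_fields": {...}, "confidence": 0.0, '
        '"reasoning": "...", "location": "..."}'
    )
    return "\n\n".join([
        f"You are {role_persona}.",                                                 # role persona
        instruction_prompt,                                                         # instruction prompt
        f"Reason step by step at depth {cot_depth} on a scale of 1 to 5.",          # zero-shot CoT depth
        f"Extract: {', '.join(fields)}. Use NOT_FOUND for any field that is absent.",
        f"Respond with JSON only, shaped like: {schema_hint}",                      # output contract
    ])

print(build_prompt(
    "an expert document extraction specialist",
    "Extract the requested fields with maximum accuracy.",
    3,
    ["invoice_number", "total_amount"],
))
```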
Logical Framework
Core concepts
- Role persona: the domain-expert identity the model is asked to adopt (role-based prompting).
- Instruction prompt: explicit extraction requirements.
- Zero-shot CoT depth: reasoning depth from 1 to 5.
- Output contract: JSON schema with extracted_fields and metadata.
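A sketch of an output that satisfies this contract is shown below. Whether confidence and location are tracked per field, per document, or both is an assumption here, and the field names and values are hypothetical; the repo's schema may nest them differently.

```python
# Illustrative output satisfying the contract; the exact nesting is assumed, not prescribed.
example_output = {
    "extracted_fields": {
        "invoice_number": {"value": "INV-1041", "confidence": 0.95, "location": "page 1, header"},
        "total_amount": {"value": "NOT_FOUND", "confidence": 0.0, "location": "NOT_FOUND"},
    },
    "confidence": 0.76,   # overall confidence for the extraction
    "reasoning": "Header matched the invoice-number pattern; no total appears in the text.",
    "location": "pages 1-2",
}
```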
Taxonomy
- System prompting: role and rules.
- Task prompting: specific extraction instructions.
- Reasoning prompting: depth-based guidance.
- Output prompting: JSON schema and required keys.
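One common way to map these layers onto a chat-style request is to put the role and rules in the system message and the task, reasoning, and output instructions in the user message. That split is a convention assumed for illustration, not something the repo mandates.

```python
# Mapping the prompt taxonomy onto chat-style messages.
# The system/user split is a common convention assumed here, not prescribed by the repo.

def to_messages(role_persona, rules, task_instructions, cot_depth, schema_description):
    # system prompting: role and rules
    system_prompt = f"You are {role_persona}.\n" + "\n".join(f"- {rule}" for rule in rules)
    user_prompt = "\n\n".join([
        task_instructions,                                                     # task prompting
        f"Think step by step at reasoning depth {cot_depth} of 5.",            # reasoning prompting
        f"Return only valid JSON with this structure:\n{schema_description}",  # output prompting
    ])
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_prompt},
    ]

messages = to_messages(
    role_persona="an expert document extraction specialist",
    rules=["Use NOT_FOUND for missing fields.", "Never invent values."],
    task_instructions="Extract invoice_number and total_amount from the document text.",
    cot_depth=3,
    schema_description='{"extracted_fields": {...}, "confidence": 0.0, "reasoning": "...", "location": "..."}',
)
```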
Workflow (inputs, outputs, checkpoints)
- Choose a template (role_persona + instruction_prompt).
- Select CoT depth based on task complexity.
- Define output JSON structure with required metadata.
- Execute extraction and capture confidence scores.
- Checkpoints: schema compliance and confidence thresholds.
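A condensed sketch of this workflow with both checkpoints follows. `call_llm` stands in for whatever model client is actually used; the complexity-to-depth mapping and the 0.7 threshold are arbitrary examples.

```python
# Workflow sketch: template -> CoT depth -> output contract -> extraction -> checkpoints.
# call_llm is a placeholder client; the depth mapping and 0.7 threshold are arbitrary examples.
import json

REQUIRED_KEYS = {"extracted_fields", "confidence", "reasoning", "location"}

def run_extraction(template: str, document: str, complexity: str, call_llm, threshold: float = 0.7) -> dict:
    cot_depth = {"low": 1, "medium": 3, "high": 5}[complexity]   # depth chosen from task complexity
    prompt = template.format(cot_depth=cot_depth, document=document)
    result = json.loads(call_llm(prompt))                        # checkpoint: output parses as JSON

    missing = REQUIRED_KEYS - set(result)                        # checkpoint: schema compliance
    if missing:
        raise ValueError(f"output missing required keys: {missing}")
    if result["confidence"] < threshold:                         # checkpoint: confidence threshold
        raise ValueError(f"confidence {result['confidence']} below threshold {threshold}")
    return result
```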
Practical guidance and guardrails
Do:
- Use domain-specific roles from templates.
- Require NOT_FOUND as the value for any field missing from the source.
- Include confidence and location metadata in outputs.
Do not:
- Issue ambiguous instructions without an explicit output schema.
- Accept outputs without reasoning metadata.
Failure modes and mitigations:
- Inconsistent outputs: enforce JSON response format.
- Overconfidence: apply confidence thresholds.
- Missing context: require location tracking.
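A guardrail sketch covering these failure modes is shown below: it rejects non-JSON output, flags low-confidence fields, and flags extracted values without a location. The per-field schema and the 0.7 threshold are assumptions carried over from the earlier examples.

```python
# Guardrail sketch for the three failure modes above.
# Per-field schema and the 0.7 threshold are assumptions, not the repo's actual settings.
import json

def check_guardrails(raw_output: str, threshold: float = 0.7) -> list:
    violations = []
    try:
        result = json.loads(raw_output)            # mitigation: enforce JSON response format
    except json.JSONDecodeError:
        return ["output is not valid JSON"]

    for name, entry in result.get("extracted_fields", {}).items():
        if entry.get("value") != "NOT_FOUND":
            if entry.get("confidence", 0.0) < threshold:   # mitigation: confidence threshold
                violations.append(f"{name}: confidence below {threshold}")
            if not entry.get("location"):                  # mitigation: require location tracking
                violations.append(f"{name}: extracted value has no location")
    return violations
```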
Evaluation criteria
- JSON validity rate.
- Confidence-threshold pass rate.
- Location coverage for extracted fields.
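These three rates could be computed over a batch of raw model outputs roughly as follows; the schema nesting and the 0.7 threshold are the same assumptions used above.

```python
# Batch evaluation sketch: JSON validity, confidence pass rate, location coverage.
# Schema nesting and the 0.7 threshold are assumptions, consistent with the examples above.
import json

def evaluate(raw_outputs: list, threshold: float = 0.7) -> dict:
    parsed = []
    for raw in raw_outputs:
        try:
            parsed.append(json.loads(raw))
        except json.JSONDecodeError:
            pass  # invalid JSON lowers the validity rate

    fields = [f for r in parsed for f in r.get("extracted_fields", {}).values()]
    extracted = [f for f in fields if f.get("value") != "NOT_FOUND"]
    return {
        "json_validity_rate": len(parsed) / len(raw_outputs) if raw_outputs else 0.0,
        "confidence_pass_rate": (
            sum(f.get("confidence", 0.0) >= threshold for f in extracted) / len(extracted)
            if extracted else 0.0
        ),
        "location_coverage": (
            sum(bool(f.get("location")) for f in extracted) / len(extracted)
            if extracted else 0.0
        ),
    }
```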
Example pattern (IRZ-CoT)
Role: expert document extraction specialist.
Instructions: extract fields with maximum accuracy.
Reasoning: step-by-step at the configured CoT depth.
Output: JSON with extracted_fields, confidence, reasoning, and location.
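A concrete rendering of the pattern, using the same layering sketched earlier; the wording and field names are hypothetical examples, not the repo's template text.

```python
# Illustrative IRZ-CoT prompt; the wording and field names are hypothetical examples.
IRZ_COT_PROMPT = "\n\n".join([
    # Role
    "You are an expert document extraction specialist.",
    # Instructions
    "Extract invoice_number and total_amount from the document with maximum accuracy. "
    "Use NOT_FOUND if a field is absent.",
    # Reasoning (zero-shot CoT at the configured depth)
    "Reason step by step at depth 3 of 5 before answering.",
    # Output contract
    "Respond with JSON only, containing extracted_fields, confidence, reasoning, and location.",
])
print(IRZ_COT_PROMPT)
```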
Related pages
- System prompting: /system-prompting/
- System prompting tools: /system-prompting/tools/
- System prompting guidelines: /system-prompting/guidelines/
- System prompting examples: /system-prompting/examples/
- LLM security: /llm-security/