
Self-Reflection / Self-Verification
Category:
Agentic AI & Reasoning
Definition
Mechanisms for LLMs or agents to review and correct their own outputs.
Explanation
Self-reflection enables models to critique their own answers, detect inconsistencies, and improve their solutions. The agent generates an initial response, evaluates it against rules, examples, or a separate verifier, then produces a refined answer. This can reduce hallucinations and increase reliability, though it does not eliminate errors. Techniques include critique-and-revise loops, verifier models, multi-pass reasoning, and self-evaluation prompts.
Technical Architecture
Initial Output → Critique Module → Revised Output → Validator
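The four-stage pipeline above can be sketched as a plain function chain. The `llm` callable and the prompt wording are hypothetical stand-ins for any chat-completion client; the stub below exists only so the sketch runs offline.

```python
from typing import Callable

def run_pipeline(task: str, llm: Callable[[str], str]) -> dict:
    """Initial Output -> Critique Module -> Revised Output -> Validator."""
    initial = llm(f"Answer the task: {task}")
    critique = llm(f"List flaws in this answer to '{task}': {initial}")
    revised = llm(f"Rewrite the answer to '{task}', fixing: {critique}\nOld: {initial}")
    verdict = llm(f"Does this answer '{task}' correctly? Reply PASS or FAIL: {revised}")
    return {
        "initial": initial,
        "critique": critique,
        "revised": revised,
        "valid": verdict.strip() == "PASS",
    }

# Offline stub standing in for a real model call.
def fake_llm(prompt: str) -> str:
    if prompt.startswith("Does"):
        return "PASS"
    if prompt.startswith("List"):
        return "Missing units."
    return "42 km"

result = run_pipeline("How far is the station?", fake_llm)
```

In a real deployment the validator stage is often a different, cheaper model or a rule-based checker, so that the generator is not grading its own work.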
Core Components
Verifier model, evaluator prompts, error-checking rules, reasoning paths
Use Cases
Research agents, coding, math, analytics, planning
Pitfalls
Infinite loops, increased latency, high cost
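The first pitfall is usually handled with hard stop conditions. A minimal sketch of a guarded critique-and-revise loop, assuming hypothetical `generate` and `critique` callables in place of real model calls, capped by both an iteration limit and a wall-clock budget:

```python
import time

def guarded_reflect(generate, critique, max_rounds: int = 3, budget_s: float = 10.0):
    """Critique-and-revise with two stop conditions:
    an iteration cap (guards against infinite loops) and a
    wall-clock budget (bounds latency and, indirectly, cost).
    `generate(feedback)` and `critique(answer)` are hypothetical
    stand-ins for model calls; critique returns None when satisfied."""
    start = time.monotonic()
    answer = generate(None)  # initial draft, no feedback yet
    for _ in range(max_rounds):
        if time.monotonic() - start > budget_s:
            break  # time/cost budget exhausted; return best-so-far
        feedback = critique(answer)
        if feedback is None:
            break  # critique found no remaining issues
        answer = generate(feedback)  # revise using the critique
    return answer
```

Returning the best answer so far on timeout, rather than raising, keeps the agent responsive at the cost of a possibly unrefined result.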
LLM Keywords
Self-Reflection, Self-Verification, Critique-Based LLMs
Related Concepts & Frameworks
• Chain of Thought
• ReAct
• Hallucination Mitigation
• Self-Verification Pipeline
