Reasoning
Also known as: LLM Reasoning, Chain-of-Thought, Deliberation
Definition
The process by which an agent (or LLM) derives conclusions, makes decisions, or solves problems through intermediate steps rather than direct pattern matching. In LLM contexts, reasoning typically manifests as the model "thinking out loud"—generating intermediate text that represents reasoning steps before producing a final answer or action decision.
What this is NOT
- Not the same as generating text (reasoning implies structured thought toward a goal)
- Not retrieval or memorization (reasoning produces novel conclusions)
- Not perception (reasoning is what happens after perceiving)
Alternative Interpretations
Different communities use this term differently:
llm-practitioners
The capability of LLMs to solve multi-step problems by generating intermediate reasoning tokens. It is elicited through prompting techniques like Chain-of-Thought (CoT) or built into models trained for reasoning (o1, DeepSeek-R1).
Sources: Chain-of-Thought paper (Wei et al., 2022), OpenAI o1 documentation, DeepSeek-R1 paper (2025)
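The prompting side of this can be sketched in a few lines. The example below builds a zero-shot CoT prompt using the "Let's think step by step" trigger and extracts a final answer from a step-by-step completion; the model call itself is left out, and the `Answer:` convention for marking the final answer is an assumption for illustration, not part of any specific API.

```python
# Minimal sketch of zero-shot Chain-of-Thought prompting.
# The LLM call is stubbed out: in practice you would send `prompt`
# to a model API and receive a completion string back.

COT_TRIGGER = "Let's think step by step."

def build_cot_prompt(question: str) -> str:
    """Append the CoT trigger so the model emits intermediate reasoning."""
    return f"Q: {question}\nA: {COT_TRIGGER}"

def extract_final_answer(completion: str) -> str:
    """Take the text after the last 'Answer:' marker as the final answer.

    Assumes the prompt (or a follow-up instruction) asks the model to
    end its reasoning with 'Answer: <value>'. Falls back to the whole
    completion if no marker is present.
    """
    marker = "Answer:"
    idx = completion.rfind(marker)
    if idx == -1:
        return completion.strip()
    return completion[idx + len(marker):].strip()

# A completion a reasoning model might plausibly return:
completion = (
    "There are 3 boxes with 4 apples each, so 3 * 4 = 12 apples.\n"
    "Answer: 12"
)
print(build_cot_prompt("How many apples are in 3 boxes of 4?"))
print(extract_final_answer(completion))  # 12
```

The intermediate sentence about the multiplication is the "reasoning token" part; only the text after the marker is treated as the answer.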
cognitive-science
The cognitive process of drawing inferences, applying rules, and constructing arguments. Distinguished from intuition (fast, automatic) as deliberate, sequential thought (System 2 in Kahneman's framework).
Sources: Kahneman: Thinking, Fast and Slow, Cognitive psychology literature
philosophy
The faculty of drawing valid conclusions from premises. Includes deductive reasoning (necessary conclusions), inductive reasoning (probable conclusions), and abductive reasoning (best explanations).
Sources: Logic and epistemology literature
Examples
- Chain-of-Thought prompting: 'Let's think step by step...'
- o1 generating hundreds of hidden reasoning tokens before answering
- An agent reasoning about which tool to use based on the task
- Working through a math problem step by step
Counterexamples
Things that might seem like Reasoning but are not:
- Directly outputting an answer without intermediate steps
- Retrieving a pre-computed answer from memory
- Pattern-matching to a memorized template
Relations
- requires agent-loop (Reasoning is what happens in the "think" phase of the loop)
- overlapsWith planning (Planning is reasoning about future actions)
- overlapsWith reflection (Reflection is reasoning about past reasoning)
- requires large-language-model (LLM agents use LLMs for reasoning)
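The "think" phase of the agent loop mentioned above can be illustrated with a toy decision step. Everything here is a hypothetical sketch: the tool names and the keyword heuristic stand in for what an LLM would actually do, namely generate a rationale and then commit to an action.

```python
# Toy sketch of the "think" phase in a perceive-think-act agent loop.
# In a real agent the rationale and tool choice would come from an LLM;
# here a keyword heuristic stands in so the example is self-contained.

from dataclasses import dataclass

@dataclass
class Thought:
    rationale: str  # intermediate reasoning text
    tool: str       # chosen action for the "act" phase

def decide(task: str) -> Thought:
    """Reason about which tool fits the task, then pick one."""
    lowered = task.lower()
    if any(w in lowered for w in ("sum", "multiply", "compute")):
        return Thought("The task asks for arithmetic, so use the calculator.",
                       "calculator")
    if "search" in lowered or "find" in lowered:
        return Thought("The task needs external facts, so use web search.",
                       "web_search")
    return Thought("No tool clearly applies; answer directly.", "respond")

thought = decide("Compute the sum of 2 and 3")
print(thought.rationale)
print(thought.tool)  # calculator
```

The point of the structure is that the rationale is produced before the action is chosen, which is what distinguishes this from directly pattern-matching a task to a tool.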
Implementations
Tools and frameworks that implement this concept: