Agent
Also known as: AI Agent, Intelligent Agent, LLM Agent
Definition
A software entity that perceives its environment, reasons about what actions to take, executes those actions (often via tools), and observes the results in a loop until achieving a goal or being terminated. In LLM-based systems, an agent typically consists of: (1) a language model for reasoning, (2) a set of available tools/actions, (3) memory or state, and (4) a loop that iterates perception-reasoning-action until completion.
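To make the four components concrete, here is a minimal, self-contained sketch of such a loop. The names (`reason`, `execute`, `AgentState`) and the stubbed search tool are illustrative assumptions, not taken from any particular framework; a real implementation would prompt an LLM inside `reason` and call real tools inside `execute`.

```python
from dataclasses import dataclass, field

# Hypothetical types for illustration only; real frameworks define their own.
@dataclass
class Action:
    tool: str       # which tool to invoke
    argument: str   # input for that tool

@dataclass
class AgentState:
    goal: str
    observations: list[str] = field(default_factory=list)  # (3) memory/state
    done: bool = False

def reason(state: AgentState) -> Action | None:
    """Stand-in for (1), the language model. A real agent would prompt an
    LLM with the goal and observations and parse its chosen action; this
    stub searches once and then decides it is finished."""
    if not state.observations:
        return Action(tool="search", argument=state.goal)
    return None

def execute(action: Action) -> str:
    """Stand-in for (2), the set of available tools."""
    tools = {"search": lambda query: f"stub results for {query!r}"}
    return tools[action.tool](action.argument)

def run_agent(goal: str, max_steps: int = 5) -> AgentState:
    """(4) The perception-reasoning-action loop, with a step budget so the
    agent terminates even if it never decides it is done."""
    state = AgentState(goal=goal)
    for _ in range(max_steps):
        action = reason(state)                   # reason about what to do next
        if action is None:                       # the model signals completion
            state.done = True
            break
        observation = execute(action)            # act via a tool...
        state.observations.append(observation)   # ...and observe the result
    return state

print(run_agent("what is an agent?"))
```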
What this is NOT
- Not merely a chatbot with a system prompt (chatbots respond; agents act)
- Not the same as a workflow (workflows follow predefined paths; agents decide dynamically)
- Not necessarily fully autonomous (human-in-the-loop agents are still agents)
- Not defined by having memory (stateless agents exist)
- Not defined by using tools (reasoning-only agents exist, though rare)
Alternative Interpretations
Different communities use this term differently:
academic-rl
A system that perceives and acts in an environment to maximize cumulative reward over time, as formalized in reinforcement learning and Markov Decision Processes.
Sources: Sutton & Barto, Reinforcement Learning: An Introduction (2018); Russell & Norvig, Artificial Intelligence: A Modern Approach
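For reference, the standard formalization in Sutton & Barto's notation: the agent selects actions according to a policy π so as to maximize the expected discounted return, where γ ∈ [0, 1) is the discount factor and R_t the reward at step t.

```latex
% Standard RL objective: maximize the expected discounted return G_t
% by choice of policy \pi.
G_t = \sum_{k=0}^{\infty} \gamma^{k} R_{t+k+1},
\qquad
\pi^{*} = \arg\max_{\pi} \; \mathbb{E}_{\pi}\!\left[ G_t \right]
```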
saas-vendors
Any AI-powered product that can take actions on behalf of users, often marketed as "autonomous" regardless of actual capability level.
This sense is often inflated. Products labeled "agents" range from simple chatbots with API calls to genuine autonomous systems. The term has become diluted in marketing contexts.
Sources: Various vendor marketing materials (2023-2024)
philosophy
An entity with intentionality—the capacity for mental states that are about or directed at objects or states of affairs. Agents have beliefs, desires, and the ability to act on them.
Sources: Stanford Encyclopedia of Philosophy: Agency; Dennett, The Intentional Stance (1987)
llm-practitioners
A system built on top of an LLM that uses a loop of reasoning and tool execution to accomplish tasks that require multiple steps or dynamic decision-making.
Sources: Anthropic: Building Effective Agents (2024); LangChain documentation; AutoGPT, BabyAGI, and similar projects
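In this sense, the defining machinery is the reasoning/tool-execution turn: the model emits a structured tool call, the harness parses and executes it, and the observation is fed back for the next round of reasoning. A minimal sketch of the execution half; the `TOOLS` registry and the JSON call format are illustrative assumptions, not any framework's actual API.

```python
import json

# Hypothetical tool registry: tool names and the call schema below are
# assumptions for illustration, not a specific vendor's interface.
TOOLS = {
    "web_search": lambda query: f"stub results for {query!r}",
    "read_file": lambda path: f"stub contents of {path}",
}

def dispatch(model_output: str) -> str:
    """Parse the model's tool call and run the named tool.
    Assumes the model was prompted to answer with JSON of the form
    {"tool": "...", "input": "..."}."""
    call = json.loads(model_output)
    return TOOLS[call["tool"]](call["input"])

# One reasoning/execution turn; in a full agent the returned observation
# is appended to the conversation and the model is called again.
print(dispatch('{"tool": "web_search", "input": "ReAct prompting"}'))
```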
Examples
- A ReAct agent that searches the web, reasons about results, and synthesizes an answer
- A coding agent that reads requirements, writes code, runs tests, and iterates on failures
- A research agent that formulates queries, retrieves papers, extracts key findings, and compiles a summary
- An email agent that reads messages, decides which need responses, drafts replies, and sends them after approval
Counterexamples
Things that might seem like agents but are not:
- A simple RAG chatbot that retrieves context and generates a response (no decision loop)
- A workflow that always executes steps A→B→C in sequence regardless of input (contrast with the sketch after this list)
- A classification model that outputs a label (perception without action)
- A script that calls an API and returns results (no reasoning step)
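The workflow counterexample is worth making concrete: a fixed pipeline runs the same steps in the same order for every input, with no point at which a model decides what to do next. A sketch, with hypothetical placeholder step functions:

```python
def step_a(text: str) -> str: return text.strip()         # e.g. normalize input
def step_b(text: str) -> str: return text.lower()         # e.g. transform
def step_c(text: str) -> str: return f"result: {text}"    # e.g. format output

def workflow(user_input: str) -> str:
    """Always A -> B -> C, regardless of input: no reasoning step ever
    chooses the next action, so this is a workflow, not an agent."""
    return step_c(step_b(step_a(user_input)))

print(workflow("  Hello World  "))
```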
Relations
- specializes autonomous-agent (Autonomous agents are agents with minimal human oversight)
- specializes tool-using-agent (Agents that specifically use external tools)
- inTensionWith workflow (Workflows are predefined; agents decide dynamically)
- requires reasoning (Agents need some form of reasoning to decide actions)
- overlapsWith agentic-system (Agentic systems may contain multiple agents or agent-like components)