System Prompt
Also known as: System Message, System Instructions, Meta Prompt
Definition
Instructions provided to an LLM that define its persona, behavior, constraints, and capabilities for the conversation. System prompts are typically set by the application developer (not the end user) and persist across turns. They're the primary mechanism for customizing LLM behavior without fine-tuning.
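The mechanics above can be sketched as plain data: the developer-set system prompt is prepended to every request, so it persists across turns while the user/assistant history grows. This is a minimal illustration using the common chat-message role convention; the helper name `build_messages` is hypothetical, not part of any API.

```python
# Minimal sketch: a developer-set system prompt combined with user turns
# at inference time. Roles ("system", "user", "assistant") follow the
# common chat-message convention; build_messages is a hypothetical helper.

SYSTEM_PROMPT = "You are a helpful assistant that responds in formal English."

def build_messages(history, new_user_message):
    """Prepend the system prompt to every request, so it persists
    across turns even as the conversation history grows."""
    return (
        [{"role": "system", "content": SYSTEM_PROMPT}]
        + history
        + [{"role": "user", "content": new_user_message}]
    )

history = [
    {"role": "user", "content": "Hello"},
    {"role": "assistant", "content": "Good day. How may I assist you?"},
]
messages = build_messages(history, "What is an LLM?")

# The system prompt is always first, regardless of history length.
assert messages[0]["role"] == "system"
```

Note that the end user never writes the system prompt; they only contribute the `user` messages in `history`.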
What this is NOT
- Not the user's message (system prompts are developer-set)
- Not the model's response (system prompts are inputs)
- Not fine-tuning (system prompts are inference-time configuration)
Alternative Interpretations
Different communities use this term differently:
llm-practitioners
The "system" role message in the Chat Completions API that precedes user messages. It sets context, rules, and persona that the model should maintain throughout the conversation.
Sources: OpenAI Chat Completions API, Anthropic system prompt documentation, Claude character documentation
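The two APIs cited above place the system prompt differently: OpenAI's Chat Completions API takes it as the first message with the `system` role, while Anthropic's Messages API takes it as a top-level `system` parameter. A hedged sketch of the two request shapes, with illustrative placeholder model names:

```python
# Contrast of system-prompt placement in two providers' request bodies.
# Model names below are placeholders, not recommendations.

system_text = "You are a customer service bot for Acme Corp."

# OpenAI-style request body: the system prompt is the first message.
openai_request = {
    "model": "gpt-4o",
    "messages": [
        {"role": "system", "content": system_text},
        {"role": "user", "content": "Where is my order?"},
    ],
}

# Anthropic-style request body: the system prompt is a separate field,
# and the messages list alternates only user/assistant roles.
anthropic_request = {
    "model": "claude-sonnet-4",
    "system": system_text,
    "messages": [
        {"role": "user", "content": "Where is my order?"},
    ],
}

assert openai_request["messages"][0]["role"] == "system"
assert "system" not in [m["role"] for m in anthropic_request["messages"]]
```

Either way, the model receives the same instruction before any user content; only the transport differs.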
Examples
- You are a helpful assistant that responds in formal English.
- You are a Python coding expert. Only provide Python code, no other languages.
- You are a customer service bot for Acme Corp. Here are our policies: ...
- Respond only in valid JSON with keys: answer, confidence, sources.
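A system prompt like the last example constrains the output format, but the application should still validate replies, since models can deviate. A minimal sketch of such a check, assuming the key names from that example; the helper `is_valid_reply` is hypothetical, not part of any API:

```python
# Illustrative validation of a model reply against the JSON-only
# system prompt above (keys: answer, confidence, sources).
import json

REQUIRED_KEYS = {"answer", "confidence", "sources"}

def is_valid_reply(text):
    """Return True if `text` is a JSON object with exactly the required keys."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and set(data) == REQUIRED_KEYS

assert is_valid_reply('{"answer": "42", "confidence": 0.9, "sources": []}')
assert not is_valid_reply("Sure! The answer is 42.")
```

On a failed check, a common pattern is to retry the request or re-prompt the model with the parse error.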
Counterexamples
Things that might seem like a System Prompt but are not:
- User's question (that's a user message)
- The model's response
- Fine-tuned model behavior (that's in the weights)
Relations
- specializes prompt (System prompts are part of the full prompt)
- overlapsWith prompt-injection (Prompt injection can attempt to override system prompts)
- overlapsWith context-engineering (System prompt design is part of context engineering)