prompting
12 concepts in this domain
-
Assistant Message
ArtifactThe output generated by the LLM in response to the prompt, representing the "assistant" turn in a conversation. Assistant messages contain the model's answers, generations, or actions (like tool calls...
Also: Model Response, AI Response, Completion
-
Chain-of-Thought
ProcessA prompting technique that elicits intermediate reasoning steps from an LLM before it produces a final answer. By asking the model to "think step by step" or showing examples with reasoning traces, Ch...
Also: CoT, Step-by-Step Reasoning, Think Step by Step
-
Context Engineering
ProcessThe holistic practice of designing and managing everything that goes into an LLM's context window: system prompts, retrieved documents, conversation history, tool definitions, examples, and user input...
Also: Context Design, Context Management, Prompt Architecture
-
Context File
ArtifactA file placed in a project repository that provides persistent context, instructions, and constraints to AI coding assistants. Context files are automatically loaded when an AI assistant works on th...
Also: CLAUDE.md, AGENTS.md, .cursorrules, Project Context, AI Configuration File
-
Few-Shot Prompting
ProcessA prompting technique where examples of the desired input-output behavior are included in the prompt to guide the model's response. Instead of just describing what you want, you show the model example...
Also: Few-Shot Learning, In-Context Examples, Example-Based Prompting
-
In-Context Learning
ProcessThe ability of large language models to learn and perform new tasks from examples provided in the prompt, without any parameter updates. The model "learns" the pattern from examples and applies it to ...
Also: ICL, Learning from Examples, Prompt-Based Learning
-
Jailbreak
ProcessTechniques to bypass an LLM's safety guardrails and content policies, causing it to generate outputs it was trained or configured to refuse. Jailbreaks target the model itself (its RLHF training and s...
Also: Jailbreaking, Guardrail Bypass, Safety Bypass
-
Prompt
ArtifactThe text input provided to an LLM to elicit a response. A prompt can be a simple question, a complex instruction, or a carefully structured template with examples and context. Prompts are the primary ...
Also: Input, Query
-
Prompt Injection
ProcessAn attack where malicious input is crafted to override or manipulate an LLM's instructions, causing it to ignore its system prompt, reveal hidden information, or perform unintended actions. Prompt inj...
Also: Injection Attack, Prompt Hacking
-
Prompt Template
ArtifactA structured prompt with placeholders (variables) that are filled in at runtime with specific values. Templates separate the prompt structure (fixed text, instructions, formatting) from the dynamic co...
Also: Template, Prompt Format, Message Template
-
System Prompt
ArtifactInstructions provided to an LLM that define its persona, behavior, constraints, and capabilities for the conversation. System prompts are typically set by the application developer (not the end user) ...
Also: System Message, System Instructions, Meta Prompt
-
User Message
ArtifactA message in the conversation from the human user (or the application on behalf of the user), as opposed to system messages or assistant responses. User messages are the primary input mechanism for LL...
Also: User Input, User Query, Human Message