Few-Shot Prompting
Also known as: Few-Shot Learning, In-Context Examples, Example-Based Prompting
Definition
A prompting technique where examples of the desired input-output behavior are included in the prompt to guide the model's response. Instead of just describing what you want, you show the model examples of correct behavior. Few-shot prompting leverages the model's in-context learning capability to generalize from examples to new inputs.
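As a minimal sketch in Python: the prompt is just the demonstrations concatenated ahead of the new input. The labels, separator, helper name, and the country-to-capital toy task below are illustrative assumptions, not part of any standard.

```python
def build_few_shot_prompt(examples, new_input):
    """Concatenate demonstrated input-output pairs ahead of the new input."""
    blocks = [f"Input: {inp}\nOutput: {out}" for inp, out in examples]
    blocks.append(f"Input: {new_input}\nOutput:")  # left open for the model to complete
    return "\n\n".join(blocks)

# Two demonstrations establish the pattern (country -> capital); the model
# is expected to generalize to the final, unanswered input.
prompt = build_few_shot_prompt(
    examples=[("France", "Paris"), ("Japan", "Tokyo")],
    new_input="Canada",
)
print(prompt)
```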
What this is NOT
- Not fine-tuning (examples are in the prompt, not used for training)
- Not zero-shot (zero-shot means no examples)
- Not model-generated demonstrations (the examples are supplied by the prompter, not produced by the model)
Alternative Interpretations
Different communities use this term differently:
llm-practitioners
Including 2-10 example input-output pairs in the prompt before the actual query. The model learns the pattern from examples and applies it to the new input. Variants: zero-shot (no examples), one-shot (one example), few-shot (multiple examples).
Sources: GPT-3 paper (Brown et al., 2020), Prompt engineering guides
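A sketch of the zero-/one-/few-shot spectrum, parameterized by how many demonstrations are included. It reuses the sentiment example from the Examples section below; the exact formatting of the demonstrations is an assumption, not a standard.

```python
# Demonstrations taken from the sentiment example below.
EXAMPLES = [
    ("I love this product!", "Positive"),
    ("Worst purchase ever.", "Negative"),
    ("It's okay, nothing special.", "Neutral"),
]

def sentiment_prompt(query, n_examples):
    """n_examples = 0 -> zero-shot, 1 -> one-shot, 2+ -> few-shot."""
    lines = ["Classify the sentiment:"]
    lines += [f'Text: "{text}" -> {label}' for text, label in EXAMPLES[:n_examples]]
    lines.append(f'Text: "{query}" ->')
    return "\n".join(lines)

query = "The quality exceeded my expectations!"
zero_shot = sentiment_prompt(query, n_examples=0)  # task description only
one_shot = sentiment_prompt(query, n_examples=1)   # one demonstration
few_shot = sentiment_prompt(query, n_examples=3)   # multiple demonstrations
print(few_shot)
```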
Examples
- Classify the sentiment: Text: "I love this product!" -> Positive; Text: "Worst purchase ever." -> Negative; Text: "It's okay, nothing special." -> Neutral; Text: "The quality exceeded my expectations!" -> ?
- Showing format examples: Input: X, Output: {field1: ..., field2: ...} (see the sketch after this list)
- Translation examples: EN: Hello -> ES: Hola, EN: Goodbye -> ES: Adiós
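For chat models, one common way to encode few-shot examples is as alternating user/assistant turns. The sketch below does this for a format-teaching case like the second bullet above; the field names, the system message, and the messages-list structure are assumptions for illustration, not a requirement of any particular API.

```python
import json

# Demonstration pairs: free-text input -> structured JSON output.
# The field names and people are hypothetical, chosen only to illustrate the format.
demonstrations = [
    ("Ana rated the laptop 5 stars", {"person": "Ana", "rating": 5}),
    ("Ben gave the phone 2 stars", {"person": "Ben", "rating": 2}),
]

messages = [{
    "role": "system",
    "content": "Extract a JSON object with the fields shown in the examples.",
}]
for text, record in demonstrations:
    messages.append({"role": "user", "content": text})
    messages.append({"role": "assistant", "content": json.dumps(record)})

# The real query goes last; the model's reply should follow the demonstrated format.
messages.append({"role": "user", "content": "Chloe scored the headphones 4 stars"})
print(json.dumps(messages, indent=2))
```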
Counterexamples
Things that might seem like Few-Shot Prompting but are not:
- Zero-shot prompting (no examples)
- Fine-tuning on examples (modifies model weights)
- The model's own previous outputs (that's conversation history)
Relations
- overlapsWith in-context-learning (Few-shot prompting is how you trigger in-context learning)
- overlapsWith prompt (Few-shot examples are part of the prompt)
- overlapsWith prompt-template (Templates often include example slots; see the sketch below)
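As a small illustration of the prompt-template relation, a template can reserve a slot that few-shot examples are rendered into. The template text, placeholder names, and query below are assumptions for illustration, reusing the translation pairs from the Examples section.

```python
TEMPLATE = """Translate English to Spanish.

{examples}
EN: {query} -> ES:"""

def render(examples, query):
    """Fill the template's example slot and query slot."""
    example_block = "\n".join(f"EN: {en} -> ES: {es}" for en, es in examples)
    return TEMPLATE.format(examples=example_block, query=query)

print(render([("Hello", "Hola"), ("Goodbye", "Adiós")], "Good morning"))
```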