Few-shot Prompting
In one line
Few-shot prompting includes a small number of example inputs and outputs in the prompt itself, letting the LLM imitate the desired format or style without any retraining.
Going deeper
Few-shot prompting embeds 2–5 input/output examples directly in the prompt so the model mimics the pattern. Nothing is being retrained — the model just imitates what it sees in context. This behaviour is sometimes called in-context learning.
For marketers, it is often the cheapest way to enforce brand voice quickly. Three or four well-chosen examples can transform output style, often making fine-tuning unnecessary.
The downsides are real, too. Long example sets add token cost and consume context-window space. Knowing when few-shot is enough versus when you genuinely need fine-tuning is a practical judgement call.
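The pattern can be sketched in a few lines. Below is a minimal, illustrative example of assembling a few-shot prompt as a chat-style message list; the example copy, the instruction text, and the `build_few_shot_messages` helper are all assumptions for illustration, not a specific provider's API.

```python
# Minimal sketch of few-shot prompting for brand voice.
# Each (input, ideal output) pair is embedded in the prompt so the
# model imitates the pattern in context -- nothing is retrained.

def build_few_shot_messages(examples, new_input, instruction):
    """Assemble a chat-style message list: an instruction, then
    alternating example user/assistant turns, then the real input."""
    messages = [{"role": "system", "content": instruction}]
    for user_text, ideal_output in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": ideal_output})
    messages.append({"role": "user", "content": new_input})
    return messages

# Two or three well-chosen examples are usually enough to set the style.
examples = [
    ("Announce our spring sale.",
     "Spring has sprung, and so have our prices: 20% off everything, this week only."),
    ("Introduce the new loyalty programme.",
     "Good news for regulars: every purchase now earns points you can actually spend."),
]

messages = build_few_shot_messages(
    examples,
    new_input="Announce free shipping over $50.",
    instruction="You write short, upbeat product announcements in our brand voice.",
)
```

A message list like this can then be passed to any chat-completion endpoint that accepts role/content pairs; the model's reply to the final user turn will tend to match the tone of the example outputs.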
Related terms
Prompt Engineering
Prompt engineering is the practice of crafting inputs that steer an LLM toward better outputs — a way to dramatically change result quality without retraining the model.
System Prompt
A system prompt is the instruction sent to an LLM before any user message, defining the assistant's role, tone and rules — effectively the AI product's character.
Chain-of-Thought
Chain-of-Thought (CoT) prompting asks the LLM to walk through intermediate reasoning steps before giving a final answer — a simple change that meaningfully improves accuracy on harder problems.
Fine-tuning
Fine-tuning takes an already pretrained LLM and trains it further on a narrower dataset to specialise it for a domain, task or voice — the most common path for adapting an LLM to your own data.
Context Window
The context window is the maximum number of tokens an LLM can take in at once — it defines how much content the model can consider in a single prompt.