Few-Shot Learning

AI & MACHINE LEARNING

Quick Definition

Few-shot prompting includes 2-10 example input/output pairs in the prompt before the actual question. The model uses the examples as a pattern for what the response should look like. Few-shot is especially effective for tasks where the output format is specific (extract these fields, classify into these labels, format as this JSON shape) or where zero-shot produces inconsistent results.

How it works

A typical few-shot prompt alternates "Input: ... / Output: ..." pairs for each example, then ends with a final "Input:" and a blank "Output:" for the model to fill in. The model continues the pattern. Quality and diversity of examples matter: pick examples that cover the variation in your real data.
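A minimal sketch of assembling that pattern. The sentiment examples and labels here are illustrative placeholders, not from any real dataset:

```python
# Sketch: building a few-shot classification prompt.
# Examples and the query are made up for illustration.
examples = [
    ("The checkout flow is broken again", "negative"),
    ("Love the new dashboard layout", "positive"),
    ("Docs are fine, nothing special", "neutral"),
]

def build_few_shot_prompt(examples, query):
    """Alternate Input:/Output: pairs, then leave the final
    Output: blank so the model completes the pattern."""
    lines = []
    for text, label in examples:
        lines.append(f"Input: {text}")
        lines.append(f"Output: {label}")
    lines.append(f"Input: {query}")
    lines.append("Output:")
    return "\n".join(lines)

prompt = build_few_shot_prompt(examples, "Support got back to me in minutes")
print(prompt)
```

Swapping in your own examples and sending `prompt` to any completion endpoint is all few-shot requires; there is no special API.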

Few-shot tradeoffs: more examples = better consistency but more tokens consumed per call. For high-volume use cases, few-shot with 5 examples per call burns serious tokens; fine-tuning the behavior into model weights becomes worth considering.
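The token math is worth running before you commit. A back-of-envelope sketch, where the per-example token count and call volume are assumptions for illustration (measure with your provider's tokenizer for real numbers):

```python
# Back-of-envelope few-shot overhead at volume.
# All figures below are assumed for illustration only.
tokens_per_example = 60     # assumed average tokens per input/output pair
num_examples = 5
calls_per_day = 100_000

overhead_per_call = tokens_per_example * num_examples
daily_overhead = overhead_per_call * calls_per_day

print(overhead_per_call)  # 300 extra prompt tokens on every call
print(daily_overhead)     # 30,000,000 extra tokens per day
```

At that scale the examples alone cost 30M tokens a day, which is the point where baking the behavior into model weights via fine-tuning starts to pay for itself.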

Why it matters

Few-shot is the workhorse of practical LLM usage. It is cheaper than fine-tuning, more reliable than zero-shot, and easy to iterate on. Most production prompts converge on few-shot with carefully-curated examples.

Where you'll see this on TerminalFeed

The Recipes section includes paste-ready few-shot prompt templates for common API tasks.