Zero-Shot Learning

AI & MACHINE LEARNING

Quick Definition

Zero-shot means asking a model to perform a task from a description alone, with no examples. "Translate this to French" or "Classify this review as positive or negative," followed by just the input, is a zero-shot prompt. Modern frontier models handle a wide range of tasks zero-shot, generalizing from patterns absorbed during pretraining even when the exact task never appeared explicitly in their training data.

How it works

Zero-shot capability emerges from large-scale pretraining on diverse text. The model has seen enough variations of "do X" + "input" + "output" patterns that it generalizes to new instructions even without specific examples. The clearer the instruction, the better the zero-shot result.
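A zero-shot prompt is just that instruction-plus-input pattern with no worked examples. A minimal sketch, assuming a hypothetical `call_llm` client function (not a real library; any chat-completion API would slot in here):

```python
def build_zero_shot_prompt(instruction: str, text: str) -> str:
    """Combine a task description and the input into a single prompt.

    No examples are included -- the model must act on the
    instruction alone, which is what makes this zero-shot.
    """
    return f"{instruction}\n\nInput: {text}\nOutput:"


prompt = build_zero_shot_prompt(
    "Classify the following review as positive or negative.",
    "The battery died after two hours. Very disappointed.",
)
# response = call_llm(prompt)  # hypothetical API call -- swap in your client
```

The clearer and more specific the instruction string, the better the zero-shot result tends to be.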

When zero-shot fails, the next step is few-shot prompting (showing the model a few worked examples in the prompt); after that, fine-tuning (updating the model's weights on task-specific data).
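The escalation from zero-shot to few-shot only changes the prompt, not the model. A sketch of the same hypothetical setup, now with in-prompt examples:

```python
def build_few_shot_prompt(
    instruction: str,
    examples: list[tuple[str, str]],
    text: str,
) -> str:
    """Like a zero-shot prompt, but with (input, output) examples
    inserted before the real input to demonstrate the task."""
    shots = "\n\n".join(f"Input: {inp}\nOutput: {out}" for inp, out in examples)
    return f"{instruction}\n\n{shots}\n\nInput: {text}\nOutput:"


prompt = build_few_shot_prompt(
    "Classify the following review as positive or negative.",
    [
        ("Absolutely love it, works perfectly.", "positive"),
        ("Broke on the first day.", "negative"),
    ],
    "The battery died after two hours. Very disappointed.",
)
```

Only if this still underperforms does fine-tuning, which requires task-specific data and training infrastructure, become worth the cost.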

Why it matters

Zero-shot is the simplest, fastest, and cheapest way to use an LLM. For a huge fraction of tasks, a well-written zero-shot prompt is all you need. Reaching for fine-tuning before exhausting zero-shot and few-shot is usually premature.

Where you'll see this on TerminalFeed

The AI Agents article discusses when to escalate from zero-shot prompting to richer techniques.