Chain-of-Thought (CoT) prompting tells the model to "think step by step" before giving its final answer. Instead of jumping straight from question to conclusion, the model writes out its reasoning. The intermediate steps cost extra tokens, but they often yield dramatically better answers on math, logic, code, and other multi-step tasks. CoT is one of the most reliable techniques in prompt engineering.
The simplest form is to add "Let's think step by step" or "Reason through this carefully before answering" to a prompt. More sophisticated variants include few-shot CoT (showing the model a few examples of step-by-step reasoning), self-consistency (sample multiple CoT chains and majority-vote the answer), and tree-of-thought (branch reasoning, explore alternatives, backtrack).
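Self-consistency is easy to sketch in code. The harness below samples several chains and majority-votes the extracted answers; `sample_chain` and `fake_sampler` are hypothetical stand-ins for a real model call (which would use temperature > 0 to get diverse chains), not any particular API.

```python
import random
import re
from collections import Counter

def self_consistency(sample_chain, question, n=5):
    """Sample n chain-of-thought completions and majority-vote the final answer.

    `sample_chain` is any callable that takes a prompt and returns one
    reasoning chain ending in a line like "Answer: <value>".
    """
    prompt = f"{question}\nLet's think step by step."
    answers = []
    for _ in range(n):
        chain = sample_chain(prompt)
        # Pull the final answer out of the chain; ignore malformed samples.
        match = re.search(r"Answer:\s*(.+)", chain)
        if match:
            answers.append(match.group(1).strip())
    if not answers:
        return None
    # Majority vote: the most common answer across chains wins.
    return Counter(answers).most_common(1)[0][0]

# Hypothetical stub standing in for a real LLM call: it simulates noisy
# sampling where the correct answer "12" appears in most chains.
def fake_sampler(prompt):
    answer = random.choice(["12", "12", "12", "11"])
    return f"Step 1: reason about the question.\nStep 2: compute.\nAnswer: {answer}"

random.seed(0)  # make the demo reproducible
print(self_consistency(fake_sampler, "What is 3 * 4?", n=7))
```

The key design point is that self-consistency only needs the *final* answers to agree; the reasoning chains themselves can differ, which is exactly what makes majority voting more robust than a single greedy chain.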
Modern reasoning-tuned models (Claude with extended thinking, OpenAI o-series, DeepSeek R1) have CoT baked into the model rather than the prompt: they produce a long internal thought trace that is hidden from the user but informs the visible answer.
CoT was the discovery that turned LLMs from pattern-matchers into something resembling reasoners. It is the single most important prompting technique. Reasoning models that internalize CoT now outperform their non-reasoning counterparts on math and code by 20-50 percentage points on standard benchmarks.
The AI Agents article covers how agents use CoT to plan multi-step tasks before invoking tools.