ReAct (short for "Reason + Act", from the paper that introduced it) is a pattern where an agent alternates between explicit reasoning steps and tool actions. Rather than just "agent calls tool, gets result, calls another tool", the agent writes out a thought ("I should check X before deciding Y"), then takes an action, observes the result, writes another thought, takes another action. The reasoning steps stay visible in the conversation, which helps both with debugging and with the model itself maintaining coherent state.
A ReAct prompt typically cycles through three blocks: Thought: (the agent's reasoning), Action: (a tool to call), and Observation: (the tool result). The agent loops through these until it emits a Final Answer: block. Most modern agent frameworks bake this into their prompt templates without exposing it to end users, but the pattern is still there under the hood.
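The scaffolding is usually just a prompt template. A minimal sketch of what such a template might look like, assuming a single hypothetical lookup tool (the wording is illustrative, not taken from the paper or any particular framework):

```python
# Illustrative ReAct-style prompt template; the exact wording and the
# "lookup" tool are assumptions for the sake of example.
REACT_PROMPT = """Answer the question using the tools below.
Tools: {tool_descriptions}

Use this format:
Thought: reason about what to do next
Action: tool_name[input]
Observation: (the tool result will appear here)
... (Thought/Action/Observation can repeat)
Thought: I now know the answer
Final Answer: the answer to the question

Question: {question}"""

prompt = REACT_PROMPT.format(
    tool_descriptions="lookup[query] - look up a fact",
    question="What is the capital of France?",
)
```

The Observation: lines are filled in by the runtime, not the model; the loop stops generation at each Action:, runs the tool, and appends the result before asking the model to continue.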
ReAct works well because it gives the model space to plan and self-correct. Without explicit reasoning, models tend to make tool calls reflexively, without checking whether the previous result actually answered the question.
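To make the loop concrete, here is a toy driver that parses Action: lines, runs the named tool, and feeds the Observation: back into the transcript. The lookup tool and the scripted stand-in for the LLM are hypothetical, included only so the sketch is runnable end to end:

```python
import re

# Hypothetical tool registry; a real agent would register real tools here.
TOOLS = {
    "lookup": lambda q: {"capital of France": "Paris"}.get(q, "unknown"),
}

# Scripted stand-in for an LLM, so the loop runs without an API call.
SCRIPT = iter([
    "Thought: I should look up the capital before answering.\n"
    "Action: lookup[capital of France]",
    "Thought: The observation answers the question.\n"
    "Final Answer: Paris",
])

def fake_llm(transcript: str) -> str:
    return next(SCRIPT)

def react_loop(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        step = fake_llm(transcript)
        transcript += "\n" + step
        # Stop when the model declares a final answer.
        if "Final Answer:" in step:
            return step.split("Final Answer:", 1)[1].strip()
        # Otherwise parse the Action, run the tool, append the Observation.
        m = re.search(r"Action: (\w+)\[(.*)\]", step)
        if m:
            tool, arg = m.group(1), m.group(2)
            transcript += f"\nObservation: {TOOLS[tool](arg)}"
    return "no answer"

answer = react_loop("What is the capital of France?")
```

The key design choice is that the reasoning, the tool call, and the result all accumulate in one transcript, so each new Thought: is conditioned on everything observed so far.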
ReAct was one of the earliest patterns that demonstrated LLMs could do multi-step planning when given the right scaffolding. Almost every modern agent uses some descendant of it. Understanding ReAct is foundational to understanding why agents work.
The How AI Agents Browse blog covers ReAct-style agents reading and acting on web content.