Agent Loop

AI & MACHINE LEARNING

Quick Definition

The agent loop is the basic structure every autonomous AI agent runs: read current state, decide next action (often a tool call), execute the action, observe the result, decide whether the task is complete. Loop until done or until a stop condition (max iterations, time budget, error threshold) trips. The simplicity is the point: complex agent behavior emerges from a tight inner loop with good tool definitions and a capable model.

How it works

A typical loop iteration: (1) construct a prompt from the system prompt + conversation history + tool results so far; (2) send it to the LLM along with the available tool definitions; (3) parse the response as either a final answer or a tool call; (4) if a tool call, execute it and append the result to the history; (5) check stop conditions (final answer, iteration cap, error); (6) loop or return.
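The steps above can be sketched in a few lines of Python. This is a minimal illustration, not any framework's real API: `call_model` is a stub standing in for an LLM client, and `lookup_weather` is a toy tool.

```python
def lookup_weather(city: str) -> str:
    """Toy tool: pretend to fetch the weather for a city."""
    return f"Sunny in {city}"

TOOLS = {"lookup_weather": lookup_weather}

def call_model(history):
    """Stub model: request a tool call once, then emit a final answer."""
    if not any(m["role"] == "tool" for m in history):
        return {"type": "tool_call", "name": "lookup_weather",
                "args": {"city": "Oslo"}}
    return {"type": "final", "text": history[-1]["content"]}

def run_agent(task: str, max_iters: int = 10):
    history = [{"role": "system", "content": "You are a helpful agent."},
               {"role": "user", "content": task}]
    for _ in range(max_iters):              # (5) stop condition: iteration cap
        response = call_model(history)      # (1)+(2) build prompt, call model
        if response["type"] == "final":     # (3) parse: final answer -> done
            return response["text"]
        tool = TOOLS[response["name"]]      # (3) parse: tool call
        result = tool(**response["args"])   # (4) execute the tool
        history.append({"role": "tool", "content": result})  # append result
    raise RuntimeError("iteration cap reached without a final answer")
```

With a real model client in place of the stub, the structure is unchanged: the entire agent is this loop plus the tool registry.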

Frameworks like LangGraph, OpenAI's Agents SDK, and Anthropic's MCP-based agents all implement this loop with varying levels of abstraction. The key engineering decisions: what tools to expose, how to structure the system prompt, what counts as "done", and how to handle errors and retries.
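"What tools to expose" usually comes down to a JSON-schema-style tool definition handed to the model. The exact field names vary by SDK; the shape below is a hypothetical example in the common style, not any specific vendor's schema.

```python
# Hypothetical tool definition in the JSON-schema style most agent
# SDKs accept; field names differ between providers.
read_file_tool = {
    "name": "read_file",
    "description": "Read a UTF-8 text file and return its contents.",
    "input_schema": {
        "type": "object",
        "properties": {
            "path": {
                "type": "string",
                "description": "Path of the file to read",
            },
        },
        "required": ["path"],
    },
}
```

The description fields do real work: they are the only documentation the model sees, so vague descriptions lead directly to wrong tool calls.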

Why it matters

Agent loops are how language models become persistent actors. Without a loop, an LLM can only respond to one prompt. With a loop, it can complete multi-hour tasks, browse the web, edit code, and chain dozens of tool calls. The quality of the loop and its bounding conditions determines whether the agent is useful or runs amok.

Where you'll see this on TerminalFeed

The TerminalFeed world-deltas endpoint is designed specifically for agent loops: agents poll it with a "since" timestamp and receive only new events, perfect for the observe step of the loop.
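A "since"-cursor poll looks roughly like the sketch below. The endpoint URL and response shape are assumptions for illustration, not the documented TerminalFeed API; `fetch(since)` stands in for the HTTP GET and returns a list of events, each carrying a `ts` timestamp.

```python
import time

def poll_deltas(fetch, since=0, interval=0.0, max_polls=3):
    """Repeatedly fetch events newer than `since`, advancing the cursor.

    `fetch(since)` is a stand-in for a GET against a world-deltas-style
    endpoint; it returns only events with ts > since.
    """
    seen = []
    for _ in range(max_polls):
        for event in fetch(since):
            seen.append(event)
            since = max(since, event["ts"])  # advance the cursor
        time.sleep(interval)                 # back off between polls
    return seen, since
```

Because the cursor only moves forward, the agent's observe step sees each event exactly once, even across restarts if `since` is persisted.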