Function calling (also called tool calling) is the mechanism by which a language model requests that an external function be invoked on its behalf. The developer defines functions with names, descriptions, and parameter schemas (usually JSON Schema). The model, when prompted with a user query and the function list, can output a structured JSON payload naming the function and its arguments. The application code then actually executes the function and feeds the result back to the model as context for the next turn.
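A minimal sketch of the two halves described above: a tool definition with a JSON Schema for its parameters, and the kind of structured payload the model emits in place of free-form text. The tool name, schema shape, and payload format here are illustrative, not any one provider's exact wire format.

```python
import json

# Hypothetical tool definition: a name, a description the model reads to
# decide when the tool applies, and a JSON Schema for its parameters.
get_price_tool = {
    "name": "get_stock_price",
    "description": "Look up the latest trading price for a ticker symbol.",
    "input_schema": {
        "type": "object",
        "properties": {
            "ticker": {"type": "string", "description": "e.g. 'AAPL'"},
        },
        "required": ["ticker"],
    },
}

# Instead of prose, the model can answer with a structured payload
# naming the function and the arguments it wants to pass.
model_output = '{"name": "get_stock_price", "arguments": {"ticker": "AAPL"}}'
call = json.loads(model_output)
print(call["name"])                 # get_stock_price
print(call["arguments"]["ticker"])  # AAPL
```

The application, not the model, owns the actual execution: the payload is just a request.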
The function definitions are passed as part of the system prompt or via a dedicated tools field in the API request. The model is trained to recognize when a query needs an external action (look up a price, send an email, run a calculation) and to format its response as a tool call rather than free-form text, which the application then parses and executes.
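The application side of that loop can be sketched as a small dispatcher: a registry mapping tool names to real functions, and a handler that parses the model's call, executes it, and packages the result as a message for the next turn. The function, price data, and message shape below are stand-ins, not a real API.

```python
import json

# Stand-in for a real price lookup; in practice this would hit an API.
def get_stock_price(ticker: str) -> float:
    prices = {"AAPL": 189.5}
    return prices[ticker]

# Map tool names (as the model knows them) to actual callables.
TOOL_REGISTRY = {"get_stock_price": get_stock_price}

def execute_tool_call(raw_call: str) -> dict:
    # Parse the model's structured output, run the named function with
    # its arguments, and return a tool-result message for the next turn.
    call = json.loads(raw_call)
    fn = TOOL_REGISTRY[call["name"]]
    result = fn(**call["arguments"])
    return {"role": "tool", "name": call["name"], "content": json.dumps(result)}

msg = execute_tool_call('{"name": "get_stock_price", "arguments": {"ticker": "AAPL"}}')
print(msg["content"])  # 189.5
```

Keeping execution behind an explicit registry also gives the application a natural place to validate arguments before running anything the model asked for.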
Modern frontier models (Claude, GPT, Gemini) handle multi-step tool chains: call A, get result, decide based on result whether to call B, etc. This is the foundation of AI agents.
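The multi-step chain is just the dispatch pattern run in a loop: keep sending the growing conversation to the model, execute whatever tool it requests, and stop when it replies in plain text. Here a stub stands in for the real model API, so the control flow is visible without any provider SDK; all names and values are hypothetical.

```python
# Stub standing in for a real model call: it first requests a tool,
# then, once a tool result is in context, answers in text.
def stub_model(messages):
    if not any(m["role"] == "tool" for m in messages):
        return {"type": "tool_call", "name": "get_price",
                "arguments": {"ticker": "AAPL"}}
    return {"type": "text", "content": "AAPL is trading at 189.5."}

TOOLS = {"get_price": lambda ticker: 189.5}

def run_agent(user_query: str) -> str:
    messages = [{"role": "user", "content": user_query}]
    while True:
        reply = stub_model(messages)
        if reply["type"] == "text":
            return reply["content"]  # the model is done; surface the answer
        # Execute the requested tool and feed the result back as context,
        # letting the model decide its next step based on what came back.
        result = TOOLS[reply["name"]](**reply["arguments"])
        messages.append({"role": "tool", "name": reply["name"],
                         "content": str(result)})

print(run_agent("What's AAPL at?"))  # AAPL is trading at 189.5.
```

Real agent frameworks add limits (max turns, timeouts, error handling) around exactly this loop.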
Function calling is what turned LLMs from chat toys into useful agents. Without it, an LLM can only output text. With it, an LLM can read data, take actions, integrate with any API, and orchestrate workflows. Every modern agent framework depends on it.
TerminalFeed exposes 30+ free APIs and 12 premium endpoints, all designed to be called by function-calling agents. The /api/llm-tools endpoint serves the entire catalog as ready-to-use function definitions in OpenAI, Anthropic, and raw JSON Schema formats.
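A hedged sketch of consuming such a catalog: fetch the definitions once at startup, then pick the provider-specific format to pass into an API request's tools field. The base URL, response shape, and sample entry below are all assumptions for illustration; only the /api/llm-tools path comes from the text above.

```python
import json
import urllib.request

# Hypothetical base URL; the real host is not specified here.
CATALOG_URL = "https://example.com/api/llm-tools"

def fetch_tool_catalog(url: str = CATALOG_URL) -> dict:
    # One request yields ready-to-use definitions for every endpoint.
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Assumed response shape: one key per supported definition format.
sample_catalog = {
    "anthropic": [
        {
            "name": "example_endpoint",  # hypothetical entry
            "description": "A hypothetical TerminalFeed endpoint.",
            "input_schema": {"type": "object", "properties": {}},
        }
    ]
}

def tools_for(catalog: dict, fmt: str) -> list:
    # Select the provider-specific definitions to pass straight into
    # an API request's tools field.
    return catalog[fmt]

tools = tools_for(sample_catalog, "anthropic")
print(tools[0]["name"])  # example_endpoint
```

Serving definitions in multiple formats means the same catalog can back OpenAI-style, Anthropic-style, or custom agents without per-provider translation code.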