Model Context Protocol (MCP) is an open standard introduced by Anthropic in late 2024 that defines how AI agents and the tools they use talk to each other. An MCP server exposes tools (callable functions), resources (read-only data), and prompts (templated instructions). Any MCP-compatible client (Claude Desktop, Claude Code, Cursor, custom agents) can connect, list what is available, and invoke tools using a standard JSON-RPC schema. MCP is to AI agents what USB is to peripherals: a single plug that works across hosts.
An MCP server runs as a separate process or hosted endpoint and exposes a small JSON-RPC API. The client calls tools/list to discover available tools, resources/list for readable data, then invokes tools/call with a tool name and arguments. Servers can run locally over stdio (a child process) or remotely over HTTP with Server-Sent Events. The same server definition works for both; only the transport changes.
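The wire format is plain JSON-RPC 2.0, so the discovery-then-invoke flow is easy to sketch. Below is a minimal illustration of the request envelopes a client sends; the tool name and arguments are placeholders, not a real server's schema.

```python
import json

def rpc(method, params=None, id=1):
    """Build a JSON-RPC 2.0 request envelope as an MCP client sends it."""
    msg = {"jsonrpc": "2.0", "id": id, "method": method}
    if params is not None:
        msg["params"] = params
    return msg

# 1. Discover what the server offers.
list_req = rpc("tools/list", id=1)

# 2. Invoke a tool by name with structured arguments
#    (the tool name here is illustrative).
call_req = rpc("tools/call", {"name": "get_price", "arguments": {"symbol": "BTC"}}, id=2)

print(json.dumps(call_req, indent=2))
```

Over stdio these envelopes are written to the child process's stdin, one message per line; over HTTP they are POSTed to the server's endpoint. The envelopes themselves are identical.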
The protocol also handles authentication (bearer tokens, OAuth), capability negotiation (client and server declare what each supports when the connection is initialized), and structured errors. Most MCP servers are tiny: a few hundred lines wrapping an existing API. The agent does not need to know how the underlying service works, only the tool schema.
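To make "a few hundred lines" concrete, here is a stdlib-only sketch of the shape of a stdio server: a tool table, a dispatcher for `tools/list` and `tools/call`, and a loop reading one JSON-RPC message per line. This is a toy illustrating the protocol shape, not the official SDK (which also handles initialization, capabilities, and resources); the `add` tool is invented for the example.

```python
import json
import sys

# Hypothetical tool table: one tool with a JSON Schema for its input.
TOOLS = {
    "add": {
        "description": "Add two numbers",
        "inputSchema": {
            "type": "object",
            "properties": {"a": {"type": "number"}, "b": {"type": "number"}},
            "required": ["a", "b"],
        },
    }
}

def handle(request):
    """Dispatch one JSON-RPC request and return the response envelope."""
    method, rid = request["method"], request.get("id")
    if method == "tools/list":
        result = {"tools": [{"name": n, **spec} for n, spec in TOOLS.items()]}
    elif method == "tools/call" and request["params"]["name"] == "add":
        args = request["params"]["arguments"]
        result = {"content": [{"type": "text", "text": str(args["a"] + args["b"])}]}
    else:
        return {"jsonrpc": "2.0", "id": rid,
                "error": {"code": -32601, "message": f"unknown method or tool"}}
    return {"jsonrpc": "2.0", "id": rid, "result": result}

def main():
    # stdio transport: the client spawns this process and exchanges
    # one JSON-RPC message per line over stdin/stdout.
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)

if __name__ == "__main__":
    main()
```

Swapping stdio for HTTP means replacing only `main()`; `handle()` and the tool table stay the same, which is what "only the transport changes" means in practice.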
Before MCP, every agent framework had its own tool-definition format (LangChain tools, OpenAI functions, Anthropic tool use, custom JSON), so exposing one service to N frameworks meant writing N adapters. MCP collapses that to one. Major dev tools (Cursor, Claude Code, Zed, Continue), enterprise platforms (GitHub, Slack, Notion), and a fast-growing ecosystem of community servers now speak it. For developers, building one MCP server makes your service callable by every modern AI agent.
TerminalFeed exposes a hosted MCP server with 27 tools (8 free, 19 premium). The free tools cover real-time data: BTC price, the Fear & Greed index, predictions, earthquakes, and Hacker News. Premium tools require a bearer token. See the MCP page for paste-ready config blocks.
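For a client that is not one of the prebuilt integrations, a premium call is just the same `tools/call` envelope plus an `Authorization` header. The sketch below only constructs the request; the endpoint URL, token, and tool name are placeholders, so copy the real values from the MCP page before sending.

```python
import json
import urllib.request

# Placeholders: substitute the real endpoint, token, and tool name
# from the TerminalFeed MCP page.
ENDPOINT = "https://example.com/mcp"
TOKEN = "YOUR_BEARER_TOKEN"

body = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "some_premium_tool", "arguments": {}},
}).encode()

req = urllib.request.Request(
    ENDPOINT,
    data=body,
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {TOKEN}",
    },
)
# urllib.request.urlopen(req) would send it; omitted in this sketch.
```

Free tools take the same request minus the `Authorization` header.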