llms.txt is an emerging standard for a plain-text file placed at a website's root (at /llms.txt) that tells AI agents what the site offers, which APIs are available, and how to interact with the content. Think of it as robots.txt, but for large language models.
The file is written in Markdown and contains a structured summary of the website: what it does, which pages are most important, what APIs it exposes, and any special instructions for AI consumption. When an AI agent arrives at a new website, it can read llms.txt to quickly understand what is available without crawling every page.
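The discovery step described above can be sketched in a few lines of Python. This is a minimal, hypothetical example, not part of any standard client: the timeout value and the set of errors handled are assumptions.

```python
import urllib.request
import urllib.error

def fetch_llms_txt(base_url, timeout=10):
    """Return the contents of <base_url>/llms.txt, or None if unavailable.

    An agent would typically try this once, before crawling, and fall
    back to ordinary crawling when the file does not exist.
    """
    url = base_url.rstrip("/") + "/llms.txt"
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.read().decode("utf-8")
    except (urllib.error.URLError, TimeoutError, UnicodeDecodeError):
        # Missing file, unreachable host, or non-text content: no llms.txt.
        return None
```

Returning None rather than raising keeps the happy path simple: the absence of an llms.txt file is an expected outcome, not an error.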
A typical llms.txt file includes a title, a brief description of the site, a list of key pages with short summaries, and links to API documentation or OpenAPI specs. Some sites also include an llms-full.txt with more detailed content for agents that want deeper context.
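To make the structure above concrete, here is a hypothetical llms.txt (embedded as a string) together with a minimal parser an agent might use to pull out the title, the summary, and the linked pages. The site name, URLs, and descriptions are invented for illustration; the layout follows the proposed convention of an H1 title, a blockquote summary, and H2 sections containing Markdown link lists.

```python
import re

# An invented llms.txt for a fictional site, following the proposed layout.
SAMPLE = """\
# Example Store

> An online shop with a public product-search API.

## Key pages

- [Product search](https://example.com/search): full-text search over the catalog
- [API docs](https://example.com/developers): REST endpoints and OpenAPI spec

## Optional

- [Changelog](https://example.com/changelog): recent site and API changes
"""

def parse_llms_txt(text):
    """Return (title, summary, links) from an llms.txt Markdown document.

    links is a list of (name, url, description) tuples taken from
    list items of the form "- [name](url): description".
    """
    title = re.search(r"^# (.+)$", text, re.MULTILINE)
    summary = re.search(r"^> (.+)$", text, re.MULTILINE)
    links = re.findall(r"^- \[([^\]]+)\]\(([^)]+)\)(?::\s*(.*))?$",
                       text, re.MULTILINE)
    return (
        title.group(1) if title else None,
        summary.group(1) if summary else None,
        links,
    )

title, summary, links = parse_llms_txt(SAMPLE)
```

Because the file is plain Markdown, an agent that wants no structure at all can also just pass the whole text to a model as context; the parser is only needed when the links should drive further fetching.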
The format was proposed as a lightweight way to bridge the gap between websites designed for human readers and the large language models that increasingly browse and consume web content. It does not replace sitemaps or robots.txt. Instead, it adds a semantic layer specifically for AI discovery. As AI agents become more common, having an llms.txt file helps ensure your site and its data are accessible to them.
AI agents are a growing source of web traffic. Websites that make themselves easily discoverable to AI systems can attract more API usage, have their content cited more often, and appear in AI-generated answers and workflows. llms.txt is a simple, low-effort way to participate in this new ecosystem.
TerminalFeed publishes its own llms.txt file listing all API endpoints and site features. Combined with our OpenAPI spec and the /developers documentation, this makes TerminalFeed fully discoverable by AI agents. Read more in our article on building websites for humans and AI agents.