A decentralized machine intelligence network where subnets compete to provide AI services like inference, search, and image generation.
Bittensor is a network for decentralized AI infrastructure. The protocol coordinates a set of "subnets," each focused on a specific machine intelligence task: language model inference, embedding generation, image synthesis, scraping, prediction, and many others. Within each subnet, "miners" provide the actual model inference and "validators" score miners' outputs for quality. TAO is emitted to subnet participants based on validator-determined contributions, similar to how Bitcoin issues BTC to miners.
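The validator loop described above can be sketched in a few lines: query miners, score each response, and normalize the scores into weights. This is an illustrative toy, not the actual Bittensor SDK; the function names and the length-based scoring heuristic are made up, since each subnet defines its own quality metric.

```python
# Toy sketch of validator scoring (illustrative names, not the Bittensor API).

def score_response(prompt: str, response: str) -> float:
    """Placeholder quality metric; real subnets define their own scoring."""
    return min(len(response) / 100.0, 1.0)  # toy heuristic: longer = better, capped

def set_weights(responses: dict[int, str], prompt: str) -> dict[int, float]:
    """Score each miner's response and normalize scores into emission weights."""
    scores = {uid: score_response(prompt, resp) for uid, resp in responses.items()}
    total = sum(scores.values()) or 1.0
    return {uid: s / total for uid, s in scores.items()}

weights = set_weights(
    {1: "short", 2: "a much longer, detailed answer " * 4},
    "explain TAO emission",
)
```

The normalized weights sum to 1, so a miner's share of the subnet's emission is directly its share of validator-assigned quality.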
The Bittensor blockchain runs on Substrate (the same framework as Polkadot). Roughly 7,200 TAO are emitted per day (about 1 TAO per ~12-second block), split across subnets in proportion to validator stake weight. Within a subnet, validators query miners with task-specific prompts, score the responses, and the subnet's emission is distributed proportionally. Launching a subnet requires locking TAO as collateral; subnets that produce useful work and attract validator stake earn larger emissions over time.
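The emission arithmetic above can be made concrete with a back-of-the-envelope calculation. The subnet and miner weights below are made-up example numbers, and the proportional splits are a simplification of the actual chain logic.

```python
# Back-of-the-envelope TAO emission: ~1 TAO per ~12-second block.
BLOCK_TIME_S = 12
BLOCKS_PER_DAY = 24 * 60 * 60 // BLOCK_TIME_S      # 7200 blocks/day
DAILY_EMISSION_TAO = 1.0 * BLOCKS_PER_DAY          # ~7200 TAO/day

def split_proportionally(total: float, weights: dict[str, float]) -> dict[str, float]:
    """Divide `total` among keys in proportion to their weights."""
    norm = sum(weights.values())
    return {k: total * w / norm for k, w in weights.items()}

# Example: three subnets with stake-derived weights (illustrative numbers).
subnet_emission = split_proportionally(
    DAILY_EMISSION_TAO, {"sn1": 0.5, "sn18": 0.3, "sn23": 0.2}
)
# Within one subnet, split its share among miners by validator scores.
miner_payouts = split_proportionally(
    subnet_emission["sn1"], {"miner_a": 0.7, "miner_b": 0.3}
)
```

With these numbers, subnet 1 receives 3,600 TAO for the day and miner_a earns 2,520 of it; the same proportional split is applied at both levels.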
Subnet 1 (text generation), Subnet 9 (training), Subnet 18 (Cortex.t for chat APIs), Subnet 21 (FileTAO for storage), Subnet 23 (NicheImage for image generation), and dozens of others provide actual AI services. Token holders can stake TAO to validators as a yield-bearing position. Some applications integrate Bittensor as an AI inference layer; the interoperability story is improving but remains less mature than centralized alternatives.
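The "yield-bearing position" from staking can be sketched as simple pro-rata arithmetic. All numbers here (the validator's daily emission, its take rate, the stake sizes) are invented for illustration; real yields vary with network conditions and validator performance.

```python
# Illustrative staking-yield arithmetic (all figures are made-up examples).

def staker_daily_tao(validator_daily_emission: float, take: float,
                     my_stake: float, total_stake: float) -> float:
    """TAO accruing to one staker: the validator keeps `take`,
    the remainder is split pro-rata across delegated stake."""
    to_stakers = validator_daily_emission * (1 - take)
    return to_stakers * my_stake / total_stake

# e.g. a validator earning 50 TAO/day with a 9% take; staker holds 1% of stake
daily = staker_daily_tao(50.0, 0.09, 1_000, 100_000)
apr = daily * 365 / 1_000  # annualized fraction on the 1,000 TAO staked
```

Note the yield is denominated in TAO; the fiat-denominated return depends on the token price over the holding period.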
Validator rewards depend on accurately scoring miner outputs, but scoring AI quality is itself difficult: subnets have repeatedly grappled with miners gaming the scoring function. The decentralization claim is real, but the network's actual usage by external applications is still small relative to centralized AI providers. The 21M TAO cap and Bitcoin-style halvings give it a clean monetary narrative; whether AI economics align with that narrative over the long term is an open question.
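The Bitcoin-style schedule mentioned above implies the emission rate halves each time half of the remaining supply toward the 21M cap is issued (thresholds at 10.5M, 15.75M, and so on). A rough sketch of that schedule, treating the ~7,200 TAO/day rate as constant between halvings:

```python
# Rough halving-schedule arithmetic toward the 21M TAO cap (approximation).
CAP = 21_000_000
DAILY = 7_200.0  # ~TAO/day before the first halving

def days_until(threshold: float) -> float:
    """Days of emission to reach `threshold` total issuance,
    halving the daily rate at each midpoint of the remaining supply."""
    issued, rate, days = 0.0, DAILY, 0.0
    next_halving = CAP / 2  # first halving at 10.5M TAO issued
    while issued < threshold:
        chunk = min(threshold, next_halving) - issued
        days += chunk / rate
        issued += chunk
        if issued >= next_halving:
            rate /= 2
            next_halving += (CAP - next_halving) / 2
    return days

print(round(days_until(10_500_000) / 365, 1))  # first halving after ~4 years
```

At this rate the first halving lands roughly four years in, mirroring Bitcoin's cadence, with each subsequent halving taking about as long to issue half as much.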
See inference and large language model for foundational AI concepts.