When we started TerminalFeed, the default approach was polling. A timer fires every few seconds, hits a REST endpoint, updates the UI. It's how most crypto sites run their tickers. It's simple, it's cacheable, and it doesn't require learning a new protocol.

We threw it out and rebuilt the Bitcoin ticker on WebSocket. This post is the decision record: what polling cost us, what WebSocket bought us, what we gave up, and where the fallback to polling still matters.

The Polling Version

Our first implementation was 10 lines of JavaScript. Every 3 seconds, fetch Binance's REST ticker endpoint, parse JSON, update the DOM. Shipped it, worked fine, moved on to other panels.

setInterval(async () => {
  const res = await fetch(
    'https://api.binance.com/api/v3/ticker/price?symbol=BTCUSDT'
  );
  const data = await res.json();
  priceEl.textContent = '$' + parseFloat(data.price).toFixed(2);
}, 3000);

Then we started looking at it on the dashboard for a few hours a day. Two problems emerged.

First: during volatile minutes, a lot happens in 3 seconds. When BTC moves $300 in 90 seconds, you see a staircase that updates every 3 seconds and feels jerky. The price appears to jump in chunks rather than flow. That's the visual tell that your ticker is polling.

Second: after we deployed to production and traffic grew, we started tripping into Cloudflare's caching behavior and Binance's client rate limits. At 3-second polls across a few hundred concurrent visitors, that's a lot of redundant outbound calls. We could proxy through our Worker and cache, but then the cache TTL becomes the ticker's real update frequency, and we'd just moved the lag one hop upstream.

The WebSocket Version

Binance publishes a WebSocket trade stream at wss://stream.binance.com:9443/ws/btcusdt@trade. One connection per client, server pushes a message for every trade that clears on the BTC/USDT book. Typical message rate during active trading is 5 to 30 events per second.
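Wiring that up takes only a few lines. A minimal sketch: the `p` (price) field is what Binance's trade stream actually sends, but the `btc-price` element id and the render wiring are illustrative, not our production code.

```javascript
// Pull the trade price out of one stream message; Binance sends
// the price as a string in the 'p' field of each trade event.
function parseTradePrice(raw) {
  const msg = JSON.parse(raw);
  const price = parseFloat(msg.p);
  return Number.isFinite(price) ? price : null; // null for pings/other frames
}

// Browser wiring (guarded so the sketch also loads outside a browser):
if (typeof window !== 'undefined') {
  const socket = new WebSocket('wss://stream.binance.com:9443/ws/btcusdt@trade');
  socket.onmessage = (event) => {
    const price = parseTradePrice(event.data);
    if (price !== null) {
      // 'btc-price' is a hypothetical element id for this sketch
      document.getElementById('btc-price').textContent = '$' + price.toFixed(2);
    }
  };
}
```

Note there is no subscribe call: the stream name in the URL path (`btcusdt@trade`) is the subscription.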

Compared to polling, the differences are immediate:

  Metric                       Polling (3s)                       WebSocket
  Latency                      0 to 3000 ms                       50 to 300 ms
  Updates per minute (active)  20                                 300 to 1800
  HTTP requests per hour       1200                               1 (initial handshake)
  Bandwidth per hour (idle)    ~300 KB                            ~20 KB
  Bandwidth per hour (active)  ~300 KB                            ~200 KB
  Proxy compatibility          Perfect                            Some proxies block upgrade
  Reconnection                 Trivial (every call independent)   Manual, with backoff

The latency win is the headline. A polling ticker averages 1.5 seconds of lag behind the market. A WebSocket ticker averages around 150ms. For a dashboard that a trader glances at, that's the difference between "real-time" and "slightly stale." For an idle browser tab, it doesn't matter at all.

The Throttle Question

Receiving 30 trade events per second is wasteful if you only paint 1 per second. Why not poll once per second then? Two reasons.

First: polling once per second at scale creates thundering-herd traffic patterns. Every connected client sends a request within a narrow time window. The server sees a traffic sawtooth. WebSocket traffic is steady because each client's messages arrive independently.

Second: a WebSocket stream paired with a 1-second paint throttle is effectively a 1-second ticker with 150ms worst-case lag instead of 1000ms worst-case lag. You get the visual smoothness of slower updates without the freshness penalty.

Our render logic: keep the latest price in a ref, paint at most once per second.

let pending = null;
let scheduled = false;

socket.onmessage = (event) => {
  const trade = JSON.parse(event.data);
  const price = parseFloat(trade?.p);
  if (!Number.isFinite(price)) return; // ignore pings and malformed frames

  pending = price;         // always keep the latest price
  if (scheduled) return;   // a paint is already queued for this second
  scheduled = true;
  setTimeout(() => {
    if (pending !== null) renderPrice(pending);
    pending = null;
    scheduled = false;
  }, 1000);
};

What WebSocket Cost Us

WebSocket isn't free. The trade-offs we accepted:

Reconnection logic is now our problem. Polling has no connection state. If one call fails, the next call tries again. WebSocket requires explicit reconnection with exponential backoff, or the ticker silently dies on the first network blip. We wrote ~20 lines of reconnect logic and accepted the maintenance burden.
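Those ~20 lines look roughly like this sketch. The delay schedule matches what we describe later (1s doubling to a 30s cap); the function names are illustrative, not our actual module:

```javascript
// Exponential backoff: 1s, 2s, 4s, 8s ... capped at 30s.
function backoffDelay(attempt, base = 1000, cap = 30000) {
  return Math.min(base * 2 ** attempt, cap);
}

// Reconnect forever; reset the attempt counter once a connection opens,
// so a blip after hours of uptime retries quickly again.
function connectWithRetry(url, onMessage, attempt = 0) {
  const ws = new WebSocket(url);
  ws.onopen = () => { attempt = 0; };
  ws.onmessage = onMessage;
  ws.onclose = () => {
    setTimeout(() => connectWithRetry(url, onMessage, attempt + 1),
               backoffDelay(attempt));
  };
  return ws;
}
```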

Proxy compatibility is worse. Some corporate proxies block the WebSocket upgrade handshake. Some older mobile networks don't route wss:// cleanly. In those environments, the ticker must fall back to REST polling. We wrote that fallback.

Server-side state. If you run the WebSocket yourself (rather than connecting directly to Binance), each connected client holds a connection open. Sticky sessions, memory per connection, and horizontal scaling all get harder. We sidestepped this by letting the browser talk directly to Binance's WebSocket, but if you're building a proxy, be aware.

Rate limit shape is different. Binance's WebSocket has its own connection limits (5 messages per second outbound from client, 300 connections per 5 minutes). These rarely bite for passive tickers but trip you up when you try to subscribe to too many symbols at once.

The Fallback Logic

We run both. WebSocket primary, REST polling as fallback. The decision flow:

  1. On page load, attempt WebSocket connection with 5-second timeout.
  2. If it opens, stop polling. Subscribe to btcusdt@trade.
  3. If it fails to open, or closes later, start polling at 3-second interval.
  4. On any fall-through, try to reconnect the WebSocket with exponential backoff (1s, 2s, 4s, 8s, up to 30s).
  5. When the WebSocket comes back, stop polling.

The user sees a ticker that just works. They don't see the protocol dance underneath.

On our Cloudflare Worker side, /api/btc-price is a REST-only endpoint (no WebSocket from the Worker because Workers don't hold long-lived outbound connections the same way). The Worker polls Binance and CoinCap and caches in memory. Client falls through to it when the direct WebSocket fails. Triple-redundant: direct Binance WS → our Worker (Binance REST) → our Worker (CoinCap fallback) → stale cache.

When Polling Actually Wins

WebSocket is not always the right answer. For data that moves slowly, or that nobody needs at sub-second freshness, polling is simpler, cheaper, and easier to cache.

Generalizing the Decision

The rule we follow for every panel on TerminalFeed: if the data changes faster than the polling interval we'd need to look responsive, use push. Otherwise poll.

BTC: changes every second or two during active trading. Push.

Stock quotes (15-minute delayed): changes every 15 minutes. Poll.

Earthquake alerts: arrive in unpredictable bursts. Push (via SSE from USGS).

GitHub trending repos: changes slowly. Poll every 5 minutes.

Wikipedia edits: firehose of events. Push (SSE).

If you pick push for push's sake, you pay in complexity for no user-visible benefit. If you pick poll for everything, you quietly lag every time-sensitive feed.

Closing Thought

A ticker is one of the most user-visible pieces of any data app. If it feels laggy, the whole site feels laggy. If it feels crisp, the site feels fast. Bitcoin is especially unforgiving because users who care about it have seen faster tickers elsewhere. WebSocket is the correct default for this use case, with REST fallback for robustness.

If you want the code walkthrough, see How to Add a Free Bitcoin Ticker to Your Website. For the protocol choice in general, WebSocket vs Server-Sent Events covers the broader trade-off.

See the Architecture Live

The BTC hero at the top of TerminalFeed runs this exact pattern. Open the dashboard and watch the number tick.
