
onionoo MCP: a query service for Tor relays

Onionoo is the Tor Project's metadata API for Tor relays and bridges. Anyone can query it over HTTP for the current state of the network: fingerprints, IPs, country and ASN, consensus flags (Guard, Exit, HSDir, and so on), bandwidth history, and uptime time series. The Tor Project's own Relay Search and most third-party Tor dashboards are powered by it.

onionoo-fastapi is a community-hosted wrapper around Onionoo. It exposes the same data through two interfaces that are easier for tooling and AI agents to consume:

  • Semantic HTTP API with OpenAPI / Swagger. Onionoo's compact field names (n, f, a, r, and friends) are remapped to readable ones (nickname, fingerprint, addresses, running), and the whole surface is described with a real OpenAPI document.
  • Model Context Protocol (MCP) server. Claude Desktop, Cursor, Claude Code, and other MCP clients can query relays through tool calls instead of hand-rolling HTTP requests.

The service does not store any Onionoo data. It only forwards requests and reshapes responses. The upstream is https://onionoo.torproject.org.
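
To make the renaming concrete, here is a minimal sketch of the idea in Python (using the requests library; the proxy's real mapping covers far more fields than the four shown):

import requests

# Fetch one relay's summary document straight from upstream Onionoo.
upstream = requests.get(
    "https://onionoo.torproject.org/summary",
    params={"search": "moria1", "limit": 1},
    timeout=30,
)
relay = upstream.json()["relays"][0]
print(relay)     # compact keys: {"n": "moria1", "f": "9695DFC3...", "a": [...], "r": True}

# The proxy's renaming boils down to a key mapping like this (illustrative subset):
RENAME = {"n": "nickname", "f": "fingerprint", "a": "addresses", "r": "running"}
readable = {RENAME.get(key, key): value for key, value in relay.items()}
print(readable)  # {"nickname": "moria1", "fingerprint": "9695DFC3...", "addresses": [...], "running": True}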

[Screenshot: Swagger UI for onionoo.anoni.net showing Onionoo FastAPI Proxy v1.0.0 with endpoints /healthz, /metrics, /v1/summary, /v1/details, /v1/bandwidth, /v1/weights, /v1/clients.]
The Swagger UI at onionoo.anoni.net/docs. Every /v1/* endpoint has a full schema and a Try-it-out button for ad-hoc testing.

API and MCP: a primer

API: a data interface for programs

An API (Application Programming Interface) is a standard way for programs to query data from each other. Onionoo itself is an API: you send an HTTP request (for example, "give me every running relay in Taiwan") and it returns JSON.

A few things to note about querying an API directly:

  • The specification is written for engineers — you need to know which endpoints exist, what parameters each takes, and what fields come back.
  • The answer is raw data. A typical Onionoo response is hundreds of relays' worth of detail; turning that into a trend or a conclusion means more code.
  • It works best when you already know exactly what to ask.
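
For concreteness, the "every running relay in Taiwan" example as a literal Onionoo query (a sketch in Python with requests; country and running are documented Onionoo parameters):

import requests

# "Give me every running relay in Taiwan" against upstream Onionoo.
resp = requests.get(
    "https://onionoo.torproject.org/details",
    params={"country": "tw", "running": "true"},
    timeout=30,
)
doc = resp.json()
# The answer is raw data: one record per relay, each with dozens of fields.
print(len(doc["relays"]), "relay records returned")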

MCP: an interface layer for AI tools

MCP (Model Context Protocol) is an open protocol Anthropic introduced in 2024. It defines a standard format for how AI models invoke external tools:

  • For an AI client (Claude Desktop, Cursor, Claude Code, and others), MCP turns external services into "a list of tools plus a call format" that the model can read, decide when to use, and pick from on its own.
  • For a service provider, wrapping an existing API as an MCP server means every MCP-capable AI client can connect to it directly — no need to redo the integration each time a new client appears.
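
Under the hood, an MCP tool call is a JSON-RPC message. A rough sketch of the shape (illustrative only; real clients first run an initialize handshake, and the tool names come from the server's tools/list response):

import json

# Roughly what an MCP client sends when the model decides to call a tool.
# Tool name and arguments here are illustrative, mirroring the get_details
# wrapper described later in this post.
tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_details",
        "arguments": {"params": {"country": "tw", "running": "true"}},
    },
}
print(json.dumps(tool_call, indent=2))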

What this changes for data exploration

Imagine you want to survey an unfamiliar dataset — say, "what does Taiwan's Tor relay footprint look like right now?" With only the raw API, the flow is roughly:

  1. Read Onionoo's documentation; find the right endpoints (/details, /uptime, and so on).
  2. Write a script that combines a few queries, merges the JSON, and computes the statistics (roughly the sketch after this list).
  3. Format the result into a readable table or chart.
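
Step 2 alone might look something like this sketch (querying upstream Onionoo directly; the field names follow Onionoo's details documents):

import requests
from collections import Counter

# One /details query, then merge and count locally.
resp = requests.get(
    "https://onionoo.torproject.org/details",
    params={
        "country": "tw",
        "running": "true",
        "fields": "nickname,as,as_name,advertised_bandwidth,consensus_weight",
    },
    timeout=30,
)
relays = resp.json()["relays"]

total_bw = sum(r.get("advertised_bandwidth", 0) for r in relays)        # bytes/s
top_asns = Counter(r.get("as_name", "unknown") for r in relays).most_common(5)

print(f"running relays: {len(relays)}")
print(f"total advertised bandwidth: {total_bw / 1e6:.1f} MB/s")
print("top ASNs:", top_asns)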

With MCP wired in, it becomes:

  1. Ask the AI tool directly: "What does Taiwan's Tor relay footprint look like — running count, total bandwidth, top five ASNs?"
  2. The AI picks the tools, composes the queries, and assembles a readable report (and, with some luck, sprinkles in context — for instance noting that TANet is Taiwan's academic network).

This is especially useful for early-stage research and data exploration: you do not need to learn the dataset's schema before you can start asking questions. The AI does the first pass of querying and summarizing for you, and you decide where to dig deeper after reading its report. When some queries need to be repeated or wired into a formal analysis, the API is still there — the two paths coexist.

Why this service

Onionoo's specification is solid, but it ships without an OpenAPI description, and its field names are short (optimized for transfer size). That works for a human writing a client by hand. It is less friendly to AI agents and third-party tooling:

  • No OpenAPI means tools like Swagger UI, Postman, or code generators can't introspect it.
  • The short field names confuse language models — is r a relay or just running?
  • A single user question often needs several endpoints stitched together (/details + /uptime + /bandwidth). Agents that re-derive that orchestration from scratch each time make mistakes.

onionoo-fastapi fixes all three: it ships an OpenAPI spec, exposes readable field names, and bundles common multi-endpoint tasks into single MCP tools (for example, "give me the health of this relay" is one call).
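
To get a feel for what that bundling saves, here is roughly what "the health of this relay" costs when stitched together by hand against the raw endpoints (a sketch; the MCP tool's actual output shape is the server's own):

import requests

BASE = "https://onionoo.torproject.org"
FP = "9695DFC35FFEB861329B9F1AB04C46397020CE31"   # moria1, also used in the examples below

# Three upstream documents that the proxy's relay-health tool rolls into one call.
details   = requests.get(f"{BASE}/details",   params={"lookup": FP}, timeout=30).json()
uptime    = requests.get(f"{BASE}/uptime",    params={"lookup": FP}, timeout=30).json()
bandwidth = requests.get(f"{BASE}/bandwidth", params={"lookup": FP}, timeout=30).json()

snapshot = {
    "details":   details["relays"][0] if details["relays"] else None,
    "uptime":    uptime["relays"][0] if uptime["relays"] else None,
    "bandwidth": bandwidth["relays"][0] if bandwidth["relays"] else None,
}
print(list(snapshot))   # ["details", "uptime", "bandwidth"] merged into one snapshot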

How to use it

In Claude Desktop, Cursor, or any MCP-capable client, add this to the mcpServers block of your config:

{
  "mcpServers": {
    "onionoo": {
      "type": "http",
      "url": "https://onionoo.anoni.net/mcp"
    }
  }
}

Save, restart the client, and an onionoo tool group appears in the tool list. From there you can ask the agent things like:

  • "Find the Tor relay named moria1 and report its status and country."
  • "List the top 10 Taiwanese (TW) relays by consensus weight."
  • "Compare the two fingerprints 9695DFC35FFEB861329B9F1AB04C46397020CE31 and 847B1F850344D7876491A54892F904934E4EB85D — versions and flags."
  • "Give me Taiwan's current Tor footprint: running relay count, total bandwidth, flag distribution."

[Screenshot: Claude Desktop's Connectors panel showing onionoo as a custom connector pointed at https://onionoo.anoni.net/mcp, with nine tools listed: aggregate_as, aggregate_countries, aggregate_flags, get_bandwidth, get_clients, get_details, get_summary, get_uptime, get_weights.]
After configuration, Claude Desktop's Connectors panel shows onionoo with nine tools — the six low-level endpoints plus three aggregates — all exposed through the Streamable HTTP transport. Each tool's approval requirement can be tuned per use case.

To run it locally without depending on the hosted instance, use the stdio transport:

{
  "mcpServers": {
    "onionoo": {
      "command": "uvx",
      "args": ["--from", "git+https://github.com/anoni-net/onionoo-fastapi", "onionoo-mcp"]
    }
  }
}

You will need uv installed (brew install uv on macOS; see the official docs for Linux).

On the HTTP side, every /v1/* endpoint returns semantic JSON with a _meta envelope indicating cache state:

# Details for moria1 — only return nickname and fingerprint
curl -s 'https://onionoo.anoni.net/v1/details?search=moria&fields=nickname,fingerprint' | jq .

# Taiwan's relays, sorted by consensus weight
curl -s 'https://onionoo.anoni.net/v1/details?country=tw&running=true&order=-consensus_weight&limit=5' | jq .

# Per-country aggregation of currently running relays
curl -s 'https://onionoo.anoni.net/v1/aggregate/countries?running=true' | jq .

For the full list of endpoints, parameters, and response fields, see the Swagger UI. Query parameters mirror Onionoo's protocol specification.
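
The same queries work from Python if you prefer; the exact keys inside _meta are the service's own, so this sketch just prints whatever comes back:

import requests

resp = requests.get(
    "https://onionoo.anoni.net/v1/details",
    params={"search": "moria", "fields": "nickname,fingerprint"},
    timeout=30,
)
body = resp.json()
print(list(body))         # top-level keys, including the _meta envelope
print(body.get("_meta"))  # cache-state fields added by the proxy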

Self-host with Docker

If you want to run your own copy (for a .onion service, an internal network, or experimentation):

git clone https://github.com/anoni-net/onionoo-fastapi
cd onionoo-fastapi
docker compose up -d --build

It listens on port 8000 by default. OpenAPI docs are at http://localhost:8000/docs, the MCP endpoint at http://localhost:8000/mcp.
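
A quick smoke test against a fresh local instance (the health endpoints are described in the observability section below):

import requests

BASE = "http://localhost:8000"

print(requests.get(f"{BASE}/healthz", timeout=5).status_code)           # liveness, never hits upstream
print(requests.get(f"{BASE}/healthz/ready", timeout=30).status_code)    # 200 once upstream Onionoo is reachable
print(requests.get(f"{BASE}/v1/summary", params={"limit": 1}, timeout=30).json().get("_meta"))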

Common settings (via environment variables):

  • ONIONOO_BASE_URL: upstream Onionoo URL (default: https://onionoo.torproject.org)
  • CACHE_MAXSIZE / CACHE_DEFAULT_TTL_SECONDS: in-memory cache size and TTL (defaults: 1024 / 300 s)
  • RATE_LIMIT_ENABLED / RATE_LIMIT_PER_MINUTE: per-IP rate limiting (defaults: false / 120)
  • CORS_ALLOWED_ORIGINS: allowed CORS origins (default: empty, CORS off)
  • LOG_FORMAT: json or console (default: json)
  • METRICS_ENABLED: expose /metrics in Prometheus format (default: true)

For the full list, see the README.

MCP tools at a glance

  • find_relay(query): free-form lookup; auto-detects whether the query is a 40-character fingerprint, an AS number, an IP, or a nickname substring
  • get_relay_health(fingerprint): a composite health snapshot — details + uptime + bandwidth in one call
  • top_relays_by_bandwidth(country?, flag?, limit): top-N relays by consensus weight, optionally filtered by country or flag
  • compare_relays(fingerprints): fetches details for several fingerprints in parallel for side-by-side comparison
  • country_summary(country): running count, total bandwidth, and flag distribution for one country
  • aggregate_relays(group_by, running, top): server-side group-by over country / AS / flag
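
The auto-detection in find_relay presumably follows a heuristic along these lines (a sketch of the idea, not the server's actual code):

import re

def classify_query(query: str) -> str:
    """Guess which kind of relay identifier a free-form query is."""
    q = query.strip()
    if re.fullmatch(r"[0-9A-Fa-f]{40}", q):
        return "fingerprint"           # 40 hex characters -> relay fingerprint
    if re.fullmatch(r"AS\d+", q, re.IGNORECASE) or q.isdigit():
        return "as_number"             # "AS3462" or a bare AS number
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", q) or ":" in q:
        return "ip_address"            # rough IPv4/IPv6 check
    return "nickname_substring"        # fall back to a nickname search

print(classify_query("9695DFC35FFEB861329B9F1AB04C46397020CE31"))  # fingerprint
print(classify_query("AS3462"))                                     # as_number
print(classify_query("moria"))                                      # nickname_substring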

Low-level pass-through (both transports)

get_summary / get_details / get_bandwidth / get_weights / get_clients / get_uptime: thin wrappers over the corresponding Onionoo endpoints. Each takes a params dict and returns the semantically renamed JSON.

The Streamable HTTP endpoint at /mcp exposes the six low-level wrappers plus three aggregate tools (countries, as, flags). The task-oriented tools and the unified aggregate_relays live on the stdio transport. The two transports can run side by side.

Example: surveying Taiwan's Tor footprint with an agent

Once onionoo MCP is wired into Claude Desktop or Claude Code, you can ask:

"Give me a summary of Taiwan's current Tor relays: how many are running, total bandwidth, top 5 ASNs, and pick out the three relays with the highest consensus weight — tell me their nicknames and which AS they're in."

The agent breaks the question into a handful of MCP tool calls (over the HTTP transport, that's typically aggregate_countries plus get_details; over stdio, the task-oriented tools country_summary, aggregate_relays, and top_relays_by_bandwidth are available too) and assembles a single report. Queries like this previously meant manually composing Onionoo parameters and merging JSON. Now they are a single sentence.

[Screenshot: Claude Desktop response from Opus 4.7 summarizing Taiwan's current Tor relays: 13 running relays, total advertised bandwidth ~41.1 MB/s (~329 Mbit/s combined), based on the consensus published 2026-05-16 15:00 UTC, with an ASN breakdown table: AS3462 Chunghwa Telecom (HiNet) with 10 relays (77%); AS1659 TANet, AS9416 Hoshin Multimedia, and AS18041 Taiwan Digital Streaming with 1 relay each (8% each).]
The model's final summary — running count, total bandwidth, and ASN distribution in one place, anchored to a specific consensus snapshot. Numbers come from upstream Onionoo and are a point-in-time view; values shift as the network evolves.

Expanding the model's reasoning shows it asking the MCP server which tools are available, then planning which ones to combine:

[Screenshot: Claude Opus 4.7 thinking trace: it states that it needs Taiwan relay information, lists candidate tools, decides to call aggregate_countries to find Taiwan's row, then get_details with country=tw to pull AS information, and finally fetches the top three relays by consensus weight. The aggregate_countries tool call is shown expanded with a Result chip.]
The agent narrates its plan — what to query, which tool to pick, what to do with the response — with the actual aggregate_countries and get_details tool calls inlined. The full MCP interaction is visible, which makes debugging and prompt tuning much easier.

Observability and operations

  • /healthz: static liveness check, never hits upstream.
  • /healthz/ready: pings Onionoo (cached) — 200 if reachable, 503 otherwise.
  • /metrics: Prometheus format. Includes cache hit/miss counters (onionoo_cache_hits_total and _misses_total), upstream latency (onionoo_upstream_seconds), and error rates.
  • Every request gets an X-Request-ID echoed in the response header and bound into log records — handy for correlating issues.
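
To spot-check those counters without a full Prometheus setup, the text exposition format can be read directly (metric names as listed above; exact suffixes such as _bucket or _sum depend on the metric type):

import requests

metrics = requests.get("https://onionoo.anoni.net/metrics", timeout=30).text

# Print only the cache and upstream-latency series named above.
for line in metrics.splitlines():
    if line.startswith(("onionoo_cache_hits_total",
                        "onionoo_cache_misses_total",
                        "onionoo_upstream_seconds")):
        print(line)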

Get involved

Released as v1.0.0 under the MIT license. Issues, PRs, and Matrix discussion on which task-oriented tools to add next are all welcome.