How we’re getting the Sovereign AI MCP endpoint listed in five registries with real traffic tracking and zero KYC friction.

MCP Registry Distribution: Submission Plan & Tracking

Coming from outside the stack? The Self-Hosted AI: Start Here hub article maps where strategy decisions like this one land in the actual deploy: the hardware tree, the inference engine, and what hurts most. It serves as the operational anchor for the framing here.

Last week, I realized that the Sovereign AI MCP endpoint had zero external traffic despite being live for months. The only calls we saw were my own curl tests from the owner account. That’s when I decided to push it into public registries with attribution tags to measure real adoption.

Quick Take

  • Zero external traffic despite a working MCP endpoint
  • Need five registries, each with a unique attribution tag
  • Track referrals with ?ref= tags and measure after 30 days
  • No KYC, no friction, just GitHub or email sign-in

1. Why we’re pushing the endpoint into registries

The endpoint works today, but it’s invisible to anyone outside my own tests. For example, the search_blog tool returns results in under 150 ms on a DGX Spark with SGLang, but only when I call it directly. Without registries, there’s no discovery mechanism for others.

We’re targeting five registries because that’s the minimum viable set to gather meaningful data. Each registry must accept a ?ref=<tag> suffix so we can track which registry drives traffic. The goal is 5 real external tool_calls per registry after 30 days. If a registry fails to deliver, we’ll drop it and pivot to blog posts or Nostr.
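Building the tagged URLs is mechanical, so it is worth scripting once rather than hand-editing five submissions. A minimal sketch, using the base URL and the smithery/mcp-io tags from this article; the other tags shown are illustrative placeholders, not the final registry slugs:

```python
from urllib.parse import urlencode, urlparse, urlunparse

BASE = "https://mcp.sovgrid.org/self-hosted-ai"

# "smithery" and "mcp-io" appear in this article; the rest are
# placeholder tags for the remaining registries.
TAGS = ["smithery", "glama", "awesome-mcp", "mcp-io", "cline"]

def tagged_url(base: str, tag: str) -> str:
    """Append ?ref=<tag> to the endpoint URL."""
    parts = urlparse(base)
    return urlunparse(parts._replace(query=urlencode({"ref": tag})))

for tag in TAGS:
    print(tagged_url(BASE, tag))
```

One tag per registry, generated from one list, means a tag can never be mistyped in a submission form without the mismatch being obvious later in the logs.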

2. The registries we’re targeting and why

Smithery.ai and Glama.ai are live today with verified listings. For example, Smithery.ai approved the submission in 24 hours and shows a 100/100 quality score. Glama.ai required fixing the container build command after their auto-detect set uv run mcp instead of python -m src, but it’s now live with a build time of 22.5 seconds.

The remaining registries are either PR-based (awesome-mcp-servers, modelcontextprotocol.io) or optional (Cline Marketplace, Continue Hub). The PR for awesome-mcp-servers was blocked by a bot, but the maintainer merged it after I added a Glama score badge to the PR description. That’s proof that even community lists respond to visible quality signals.

3. How we track attribution and traffic

We use a simple query parameter: ?ref=<registry_tag> appended to the endpoint URL. For example, the Smithery.ai listing uses https://mcp.sovgrid.org/self-hosted-ai?ref=smithery. The aggregator script nsm-aggregate.py parses this parameter out of the access logs and writes the per-registry counts to mcp.referrers in nsm-stats.json.
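The attribution-parsing step can be sketched in a few lines. This is not the real nsm-aggregate.py (which isn't reproduced here); it assumes Caddy's default JSON access-log format, where each line carries request.uri, and buckets untagged requests as "direct":

```python
import json
from collections import Counter
from urllib.parse import urlparse, parse_qs

def count_referrers(log_lines):
    """Tally ?ref= tags from Caddy JSON access-log lines.

    Requests without a ref tag fall into the "direct" bucket.
    """
    referrers = Counter()
    for line in log_lines:
        try:
            entry = json.loads(line)
            uri = entry["request"]["uri"]
        except (json.JSONDecodeError, KeyError, TypeError):
            continue  # skip malformed lines rather than crash the run
        ref = parse_qs(urlparse(uri).query).get("ref", ["direct"])[0]
        referrers[ref] += 1
    return dict(referrers)

# Hypothetical log lines for illustration:
logs = [
    '{"request": {"uri": "/self-hosted-ai?ref=smithery"}}',
    '{"request": {"uri": "/self-hosted-ai"}}',
]
print(count_referrers(logs))  # {'smithery': 1, 'direct': 1}
```

The Counter output maps directly onto the mcp.referrers field, so the aggregator only needs to merge it into nsm-stats.json on each run.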

After one week, the referrers field already splits traffic into per-registry buckets plus a “direct” bucket. The “direct” bucket means the user typed the URL manually or the registry stripped the tag. We expect it to shrink as registries adopt the tag.

4. The next concrete step

The plan is to submit to modelcontextprotocol.io next week. That registry is harder to crack because it’s an official Anthropic repo, but it offers the highest visibility. The submission is a PR to the community examples repository with a one-line entry referencing the endpoint URL tagged with ?ref=mcp-io.

If modelcontextprotocol.io fails to drive traffic, we’ll reassess after 30 days. The threshold is 5 calls per registry. Anything below that means the registry isn’t the right channel, and we’ll shift to blog posts or Nostr instead.
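The 30-day go/no-go rule is simple enough to encode directly. A sketch, assuming the nsm-stats.json shape described above (counts nested under mcp.referrers); "direct" is not a registry and is excluded from the verdict:

```python
import json

THRESHOLD = 5  # external tool_calls per registry after 30 days

def registries_to_drop(stats_json: str, threshold: int = THRESHOLD):
    """Return registries below the adoption threshold, sorted by name."""
    referrers = json.loads(stats_json)["mcp"]["referrers"]
    return sorted(
        tag for tag, calls in referrers.items()
        if tag != "direct" and calls < threshold
    )

# Hypothetical counts for illustration:
stats = '{"mcp": {"referrers": {"smithery": 9, "glama": 3, "direct": 12}}}'
print(registries_to_drop(stats))  # ['glama']
```

Anything the function returns gets dropped from the distribution plan, and the effort moves to blog posts or Nostr.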

What I Actually Use

  • DGX Spark with SGLang: Handles 60 requests/minute without breaking a sweat
  • FastMCP: Runs the endpoint in stdio mode so Glama’s container build works
  • GitHub OAuth for registries: No KYC, no friction, just a GitHub account or an email address

5. What actually moved NSM, with the data

After three weeks live across all three target registries, the attribution data is in. The headline numbers (zero zaps, low double-digit unique IPs per day, 47 articles live) hide the more interesting per-registry breakdown.

Smithery accounts for the largest share of MCP traffic by gateway-IP count, but the unique-IP count once gateway-collapse is accounted for is smaller than direct claude-code traffic. Most Smithery installs are agents-as-a-service that proxy through Smithery’s infrastructure; the per-user signal is collapsed. Smithery’s value is therefore distribution-of-discovery (more agents see the listing) rather than direct-usage (the agents pinging from Smithery may all be one or two large operators).
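Gateway collapse can be made explicit in the counting itself: fold every IP inside a known proxy range into a single bucket so one fleet counts once. A sketch; the CIDR ranges below are documentation placeholders (RFC 5737 test blocks), not Smithery's or Glama's real ranges, which would come from the logs:

```python
import ipaddress

# Placeholder ranges; substitute the gateway CIDRs observed in the logs.
GATEWAYS = {
    "smithery-gw": ipaddress.ip_network("203.0.113.0/24"),
    "glama-gw": ipaddress.ip_network("198.51.100.0/24"),
}

def collapse_ips(ips):
    """Count unique callers, folding each known gateway range
    into a single bucket so one proxy fleet counts once."""
    buckets = set()
    for ip in ips:
        addr = ipaddress.ip_address(ip)
        for name, net in GATEWAYS.items():
            if addr in net:
                buckets.add(name)
                break
        else:
            buckets.add(ip)  # direct caller: count per IP
    return len(buckets)

# Three gateway IPs collapse to one bucket; two direct IPs stay distinct:
print(collapse_ips(["203.0.113.5", "203.0.113.9", "203.0.113.77",
                    "192.0.2.10", "192.0.2.11"]))  # 3
```

Run before and after collapsing, the gap between the two counts is exactly the "distribution-of-discovery versus direct-usage" distinction in numeric form.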

Glama traffic is the second tier and is gateway-mixed for similar reasons. The Connector-path versus Server-path question turned out to matter less than expected once both were live; agents pick whichever their client supports without much preference. Because the Connector path shipped with the wrong attribution query-param (the BLOG-001 issue), Glama-attributed traffic is currently undercounted; once the Glama dashboard fix lands, the Glama share will look larger than it does today.

awesome-mcp-servers traffic is the smallest of the three, but the per-IP signal is highest because there is no gateway. Every IP in the awesome-mcp logs is a real human or a real agent that found the listing organically. The conversion rate per click-through is far higher than the Smithery or Glama rates, suggesting the audience the GitHub list reaches is more deliberate about which servers they actually try.

6. What the next round of registry submissions should target

If we were doing the registry submission round again from scratch, the priority order would change. Smithery and the awesome-mcp PR would still be first (mature, low-friction, real reach). Glama would still be in the lineup but submitted with the correct attribution query-param from the start. MCP-Get is now archived and would not be on the list.

The new candidate to evaluate: Anthropic’s own MCP directory if that ever ships in a public-listing form. Today it is internal, but the announcement signals a future surface. Worth tracking because Anthropic-listed servers will inherit some of the trust signal that Anthropic-as-a-brand carries, the way Apple-curated apps inherit Apple’s trust signal.

7. The honest framing on registry-driven NSM

Registry listings are necessary, not sufficient. They put the server in front of agents who would otherwise never find it; they do not by themselves cause those agents to call it. Conversion from “agent saw the listing” to “agent calls a tool” requires the listing to look credible (Smithery 100/100 score helped here), the README to make a useful tool obvious, and the actual tool to do something the agent could not get from a plain web fetch. The first two are content; the third is product. Both have to be right at the same time. Registry submissions were the easy part; the harder part is making sure the agents who arrive via the listings find a reason to come back.

8. The lessons that travel beyond MCP

The pattern the MCP-registry submission round demonstrated (attribution via query param, per-registry tracking, and the discipline of submitting to all relevant directories at once) is not MCP-specific. It applies to any new content surface with multiple discovery directories: package registries (npm, PyPI, Docker Hub for code), creative directories (awesome-* lists for any topic), and curated marketplaces. Submitting to N directories is roughly N times the work of submitting to one, but the attribution discipline makes that N-times cost legible afterward instead of leaving it as folklore. Shipping the discipline once on MCP made it cheaper to apply to the next surface, whatever that turns out to be; the marginal cost was the article you are reading now plus the Caddy-log aggregator that already existed for blog traffic.

The honest one-line takeaway from this whole submission round: directory listings without a credible product behind them are wasted distribution effort, but a credible product without directory listings is invisible to most agents who could use it. The two parts have to ship together; either alone produces no signal. Both together is the minimum viable visibility, and even that does not guarantee adoption. Adoption requires the product to actually solve a problem agents would otherwise solve worse, which is a content-and-engineering question, not a registry-and-distribution question.