# MCP Server
The GeoDaddy MCP server exposes the CLI as a tool that AI assistants can call directly. Once configured, you can ask Claude or Cursor to analyze any URL and get a full GEO report without leaving your conversation.
## What is MCP?
Model Context Protocol (MCP) is an open standard that lets AI assistants call external tools. The GeoDaddy MCP server wraps the CLI binary and registers a single tool — analyze_url — that accepts the same parameters as the CLI.
No API keys. No cloud. The analysis still runs entirely locally.
## Installation
The MCP server is published on npm as geodaddy-mcp. On first install it automatically downloads the correct CLI binary for your platform. Node.js 18 or later is required.
### Claude Desktop
Add the following to your Claude Desktop config file:

- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
```json
{
  "mcpServers": {
    "geodaddy": {
      "command": "npx",
      "args": ["-y", "geodaddy-mcp"]
    }
  }
}
```

Restart Claude Desktop. You should see a hammer icon in the chat input — that confirms the server is active.
### Claude Code
Run this command once to register the server:

```shell
claude mcp add geodaddy -- npx -y geodaddy-mcp
```

Claude Code will pick it up automatically in future sessions.
### Cursor
- Open Settings → MCP Servers
- Click Add
- Set the command to `npx -y geodaddy-mcp`
- Save and restart Cursor
### From source
If you prefer to build the binary yourself:

```shell
# 1. Build the CLI binary
git clone https://github.com/borabiricik/geodaddy-cli.git
cd geodaddy-cli
cargo build --release

# 2. Install and build the MCP server
cd mcp
npm install
npm run build
```

The server falls back to `target/release/geodaddy` automatically, so no extra configuration is needed.
## Using the Tool
Once configured, just describe what you want in natural language:

- "Analyze https://example.com for GEO issues"
- "Crawl my site https://mysite.com up to 30 pages and show me everything that's failing"
- "Check if any AI bots are blocked on https://example.com"
- "Run a full audit of https://example.com including Core Web Vitals"
The AI will call `analyze_url` with the appropriate parameters and return the results inline.
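Under the hood, such a request becomes a single MCP tool call. A sketch of what the assistant might send — the argument values here are illustrative, the parameter names are from the reference below:

```json
{
  "name": "analyze_url",
  "arguments": {
    "url": "https://mysite.com",
    "max_pages": 30,
    "beauty": true
  }
}
```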
## Tool Reference
### `analyze_url`
Runs the GeoDaddy CLI against a URL and returns either a JSON report or a formatted text report.
| Parameter | Type | Required | Description |
|---|---|---|---|
| `url` | string | yes | The URL to analyze. Supports `http://localhost` and `http://127.0.0.1` for local dev. |
| `max_pages` | number | no | Enable site-wide crawling and stop after N pages. Without this, only the given URL is analyzed. |
| `enable_js` | boolean | no | Enable JavaScript rendering. Downloads Chromium (~150 MB) on first use. |
| `vitals` | boolean | no | Measure Core Web Vitals (LCP, FCP, CLS, TTFB, TBT). Requires the headless browser. |
| `fail_under` | number (0–100) | no | Return an error if the overall score is below this threshold. |
| `beauty` | boolean | no | Return a human-readable text report instead of JSON. Defaults to JSON. |
#### Return value
By default the tool returns the raw JSON report. Set `beauty: true` to get formatted text output — useful when you want the AI to interpret results inline rather than parse JSON.
```json
{
  "schema_version": "1",
  "url": "https://example.com",
  "crawled_at": "2026-03-23T15:42:30.123Z",
  "score": 82.5,
  "categories": {
    "technical": 85.0,
    "content": 80.0,
    "geo": 75.0,
    "performance": null
  },
  "pages": [
    {
      "url": "https://example.com/",
      "robots_blocked": false,
      "score": 82.5,
      "categories": { "technical": 85.0, "content": 80.0, "geo": 75.0, "performance": null },
      "results": [
        {
          "check": "geo-ai-bot-gptbot",
          "status": "fail",
          "message": "GPTBot is blocked in robots.txt",
          "recommendation": "Remove the Disallow rule for GPTBot to allow ChatGPT to index your content"
        }
      ]
    }
  ]
}
```

See the Checks Reference for the full list of checks and what each result means.
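With the default JSON output, a small script can pull out just the failing checks. This is a sketch against the report shape shown above; only the fields present in that sample are assumed, and the `failingChecks` helper is illustrative:

```typescript
// Minimal types covering the fields used here, based on the sample report above.
interface CheckResult {
  check: string;
  status: string;
  message: string;
  recommendation?: string;
}
interface Page {
  url: string;
  results: CheckResult[];
}
interface Report {
  url: string;
  score: number;
  pages: Page[];
}

// Collect every result with status "fail" across all crawled pages.
function failingChecks(report: Report): CheckResult[] {
  return report.pages.flatMap((p) => p.results.filter((r) => r.status === "fail"));
}

const report: Report = {
  url: "https://example.com",
  score: 82.5,
  pages: [
    {
      url: "https://example.com/",
      results: [
        { check: "geo-ai-bot-gptbot", status: "fail", message: "GPTBot is blocked in robots.txt" },
        { check: "technical-https", status: "pass", message: "HTTPS is enabled" },
      ],
    },
  ],
};

console.log(failingChecks(report).map((r) => r.check)); // → [ 'geo-ai-bot-gptbot' ]
```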
## How It Works
The MCP server is a thin TypeScript wrapper around the CLI binary:

1. The AI assistant sends a tool call with the parameters above
2. The MCP server builds the equivalent CLI command (`geodaddy <url> [flags]`)
3. It spawns the binary as a subprocess and captures stdout/stderr
4. The output (JSON or beauty text) is returned to the AI assistant
Binary resolution order:

1. `mcp/bin/geodaddy` — downloaded by the postinstall script
2. `target/release/geodaddy` — built locally via `cargo build --release`
3. If neither is found, the server returns an error with installation instructions
Execution limits: 120-second timeout per call, 10 MB output buffer. Large site crawls (`max_pages` set high) may approach the timeout.
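The subprocess step can be sketched in a few lines of Node. The timeout and buffer values come from the limits above; the `--max-pages` flag appears in the CLI docs, but the `--beauty` flag name and the helper functions are illustrative assumptions, not the server's actual code:

```typescript
import { execFile } from "node:child_process";

// Parameters mirroring the analyze_url tool (subset, for illustration).
interface AnalyzeParams {
  url: string;
  max_pages?: number;
  beauty?: boolean;
}

// Map tool parameters onto CLI flags. `--beauty` is an assumed flag name.
function buildArgs(p: AnalyzeParams): string[] {
  const args = [p.url];
  if (p.max_pages !== undefined) args.push("--max-pages", String(p.max_pages));
  if (p.beauty) args.push("--beauty");
  return args;
}

// Spawn the binary and capture stdout, enforcing the execution limits.
function runAnalysis(binary: string, p: AnalyzeParams): Promise<string> {
  return new Promise((resolve, reject) => {
    execFile(
      binary,
      buildArgs(p),
      { timeout: 120_000, maxBuffer: 10 * 1024 * 1024 }, // 120 s, 10 MB
      (err, stdout) => (err ? reject(err) : resolve(stdout)),
    );
  });
}

console.log(buildArgs({ url: "https://example.com", max_pages: 30 }));
```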
## Troubleshooting
### Tool doesn't appear in Claude Desktop
Restart Claude Desktop after editing the config file. Check that the JSON is valid (no trailing commas).
### "geodaddy binary not found" error
The postinstall download may have failed. Try reinstalling:

```shell
npm install -g geodaddy-mcp
```

Or build from source and ensure `target/release/geodaddy` exists:

```shell
cargo build --release
```

### Chromium download prompt
`enable_js` and `vitals` both require a headless browser. On first use, Chromium (~150 MB) is downloaded automatically. This is a one-time download — subsequent calls reuse the cached binary.
### Analysis times out
The server allows up to 120 seconds per call. For large crawls, reduce `max_pages` or run the CLI directly for long-running jobs.
### Prebuilt binary not available for my platform
Prebuilt binaries are provided for macOS (Intel & Apple Silicon), Linux (x86_64 & arm64), and Windows (x86_64). For other platforms, build from source:
```shell
cargo build --release
```