# CLI Installation and Usage
The GeoDaddy CLI analyzes websites for AI discoverability. It runs completely locally with no accounts or API keys, and ships as a single binary with no runtime dependencies.
## Installation

### macOS / Linux
```sh
curl -fsSL https://raw.githubusercontent.com/borabiricik/geodaddy-cli/main/install.sh | sh
```

Works on macOS (Intel & Apple Silicon), Linux (x86_64 & arm64), and WSL.
### Windows (PowerShell)
```powershell
irm https://raw.githubusercontent.com/borabiricik/geodaddy-cli/main/install.ps1 | iex
```

### Windows (CMD)
```cmd
curl -fsSL https://raw.githubusercontent.com/borabiricik/geodaddy-cli/main/install.cmd -o install.cmd && install.cmd
```

### From source
Requires Rust 1.83 or later:
```sh
git clone https://github.com/borabiricik/geodaddy-cli.git
cd geodaddy-cli
cargo build --release
sudo cp target/release/geodaddy /usr/local/bin/
```

### Verify installation
```sh
geodaddy --version
```

## Quick Start
Analyze a URL and get a human-readable report:
```sh
geodaddy https://example.com --beauty
```

Get machine-readable JSON output:
```sh
geodaddy https://example.com
```

Crawl an entire site (up to 50 pages):
```sh
geodaddy https://example.com --max-pages 50 --beauty
```

## Usage
```sh
geodaddy <URL> [OPTIONS]
```

### Flags & Options
| Flag | Type | Default | Description |
|---|---|---|---|
| `<URL>` | required | — | The URL to analyze. Supports `http://localhost` and `http://127.0.0.1` for local dev. |
| `--max-pages <N>` | optional | — | Enable crawling and stop after N pages. Without this flag, only the given URL is analyzed. |
| `--enable-js` | boolean | `false` | Enable JavaScript rendering for JS-heavy pages. Downloads Chromium (~150 MB) on first use. |
| `--vitals` | boolean | `false` | Measure Core Web Vitals (LCP, FCP, CLS, TTFB, TBT) via a headless browser. Downloads Chromium on first use. |
| `--beauty` | boolean | `false` | Output a colored, human-readable report instead of JSON. |
| `--fail-under <SCORE>` | optional | — | Exit with code 1 if the overall score is below this threshold (0–100). Useful for CI gates. |
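The `--fail-under` gate maps the report's overall score to an exit code. A minimal sketch of that semantics (illustrative only, not geodaddy's source code):

```python
import json

def gate(report_json: str, threshold: float) -> int:
    """Model of the documented --fail-under behavior:
    exit code 0 when score >= threshold, 1 otherwise."""
    score = json.loads(report_json)["score"]
    return 0 if score >= threshold else 1

print(gate('{"score": 75.5}', 80))  # 1: below threshold, CI step fails
print(gate('{"score": 82.5}', 80))  # 0: at or above threshold
```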
## Crawling Modes
**Single URL mode** (default — no `--max-pages`): analyzes only the URL you provide.
```sh
geodaddy https://example.com
```

**Site-wide crawl mode** (`--max-pages` required): discovers and analyzes up to N pages using a sitemap-first strategy — fetches `/sitemap.xml` first, then falls back to breadth-first link discovery.
```sh
geodaddy https://example.com --max-pages 20
```

Progress is written to stderr so the JSON report on stdout stays clean.
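The sitemap-first strategy described above can be sketched in a few lines. This is an illustrative model of the documented behavior, not geodaddy's actual implementation; `fetch_sitemap` and `fetch_links` are hypothetical stand-ins for the network calls.

```python
from collections import deque

def discover(start_url, max_pages, fetch_sitemap, fetch_links):
    """Sitemap-first page discovery: use sitemap URLs when present,
    otherwise fall back to breadth-first link traversal."""
    urls = fetch_sitemap(start_url)   # try /sitemap.xml first
    if urls:
        return urls[:max_pages]
    seen, queue, order = {start_url}, deque([start_url]), []
    while queue and len(order) < max_pages:
        url = queue.popleft()
        order.append(url)
        for link in fetch_links(url):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return order

# Example with an in-memory "site" and no sitemap:
links = {"a": ["b", "c"], "b": ["d"], "c": [], "d": []}
print(discover("a", 3, lambda u: [], lambda u: links[u]))  # ['a', 'b', 'c']
```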
## Output Modes
**JSON (default)** — machine-readable, written to stdout:
```sh
geodaddy https://example.com > report.json
```

**Beauty mode** — colored, human-readable:
```sh
geodaddy https://example.com --beauty
```

Example beauty output:
```text
geodaddy — GEO Analysis Report
URL: https://example.com
Crawled: 2026-03-23T15:30:00Z
─────────────────────────────────────────────────
Overall Score: 75.5/100
Technical: 80.0  Content: 70.0  GEO: 65.0  Performance: N/A

━━━ Page 1/1: https://example.com/ ━━━
[PASS] tech-meta-title   Title is 55 chars (optimal range 50-60)
[PASS] tech-https        Page served over HTTPS with no mixed content
[WARN] geo-listicle      No listicle format detected on this page
       -> Consider restructuring content as a numbered list
[FAIL] tech-heading-h1   No H1 heading found
       -> Add exactly one <h1> tag per page with your primary keyword
```

## CI/CD Integration
### Fail on score threshold
```sh
geodaddy https://example.com --fail-under 80
echo $?  # 1 if score < 80, 0 if score >= 80
```

### GitHub Actions
```yaml
name: GEO Check
on: [push]
jobs:
  geo:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run GEO analysis
        run: |
          curl -L https://github.com/borabiricik/geodaddy-cli/releases/latest/download/geodaddy-linux-x86_64 -o geodaddy
          chmod +x geodaddy
          ./geodaddy ${{ env.SITE_URL }} --max-pages 20 --fail-under 70
```

### Parse JSON with jq
```sh
# Get overall score
geodaddy https://example.com | jq '.score'

# List all failing checks
geodaddy https://example.com | jq '[.pages[].results[] | select(.status == "fail")]'

# Check if all AI bots are allowed
geodaddy https://example.com | jq '[.pages[].results[] | select(.check | startswith("geo-ai-bot")) | select(.status == "fail")] | length'
```

## JSON Report Schema
```json
{
  "schema_version": "1",
  "url": "https://example.com",
  "crawled_at": "2026-03-23T15:42:30.123Z",
  "score": 82.5,
  "categories": {
    "technical": 85.0,
    "content": 80.0,
    "geo": 75.0,
    "performance": null
  },
  "pages": [
    {
      "url": "https://example.com/",
      "robots_blocked": false,
      "score": 82.5,
      "categories": { "technical": 85.0, "content": 80.0, "geo": 75.0, "performance": null },
      "results": [
        {
          "check": "tech-meta-title",
          "status": "pass",
          "message": "Title is 55 chars (optimal range 50-60)",
          "recommendation": ""
        }
      ]
    }
  ]
}
```

- `categories.performance` is `null` when `--vitals` is not used
- `status` values: `"pass"`, `"warn"`, `"fail"`
- `recommendation` is `""` for passing checks
- `robots_blocked: true` means the page was skipped per `robots.txt`
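For environments without jq, the same report can be consumed from Python. A minimal sketch against the schema above; the embedded `report` is an abbreviated stand-in, not real geodaddy output:

```python
import json

report = json.loads("""
{
  "score": 82.5,
  "categories": {"technical": 85.0, "content": 80.0, "geo": 75.0, "performance": null},
  "pages": [
    {"url": "https://example.com/", "robots_blocked": false,
     "results": [
       {"check": "tech-meta-title", "status": "pass",
        "message": "Title is 55 chars (optimal range 50-60)", "recommendation": ""},
       {"check": "tech-heading-h1", "status": "fail",
        "message": "No H1 heading found",
        "recommendation": "Add exactly one <h1> tag per page"}
     ]}
  ]
}
""")

# Collect failing checks across all pages (same shape as the jq query above).
failures = [r for page in report["pages"] for r in page["results"]
            if r["status"] == "fail"]
for r in failures:
    print(f'{r["check"]}: {r["message"]}')

# "performance" is null (None in Python) unless --vitals was used.
perf = report["categories"]["performance"]
print("Performance:", "N/A" if perf is None else perf)
```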