CLI Installation and Usage

The GeoDaddy CLI analyzes websites for AI discoverability. It runs completely locally with no accounts or API keys, and ships as a single binary with no runtime dependencies.

Installation

macOS / Linux

curl -fsSL https://raw.githubusercontent.com/borabiricik/geodaddy-cli/main/install.sh | sh

Works on macOS (Intel & Apple Silicon), Linux (x86_64 & arm64), and WSL.

Windows (PowerShell)

irm https://raw.githubusercontent.com/borabiricik/geodaddy-cli/main/install.ps1 | iex

Windows (CMD)

curl -fsSL https://raw.githubusercontent.com/borabiricik/geodaddy-cli/main/install.cmd -o install.cmd && install.cmd

From source

Requires Rust 1.83 or later:

git clone https://github.com/borabiricik/geodaddy-cli.git
cd geodaddy-cli
cargo build --release
sudo cp target/release/geodaddy /usr/local/bin/

Verify installation

geodaddy --version

Quick Start

Analyze a URL and get a human-readable report:

geodaddy https://example.com --beauty

Get machine-readable JSON output:

geodaddy https://example.com

Crawl an entire site (up to 50 pages):

geodaddy https://example.com --max-pages 50 --beauty

Usage

geodaddy <URL> [OPTIONS]

Flags & Options

| Flag | Type | Default | Description |
| --- | --- | --- | --- |
| <URL> | required | - | The URL to analyze. Supports http://localhost and http://127.0.0.1 for local dev. |
| --max-pages <N> | optional | - | Enable crawling and stop after N pages. Without this flag, only the given URL is analyzed. |
| --enable-js | boolean | false | Enable JavaScript rendering for JS-heavy pages. Downloads Chromium (~150 MB) on first use. |
| --vitals | boolean | false | Measure Core Web Vitals (LCP, FCP, CLS, TTFB, TBT) via headless browser. Downloads Chromium on first use. |
| --beauty | boolean | false | Output a colored, human-readable report instead of JSON. |
| --fail-under <SCORE> | optional | - | Exit with code 1 if the overall score is below this threshold (0–100). Useful for CI gates. |

Crawling Modes

Single URL mode (default — no --max-pages): analyzes only the URL you provide.

geodaddy https://example.com

Site-wide crawl mode (--max-pages required): discovers and analyzes up to N pages using a sitemap-first strategy — fetches /sitemap.xml first, falls back to breadth-first link discovery.

geodaddy https://example.com --max-pages 20

Progress is written to stderr so the JSON report on stdout stays clean.
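Because the two streams are kept separate, a wrapper can capture the report and the progress log independently (in shell: `geodaddy <URL> --max-pages 20 > report.json 2> crawl.log`). A minimal Python sketch of the same idea, using a stand-in process in place of geodaddy that mimics the CLI's stream layout:

```python
import subprocess
import sys

# Stand-in for `geodaddy <URL> --max-pages 20`: prints a JSON report on
# stdout and a progress line on stderr, mirroring the documented behavior.
stand_in = [
    sys.executable, "-c",
    "import sys; print('{\"score\": 82.5}'); print('crawling page 1/20', file=sys.stderr)",
]

proc = subprocess.run(stand_in, capture_output=True, text=True)
print("report:", proc.stdout.strip())    # clean JSON only
print("progress:", proc.stderr.strip())  # progress lines only
```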

Output Modes

JSON (default) — machine-readable, piped to stdout:

geodaddy https://example.com > report.json

Beauty mode — colored, human-readable:

geodaddy https://example.com --beauty

Example beauty output:

geodaddy — GEO Analysis Report
URL: https://example.com
Crawled: 2026-03-23T15:30:00Z
─────────────────────────────────────────────────
 
Overall Score: 75.5/100
Technical: 80.0  Content: 70.0  GEO: 65.0  Performance: N/A
 
━━━ Page 1/1: https://example.com/ ━━━
 
  [PASS]  tech-meta-title     Title is 55 chars (optimal range 50-60)
  [PASS]  tech-https          Page served over HTTPS with no mixed content
  [WARN]  geo-listicle        No listicle format detected on this page
      -> Consider restructuring content as a numbered list
  [FAIL]  tech-heading-h1     No H1 heading found
      -> Add exactly one <h1> tag per page with your primary keyword

CI/CD Integration

Fail on score threshold

geodaddy https://example.com --fail-under 80
echo $?  # 1 if score < 80, 0 if score >= 80
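When the analysis runs in an earlier pipeline step and its JSON is saved, the same threshold check can be re-applied later to the stored report. A minimal Python sketch (the file name and thresholds are illustrative; the top-level "score" field matches the CLI's JSON output):

```python
import json

def fail_under(report_path: str, threshold: float) -> int:
    """Mirror --fail-under semantics: 1 if the overall score is below
    the threshold, 0 otherwise."""
    with open(report_path) as f:
        report = json.load(f)
    return 1 if report["score"] < threshold else 0

# Demonstration with a throwaway report file:
with open("report.json", "w") as f:
    json.dump({"score": 75.5}, f)

print(fail_under("report.json", 80))  # -> 1 (75.5 is below 80)
print(fail_under("report.json", 70))  # -> 0 (75.5 meets 70)
```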

GitHub Actions

name: GEO Check
on: [push]
 
jobs:
  geo:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run GEO analysis
        run: |
          curl -L https://github.com/borabiricik/geodaddy-cli/releases/latest/download/geodaddy-linux-x86_64 -o geodaddy
          chmod +x geodaddy
          ./geodaddy ${{ env.SITE_URL }} --max-pages 20 --fail-under 70

Parse JSON with jq

# Get overall score
geodaddy https://example.com | jq '.score'
 
# List all failing checks
geodaddy https://example.com | jq '[.pages[].results[] | select(.status == "fail")]'
 
# Check if all AI bots are allowed
geodaddy https://example.com | jq '[.pages[].results[] | select(.check | startswith("geo-ai-bot")) | select(.status == "fail")] | length'

JSON Report Schema

{
  "schema_version": "1",
  "url": "https://example.com",
  "crawled_at": "2026-03-23T15:42:30.123Z",
  "score": 82.5,
  "categories": {
    "technical": 85.0,
    "content": 80.0,
    "geo": 75.0,
    "performance": null
  },
  "pages": [
    {
      "url": "https://example.com/",
      "robots_blocked": false,
      "score": 82.5,
      "categories": { "technical": 85.0, "content": 80.0, "geo": 75.0, "performance": null },
      "results": [
        {
          "check": "tech-meta-title",
          "status": "pass",
          "message": "Title is 55 chars (optimal range 50-60)",
          "recommendation": ""
        }
      ]
    }
  ]
}
  • categories.performance is null when --vitals is not used
  • status values: "pass", "warn", "fail"
  • recommendation is "" for passing checks
  • robots_blocked: true means the page was skipped per robots.txt
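For consumers that outgrow jq one-liners, the schema is straightforward to walk in a general-purpose language. A Python sketch using the schema example above as inline sample data (a real script would json.load a saved report instead):

```python
import json

# Sample mirroring the schema example above (trimmed to the relevant fields).
sample = json.loads("""
{
  "schema_version": "1",
  "url": "https://example.com",
  "score": 82.5,
  "categories": {"technical": 85.0, "content": 80.0, "geo": 75.0, "performance": null},
  "pages": [
    {
      "url": "https://example.com/",
      "robots_blocked": false,
      "score": 82.5,
      "results": [
        {"check": "tech-meta-title", "status": "pass",
         "message": "Title is 55 chars (optimal range 50-60)", "recommendation": ""}
      ]
    }
  ]
}
""")

# Collect every non-passing check across all crawled pages.
issues = [
    (page["url"], r["check"], r["status"])
    for page in sample["pages"]
    for r in page["results"]
    if r["status"] != "pass"
]
print(len(issues))                          # -> 0 for this sample
print(sample["categories"]["performance"])  # -> None (run without --vitals)
```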