A composite 0–100 score, three-layer Stack breakdown, six per-platform readiness scores, 93-factor coverage, Authority Rank, and citation event monitoring — generated in 60 seconds per URL. Every measurement labeled by confidence.
The AI Visibility Score is the at-a-glance composite — one number that reflects how citation-ready your content is across the AI search surface. The score aggregates all 93 factors through the three-layer AI Visibility Stack, with phase-aware confidence weighting.
Access and Understanding are healthy; Extractability has gaps. AI reads the content but extraction is sub-optimal.
Either L2 understanding or L3 extractability is failing — content is reachable but not optimally citable.
Start at Layer 1 — fix access before anything else. Content is effectively absent from AI citation behavior.
Tier classification isn't a linear interpretation of the score. Two pages can score 65 and land in different practical fix paths depending on which Stack layer is failing. A score of 65 with strong L1 / weak L3 starts at L3. A score of 65 with weak L1 / strong L3 must restart at L1 — the L3 work is wasted until Access is fixed.
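The fix-path logic above can be sketched as a bottom-up layer check. This is an illustrative sketch, not AIVZ's implementation: the layer names follow the Stack, but the failure threshold of 60 is an invented placeholder.

```python
# Sketch of the Stack-aware fix-path logic. The threshold value (60) is an
# illustrative assumption, not a published cutoff.
def starting_layer(l1_access: float, l2_understanding: float,
                   l3_extractability: float, threshold: float = 60.0) -> str:
    """Return the Stack layer where remediation should begin.

    Layers are checked bottom-up: a failing lower layer always takes
    priority, because work on higher layers is wasted until it is fixed.
    """
    if l1_access < threshold:
        return "L1 Access"
    if l2_understanding < threshold:
        return "L2 Understanding"
    if l3_extractability < threshold:
        return "L3 Extractability"
    return "healthy"

# Two pages with similar composite scores land in different fix paths:
print(starting_layer(90, 75, 30))  # strong L1, weak L3 -> "L3 Extractability"
print(starting_layer(30, 75, 90))  # weak L1 -> "L1 Access"
```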
The four tier names and color tokens are part of the published methodology. White-label resellers brand the interface and the deliverables — they don't override the methodology that gives the deliverables credibility. AI Authority means the same thing whether delivered by AIVZ directly or by an agency partner reselling AIVZ.
The full Stack methodology and tier dynamics

The composite score is the executive view. The three-layer Stack breakdown is the operational view — what's actually working, what's failing, where to start.
What it measures: Whether AI bots can physically reach your content. Crawl access, feed and discovery, bot blocking.
What it measures: Whether AI systems can parse the structure and meaning of crawled content. Schema markup, structure, entity signals, schema accuracy, author E-E-A-T.
What it measures: Whether AI can isolate clean, citation-ready answer blocks. FAQ, summary, content richness, content quality, speakable.
The composite score and Stack layer scores aggregate from individual factor measurements. AIVZ scores every URL across all 93 factors — no factor skipped, no measurement abstracted.
| Coverage | Free | Pro | Agency | Ent. |
|---|---|---|---|---|
| All 93 factors scored | ● | ● | ● | ● |
| Per-factor detail | Subset | Full | Full | Full |
| Confidence labels | ● | ● | ● | ● |
| Recommendations | Subset | Full | Full | Full |
| Score-impact estimates | — | ● | ● | ● |
The conceptual framework is published. The exact mathematical weights at the factor level, the per-factor scoring formulas, and the analyzer implementation logic stay confidential — that's the engineering moat.
A page can score 78 for Google AI Overviews and 45 for ChatGPT — because each platform values different structural signals. AIVZ scores readiness separately for the six major AI engines, surfacing exactly which platforms you're ready for and which need work.
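Mechanically, per-platform readiness amounts to re-weighting the same factor scores with a platform-specific weight vector. The sketch below illustrates the idea only — the factor names and weight values are invented for the example, and the actual per-platform weights are part of the confidential scoring model.

```python
# Illustrative only: same factor scores, different per-platform weights.
# Factor names and weights are invented; they are not AIVZ's real model.
FACTOR_SCORES = {"schema": 90, "faq_blocks": 20, "crawl_access": 95}

PLATFORM_WEIGHTS = {
    "google_ai_overviews": {"schema": 0.5, "faq_blocks": 0.2, "crawl_access": 0.3},
    "chatgpt":             {"schema": 0.2, "faq_blocks": 0.6, "crawl_access": 0.2},
}

def readiness(platform: str, scores: dict) -> float:
    """Weighted readiness score for one platform (weights sum to 1.0)."""
    weights = PLATFORM_WEIGHTS[platform]
    return round(sum(scores[f] * w for f, w in weights.items()), 1)

# A page strong on schema but weak on FAQ blocks diverges across platforms:
print(readiness("google_ai_overviews", FACTOR_SCORES))  # 77.5
print(readiness("chatgpt", FACTOR_SCORES))              # 49.0
```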
The 93-factor taxonomy covers on-page signals comprehensively. But AI citation behavior also depends on signals from outside your site — endorsements, mentions, reviews, professional credentials, podcast appearances, knowledge graph presence. AIVZ measures these through Authority Rank, a graph-based scoring engine adapted from PageRank.
Intrinsic credentials — experience, education, content output, business clarity.
Graph-based propagation — who endorses you, and how authoritative are they?
Audience behavior — retention, engagement, return rate, conversion.
Verification — credentials, testimonials, case studies, identity confirmation.
Backlinks, brand mentions, knowledge graph presence, reviews across 5 platforms, directory consistency, certifications, community size, app store ratings.
Podcast appearances, published books, GitHub presence, conference speaking, LinkedIn authority, course instruction, Amazon author page, coaching directories.
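The "graph-based propagation adapted from PageRank" component can be sketched as a standard iterative rank computation over an endorsement graph. This is a minimal textbook PageRank variant, assuming an unweighted endorsement graph — Authority Rank's actual graph model, edge weights, and signal mix are not published.

```python
# Minimal PageRank-style propagation over an endorsement graph.
# Graph shape, damping factor, and iteration count are illustrative.
def authority_rank(endorsements: dict, damping: float = 0.85,
                   iterations: int = 50) -> dict:
    """endorsements maps each entity to the list of entities it endorses."""
    nodes = set(endorsements) | {t for ts in endorsements.values() for t in ts}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, targets in endorsements.items():
            if targets:
                share = damping * rank[src] / len(targets)
                for t in targets:
                    new[t] += share
        rank = new
    return rank

# An endorsement from a well-endorsed entity is worth more than one from an
# obscure source — that's the "how authoritative are they?" question above.
graph = {"big_podcast": ["you"], "small_blog": ["you", "rival"],
         "fan": ["big_podcast"]}
ranks = authority_rank(graph)
```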
Authority Rank is available at Agency tier and above. The off-site signal aggregation requires more API quota, more complex graph-traversal infrastructure, and longer-running batch jobs than the on-page measurement — making it appropriate for higher-tier engagements where the off-site work is part of the scope.
The AI Visibility Score predicts citation likelihood. Citation Event Monitoring measures whether AI systems are actually citing your content over time. The two together close the measurement loop: predict, observe, refine.
An AI platform begins citing your content for a query it didn't previously cite.
An AI platform stops citing your content for a query it previously cited.
What fraction of citation slots in a topic area your content holds vs. competitors.
How citation behavior differs across the 6 AI engines for the same content.
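Citation-gained and citation-lost events reduce to a set difference between two observation snapshots. The snapshot shape below — a set of (platform, query) pairs observed as citing the content — is an assumption for illustration.

```python
# Sketch of citation-gained / citation-lost detection by diffing snapshots.
# Snapshot shape (a set of (platform, query) pairs) is an assumption.
def citation_events(previous: set, current: set) -> dict:
    return {
        "gained": sorted(current - previous),
        "lost": sorted(previous - current),
    }

before = {("chatgpt", "best crm for smb"), ("perplexity", "crm pricing")}
after_ = {("chatgpt", "best crm for smb"), ("chatgpt", "crm comparison")}
events = citation_events(before, after_)
# One gained event on ChatGPT, one lost event on Perplexity.
```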
AIVZ does not claim to predict whether any specific user query will result in your content being cited. AI citation behavior is a function of query specificity, competing content, platform-specific scoring, and stochastic LLM behavior. AIVZ measures structural readiness (the AI Visibility Score) and observed citation events — but explicit citation prediction at the per-query level is not a product claim.
A snapshot score is useful. A time series is operational. AIVZ retains scan history per domain, surfaces score change over time, and alerts on regressions — score drops driven by site changes, AI platform behavior shifts, or content edits that broke previously-working factors.
Alerts deliver via Slack, Microsoft Teams, email, or webhook — depending on integration configuration.
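Regression detection on a score time series can be sketched as comparing the latest scan against a trailing baseline. The window size and drop threshold below are invented placeholders, not AIVZ's actual alerting parameters.

```python
# Sketch of score-regression detection over scan history. The baseline
# window (5 scans) and drop threshold (5 points) are illustrative.
def detect_regression(history: list, window: int = 5,
                      drop_threshold: float = 5.0):
    """history: chronological composite scores for one domain.

    Returns (baseline, latest) when the latest score falls more than
    drop_threshold points below the trailing-window average, else None.
    """
    if len(history) < window + 1:
        return None
    baseline = sum(history[-window - 1:-1]) / window
    latest = history[-1]
    if baseline - latest > drop_threshold:
        return (round(baseline, 1), latest)
    return None

# A content edit that broke previously-working factors trips the alert:
print(detect_regression([72, 73, 71, 74, 73, 58]))  # -> (72.6, 58)
print(detect_regression([72, 73, 71, 74, 73, 72]))  # -> None (normal noise)
```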
| Tier | Score history retention |
|---|---|
| Free | 7 days |
| Pro | 90 days |
| Agency | 1 year |
| Enterprise | Unlimited |
AIVZ's scanner engine runs every factor through a measurement pipeline that follows eight design principles. Practitioners and engineering reviewers should understand not just what AIVZ measures but how the measurements are produced.
Regex pattern matching, DOM parsing, schema validation, and Flesch readability calculations instead of LLM judgment. Deterministic scoring is reproducible; LLM scoring isn't.
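The reproducibility claim is easy to see with the Flesch measurement named above: the formula is published and fully deterministic. The sketch below uses the standard Flesch reading-ease formula with a rough vowel-group syllable heuristic — the heuristic is an assumption, not AIVZ's syllable counter.

```python
import re

# Standard Flesch reading-ease formula. The syllable counter is a crude
# vowel-group heuristic for illustration, not a production implementation.
def count_syllables(word: str) -> int:
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return round(206.835
                 - 1.015 * (len(words) / sentences)
                 - 84.6 * (syllables / len(words)), 1)
```

The same input always yields the same score — that is the reproducibility property an LLM judgment call cannot offer.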
Every factor's measurement must be inexpensive, fast (sub-second per URL for on-page factors), and human-auditable.
Recommendations are stack-ordered. Layer 1 issues surface before Layer 2 issues; Layer 2 before Layer 3.
Single-page signals are scored before site-wide aggregations. Page-level scores are always available even when site-level analysis is still running.
Factor scoring uses only publicly available, stable APIs. No factors depend on private or rumored AI platform endpoints.
On-page factors are scored by the scanner engine; off-site authority signals are scored by Authority Rank. Decoupled by design.
Every factor carries one of the four confidence labels. No exceptions.
New factors are tested against real citation data before being added to the taxonomy or surfaced as recommendations.
Within the composite: Deterministic factors (Phases 1–2) carry weight 0.82. LLM-augmented and experimental factors (Phases 4–5) carry weight 0.18. Auto-normalization applies when LLM/experimental scores are absent. Deterministic measurements drive the composite; LLM and experimental measurements supplement without dominating.
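The published 0.82 / 0.18 split and the auto-normalization rule can be sketched directly. Equal weighting of factors inside each bucket is an assumption here — the per-factor weights are part of the confidential model.

```python
# Phase-aware composite using the published 0.82 / 0.18 bucket split.
# Equal weighting within each bucket is an assumption for illustration.
def composite(deterministic: list, llm_experimental: list) -> float:
    det = sum(deterministic) / len(deterministic)
    if not llm_experimental:
        # Auto-normalization: with no LLM/experimental scores present,
        # the 0.82 deterministic weight renormalizes to 1.0.
        return round(det, 1)
    aug = sum(llm_experimental) / len(llm_experimental)
    return round(0.82 * det + 0.18 * aug, 1)

print(composite([80, 70], [40]))  # 0.82 * 75 + 0.18 * 40 = 68.7
print(composite([80, 70], []))    # deterministic only -> 75.0
```

Note how a weak LLM-augmented score (40) moves the composite only a few points off the deterministic average — supplementing without dominating, as stated above.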
| Capability | Free | Pro | Agency | Enterprise |
|---|---|---|---|---|
| AI Visibility Score (composite + 4 tiers) | ● | ● | ● | ● |
| Three-layer Stack breakdown | ● | ● | ● | ● |
| 93-factor scoring (all factors run) | ● | ● | ● | ● |
| Per-factor detail surfaced | High-impact subset | Full | Full | Full |
| Confidence labels per factor | ● | ● | ● | ● |
| Per-platform readiness (6 platforms) | ● | ● | ● | ● |
| Authority Rank | — | — | ● | ● |
| Citation event monitoring | — | ● | ● | ● |
| Score history retention | 7 days | 90 days | 1 year | Unlimited |
| Regression detection + alerts | — | All channels | All channels | All channels |
| Score-impact estimates per fix | — | ● | ● | ● |
Composite score, tier classification, three-layer breakdown, top fix recommendations — all in under a minute. No signup. No credit card.