Capability 1 — Measurement

The AI Visibility Score.

A composite 0–100 score, three-layer Stack breakdown, six per-platform readiness scores, 93-factor coverage, Authority Rank, and citation event monitoring — generated in 60 seconds per URL. Every measurement labeled by confidence.

0–100 score · 4 tiers · 3 Stack layers · 6 AI platforms · 93 factors · 4 confidence labels
Run a Free Scan · See the Stack methodology
What the number means

A single 0–100 score. Four tiers. Layer-aware.

The AI Visibility Score is the at-a-glance composite — one number that reflects how citation-ready your content is across the AI search surface. The score aggregates from all 93 factors weighted through the three-layer AI Visibility Stack, with phase-aware confidence weighting.

AI Authority
90 — 100

Healthy across all three layers

Content is reachable, parseable, and extractable. AI systems can crawl, understand, and cite cleanly.

AI Extractable
70 — 89

Strong foundation, needs L3 refinement

Access and Understanding are healthy; Extractability has gaps. AI reads the content but extraction is sub-optimal.

AI Readable
40 — 69

AI can read but won't cite

Either L2 understanding or L3 extractability is failing — content is reachable but not optimally citable.

0 — 39

Failing early Stack layers

Start at Layer 1 — fix access before anything else. Content is effectively absent from AI citation behavior.

Tier classification isn't a linear interpretation of the score. Two pages can score 65 and land in different practical fix paths depending on which Stack layer is failing. A score of 65 with strong L1 / weak L3 starts at L3. A score of 65 with weak L1 / strong L3 must restart at L1 — the L3 work is wasted until Access is fixed.
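The fix-path logic described above can be sketched as a walk down the Stack in dependency order — a minimal illustration only, since AIVZ's actual thresholds and routing logic are not published (the 70-point failure threshold below is a hypothetical):

```python
# Illustrative sketch: pick the starting fix layer by walking the Stack in
# dependency order. The 70-point "failing" threshold is an assumption, not
# AIVZ's published logic.

LAYER_ORDER = ["Access", "Understanding", "Extractability"]  # L1 -> L2 -> L3

def fix_path_start(layer_scores: dict[str, float], threshold: float = 70.0) -> str:
    """Return the first failing layer in dependency order."""
    for layer in LAYER_ORDER:
        if layer_scores[layer] < threshold:
            return layer
    return "Extractability"  # healthy stack: remaining gains come from L3 polish

# Two pages with the same composite score land on different fix paths:
page_a = {"Access": 92, "Understanding": 80, "Extractability": 31}  # strong L1, weak L3
page_b = {"Access": 35, "Understanding": 78, "Extractability": 88}  # weak L1, strong L3

print(fix_path_start(page_a))  # Extractability
print(fix_path_start(page_b))  # Access
```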

The four tier names and color tokens are part of the published methodology. White-label resellers brand the interface and the deliverables — they don't override the methodology that gives the deliverables credibility. AI Authority means the same thing whether delivered by AIVZ directly or by an agency partner reselling AIVZ.

The full Stack methodology and tier dynamics
Layer-level visibility

Three layer scores, in dependency order.

The composite score is the executive view. The three-layer Stack breakdown is the operational view — what's actually working, what's failing, where to start.

Layer 01 · 0–100

Access

What it measures: Whether AI bots can physically reach your content. Crawl access, feed and discovery, bot blocking.

  • Crawl Access: 35%
  • Feed & Discovery: 25%
  • Bot Blocking: 40%
Failure mode: AI systems cannot access your content at all. Nothing else matters until this is fixed.
Layer 02 · 0–100

Understanding

What it measures: Whether AI systems can parse the structure and meaning of crawled content. Schema, structure, entity, schema accuracy, author E-E-A-T.

  • Schema: 25%
  • Structure: 20%
  • Entity: 20%
  • Schema Accuracy: 15%
  • Author E-E-A-T: 20%
Failure mode: AI can reach your content but can't reliably interpret it. Read but not understood with citation-grade confidence.
Layer 03 · 0–100

Extractability

What it measures: Whether AI can isolate clean, citation-ready answer blocks. FAQ, summary, content richness, content quality, speakable.

  • FAQ: 20%
  • Summary: 20%
  • Content Richness: 20%
  • Content Quality: 20%
  • Speakable: 20%
Failure mode: AI reads you, processes the meaning, then quotes a competitor with cleaner answer-block formatting.
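The sub-score weights listed for each layer can be read as a plain weighted sum. A sketch, using only the weights published on this page (the per-factor scoring formulas behind each sub-score are confidential):

```python
# Layer-score aggregation using the published sub-score weights. Only the
# weights come from this page; the aggregation shape is an illustration.

LAYER_WEIGHTS = {
    "Access":         {"Crawl Access": 0.35, "Feed & Discovery": 0.25, "Bot Blocking": 0.40},
    "Understanding":  {"Schema": 0.25, "Structure": 0.20, "Entity": 0.20,
                       "Schema Accuracy": 0.15, "Author E-E-A-T": 0.20},
    "Extractability": {"FAQ": 0.20, "Summary": 0.20, "Content Richness": 0.20,
                       "Content Quality": 0.20, "Speakable": 0.20},
}

def layer_score(layer: str, sub_scores: dict[str, float]) -> float:
    """Weighted sum of 0-100 sub-scores for one Stack layer."""
    weights = LAYER_WEIGHTS[layer]
    return sum(weights[name] * sub_scores[name] for name in weights)

access = layer_score("Access", {"Crawl Access": 90, "Feed & Discovery": 60, "Bot Blocking": 100})
print(access)  # 86.5 = 0.35*90 + 0.25*60 + 0.40*100
```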
What you get in the dashboard
  • Composite AI Visibility Score (0–100) with tier label
  • Three layer scores (Access, Understanding, Extractability) each 0–100
  • All sub-scores within each layer
  • Per-factor scoring detail (Pro and above for full detail; Free shows top-impact factors)
  • Confidence label per factor
Factor-level depth

93 factors. 9 categories. 4 confidence labels.

The composite score and Stack layer scores aggregate from individual factor measurements. AIVZ scores every URL across all 93 factors — no factor skipped, no measurement abstracted.

93 Factors scored · 9 Categories · 11 Citation Core · 4 Confidence labels
What you see at the factor level
  • Score (0–100) for that specific factor
  • Confidence label (Established / Strongly Inferred / Indirect-Correlated / Emerging-Experimental)
  • Stack layer mapping — which of the three layers the factor lives in
  • Score-impact estimate — how much fixing this factor would move the composite
  • Recommendation with implementation specifics where AIVZ has an adapter
  • Direct-execute button where the adapter is Live (one-click fixes)
Factor surfacing by tier
Coverage                  Free      Pro     Agency   Ent.
All 93 factors scored     ✓         ✓       ✓        ✓
Per-factor detail         Subset    Full    Full     Full
Confidence labels         ✓         ✓       ✓        ✓
Recommendations           Subset    Full    Full     Full
Score-impact estimates    ✓         ✓       ✓        ✓

The conceptual framework is published. The exact mathematical weights at the factor level, the per-factor scoring formulas, and the analyzer implementation logic stay confidential — that's the engineering moat.

6 AI engines. 6 scores.

The platforms don't agree. AIVZ surfaces the differences.

A page can score 78 for Google AI Overviews and 45 for ChatGPT — because each platform values different structural signals. AIVZ scores readiness separately for the six major AI engines, surfacing exactly which platforms you're ready for and which need work.
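One way to picture this: the same underlying factor scores, read through different per-platform weight vectors. The platform names come from this page, but the weights and factor names below are invented purely to show how the divergence arises:

```python
# Illustrative only: per-platform readiness as different weight vectors over
# the same factor scores. Weights and factor names are hypothetical, not
# AIVZ's actual per-platform models.

FACTORS = {"bot_access": 40.0, "schema": 95.0, "content_quality": 50.0}

PLATFORM_WEIGHTS = {  # hypothetical weight vectors
    "Google AI Overviews": {"bot_access": 0.2, "schema": 0.6, "content_quality": 0.2},
    "ChatGPT":             {"bot_access": 0.5, "schema": 0.1, "content_quality": 0.4},
}

def platform_readiness(platform: str, factors: dict[str, float]) -> float:
    weights = PLATFORM_WEIGHTS[platform]
    return sum(w * factors[f] for f, w in weights.items())

# Strong schema but weak bot access: high AIO readiness, low ChatGPT readiness.
for platform in PLATFORM_WEIGHTS:
    print(platform, round(platform_readiness(platform, FACTORS)))
```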

ChatGPT
45
Highest weight on bot access (GPTBot specifically) and content quality.
~8% overlap with Google's top-10 organic.
Google AI Overviews
78
Highest weight on schema markup and structured data.
~76% overlap with top-10 organic results.
Perplexity
62
Highest weight on content freshness and bot access (PerplexityBot).
Strong recency bias; surface-visible citations.
Gemini
71
Schema + E-E-A-T weighted. Google ecosystem alignment.
Strong correlation with AIO readiness.
Microsoft Copilot
67
Schema markup heavily weighted. Bing index aligned.
Bundled across Microsoft 365 surfaces.
Voice Assistants
38
Speakable schema markup is the load-bearing signal.
Spoken-language + Speakable JSON-LD presence.
What you get per platform
  • 0–100 readiness score per platform
  • Per-platform top-issue surfacing — which factors are bottlenecking citation readiness for that engine
  • Per-platform fix queue when fixes have platform-specific optimal implementations
  • Cross-platform comparison view — see all six scores at once with the gap analysis
The off-site dimension

Citation isn't just about your page.

The 93-factor taxonomy covers on-page signals comprehensively. But AI citation behavior also depends on signals from outside your site — endorsements, mentions, reviews, professional credentials, podcast appearances, knowledge graph presence. AIVZ measures these through Authority Rank, a graph-based scoring engine adapted from PageRank.

Four Authority Rank components
BaseScore

Intrinsic credentials — experience, education, content output, business clarity.

EndorsementRank

Graph-based propagation — who endorses you, and how authoritative are they?

EngagementRank

Audience behavior — retention, engagement, return rate, conversion.

TrustRank

Verification — credentials, testimonials, case studies, identity confirmation.
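Since EndorsementRank is described as graph-based propagation adapted from PageRank, its core mechanic can be sketched as power iteration over an endorsement graph — a toy illustration only; AIVZ's damping factor, edge weighting, and graph construction are not published:

```python
# Hedged sketch of PageRank-style endorsement propagation. The damping value
# and uniform edge splitting are textbook PageRank defaults, not AIVZ's model.

def endorsement_rank(edges: dict[str, list[str]], damping: float = 0.85,
                     iterations: int = 50) -> dict[str, float]:
    """edges[a] = entities that a endorses. Rank flows endorser -> endorsee."""
    nodes = set(edges) | {e for targets in edges.values() for e in targets}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        # rank held by nodes with no outgoing endorsements is spread uniformly
        dangling = sum(rank[n] for n in nodes if not edges.get(n))
        for endorser, targets in edges.items():
            if targets:
                share = damping * rank[endorser] / len(targets)
                for t in targets:
                    new[t] += share
        for n in nodes:
            new[n] += damping * dangling / len(nodes)
        rank = new
    return rank

ranks = endorsement_rank({"industry_site": ["you"], "peer": ["you", "competitor"]})
print(ranks["you"] > ranks["competitor"])  # True: two endorsements beat one
```

The key property, mirrored from PageRank: an endorsement from an entity that is itself heavily endorsed is worth more than one from an obscure source.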

34 authority signals across two tiers
20 Organizational

Backlinks, brand mentions, knowledge graph presence, reviews across 5 platforms, directory consistency, certifications, community size, app store ratings.

14 Person-Level

Podcast appearances, published books, GitHub presence, conference speaking, LinkedIn authority, course instruction, Amazon author page, coaching directories.

Authority Rank is available at Agency tier and above. The off-site signal aggregation requires more API quota, more complex graph-traversal infrastructure, and longer-running batch jobs than the on-page measurement — making it appropriate for higher-tier engagements where the off-site work is part of the scope.

Tracking when AI actually cites

The score is the leading indicator. Citations are the result.

The AI Visibility Score predicts citation likelihood. Citation Event Monitoring measures whether AI systems are actually citing your content over time. The two together close the measurement loop: predict, observe, refine.

Citation Start

An AI platform begins citing your content for a query it didn't previously cite.

Citation Stop

An AI platform stops citing your content for a query it previously cited.

Share-of-Voice

What fraction of citation slots in a topic area your content holds vs. competitors.

Per-Platform Patterns

How citation behavior differs across the 6 AI engines for the same content.
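Conceptually, Start/Stop events fall out of diffing the set of observed citations between monitoring runs, and share-of-voice is a simple ratio. A sketch with illustrative field names (the actual monitoring pipeline and data model are not published):

```python
# Illustrative derivation of citation events and share-of-voice. The
# (platform, query) pair representation is an assumption for this sketch.

def citation_events(previous: set[tuple[str, str]], current: set[tuple[str, str]]):
    """Each citation is a (platform, query) pair where your content was cited."""
    return {
        "citation_start": sorted(current - previous),   # newly cited
        "citation_stop":  sorted(previous - current),   # dropped citations
    }

def share_of_voice(your_slots: int, total_slots: int) -> float:
    """Fraction of citation slots in a topic area held by your content."""
    return your_slots / total_slots if total_slots else 0.0

events = citation_events(
    previous={("perplexity", "best crm"), ("chatgpt", "crm pricing")},
    current={("perplexity", "best crm"), ("gemini", "best crm")},
)
print(events["citation_start"])  # [('gemini', 'best crm')]
print(events["citation_stop"])   # [('chatgpt', 'crm pricing')]
```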

AIVZ does not claim to predict whether any specific user query will result in your content being cited. AI citation behavior is a function of query specificity, competing content, platform-specific scoring, and stochastic LLM behavior. AIVZ measures structural readiness (the AI Visibility Score) and observed citation events — but explicit citation prediction at the per-query level is not a product claim.

Time-series measurement

Track changes. Catch regressions.

A snapshot score is useful. A time series is operational. AIVZ retains scan history per domain, surfaces score change over time, and alerts on regressions — score drops driven by site changes, AI platform behavior shifts, or content edits that broke previously-working factors.

What's tracked
  • Per-domain score history (composite + per-layer + per-platform)
  • Per-page score history
  • Per-factor score history (Pro and above)
  • Citation event timeline (Pro and above)
Regression alert payload
  • Which score dropped (composite, layer, platform, or factor)
  • The magnitude of the drop
  • Likely-cause attribution (which factors changed; on-page vs. off-page)
  • The fix path to restore the prior score

Alerts deliver via Slack, Microsoft Teams, email, or webhook — depending on integration configuration.

Retention by tier
Tier          Score history retention
Free          7 days
Pro           90 days
Agency        1 year
Enterprise    Unlimited
The measurement discipline

Deterministic over LLM. Cheap, fast, explainable.

AIVZ's scanner engine runs every factor through a measurement pipeline that follows eight design principles. Practitioners and engineering reviewers should understand not just what AIVZ measures but how the measurements are produced.

01

Deterministic over LLM

Regex pattern matching, DOM parsing, schema validation, and Flesch readability calculations instead of LLM judgment. Deterministic scoring is reproducible; LLM scoring isn't.
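As one concrete example of a deterministic, reproducible measurement in this spirit, the Flesch Reading Ease formula can be computed with nothing but regex tokenization — the syllable heuristic below is naive and purely illustrative, not AIVZ's implementation:

```python
# Deterministic readability measurement: Flesch Reading Ease via regex.
# The same input always yields the same score, unlike an LLM judgment.
# The vowel-run syllable counter is a rough heuristic for illustration.
import re

def count_syllables(word: str) -> int:
    # naive heuristic: runs of vowels approximate syllables
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z]+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

score = flesch_reading_ease("AI bots read pages. Clean text helps them cite you.")
print(round(score, 1))  # short words, short sentences -> a high (easy) score
```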

02

Cheap, fast, explainable

Every factor's measurement must be inexpensive, fast (sub-second per URL for on-page factors), and human-auditable.

03

Fix from the bottom up

Recommendations are stack-ordered. Layer 1 issues surface before Layer 2 issues; Layer 2 before Layer 3.

04

Page-level first, site-level second

Single-page signals are scored before site-wide aggregations. Page-level scores are always available even when site-level analysis is still running.

05

No hypothetical APIs

Factor scoring uses only publicly available, stable APIs. No factors depend on private or rumored AI platform endpoints.

06

Scanner vs. Authority Graph separation

On-page factors are scored by the scanner engine; off-site authority signals are scored by Authority Rank. Decoupled by design.

07

Confidence labels mandatory

Every factor carries one of the four confidence labels. No exceptions.

08

Validate before publishing

New factors are tested against real citation data before being added to the taxonomy or surfaced as recommendations.

Phase weighting

Within the composite: Deterministic factors (Phases 1–2) carry weight 0.82. LLM-augmented and experimental factors (Phases 4–5) carry weight 0.18. Auto-normalization applies when LLM/experimental scores are absent. Deterministic measurements drive the composite; LLM and experimental measurements supplement without dominating.

What's available where

Measurement scope by tier.

Capability                                   Free                 Pro          Agency         Enterprise
AI Visibility Score (composite + 4 tiers)    ✓                    ✓            ✓              ✓
Three-layer Stack breakdown                  ✓                    ✓            ✓              ✓
93-factor scoring (all factors run)          ✓                    ✓            ✓              ✓
Per-factor detail surfaced                   High-impact subset   Full         Full           Full
Confidence labels per factor                 ✓                    ✓            ✓              ✓
Per-platform readiness (6 platforms)         ✓                    ✓            ✓              ✓
Authority Rank                               —                    —            ✓              ✓
Citation event monitoring                    —                    ✓            ✓              ✓
Score history retention                      7 days               90 days      1 year         Unlimited
Regression detection + alerts                —                    Email        All channels   All channels
Score-impact estimates per fix               ✓                    ✓            ✓              ✓
Full tier pricing
Ready when you are

See your AI Visibility Score in 60 seconds.

Composite score, tier classification, three-layer breakdown, top fix recommendations — all in under a minute. No signup. No credit card.

Enter your domain
Free · No signup · Results in 60 seconds

Or see how the recommendations work.

Product · AI Visibility Score