AEO Methodology

How we measure AI visibility.

Answer Engine Optimization is the discipline of optimizing for AI answer engines — ChatGPT, Google AI Overviews, Perplexity, Gemini, Microsoft Copilot, voice assistants. AIVZ measures it across 93 factors organized into a three-layer dependency stack. This page is the methodology canon.

93 AEO factors · 34 Authority signals · 11 Citation Core factors · 6 AI platforms scored · One AI Visibility Score
The Category

AEO is not "SEO for AI."

Search Engine Optimization answers the question "how do I rank for this query?" The output is a ranked list of links. The user clicks. The user reads.

Answer Engine Optimization answers a different question: "how do I become the source AI cites when generating an answer?" The output is not a list. It's a synthesized response. The user may never click through — but the cited source gets the brand mention, the authority signal, and the implicit recommendation.

These are different problems. They share infrastructure — both run on crawlable, structured, well-authored web content — but the signals AI systems weight when selecting citation sources are not the same signals search engines weight when ranking results.

Want the deeper breakdown? AEO vs SEO
What SEO optimizes for | What AEO optimizes for
Keyword rankings | Citation selection
Click-through rate | Answer extraction
Backlink count | Entity recognition
Page authority | Schema clarity
Search intent matching | Answer structure
Crawl budget | AI bot access
Featured snippets | Generated answer attribution
Why Now

The search bar is not the only entry point anymore.

In 2023, "search" meant Google. By 2026, the question "where do people get information?" has multiple right answers. ChatGPT alone handles hundreds of millions of weekly conversations. Google AI Overviews appear above the traditional results for an increasing share of queries. Perplexity has built a search-replacement product. Microsoft Copilot is embedded in Office, Bing, and Edge. Voice assistants — Alexa, Google Assistant, Siri — return answers, not links.

The behavioral pattern has shifted. Users ask questions in natural language. AI systems generate answers. The answers cite sources. Visibility now means being the cited source.

This is the surface AEO addresses. It is not a replacement for SEO — most of your traffic will still come from search results for a long time — but it is a new optimization surface that operates on partially overlapping but distinct signals.

From: Three keywords typed into a search bar, ten ranked links, the user clicks.

To: A natural-language question, a synthesized paragraph, two to four cited sources.

Visibility metric: Citation rate · AI mention rate · brand surface in generated answers.

The Framework

Three layers. Each depends on the one below.

The AI Visibility Stack organizes all 93 AEO factors into three dependency layers. You can't fix Extractability if Understanding is broken. You can't fix Understanding if Access is broken.

Layer 3 — Extractability

Can AI cleanly isolate and reuse the answer?

Failure mode: AI reads you but quotes someone else.

Front-loaded answers · concise blocks · question-based headings · heading hierarchy · definition density · FAQ structure · stats with sources · bullet lists · HTML tables · citation formatting · speakable schema
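As a minimal illustration (our own sketch, not markup drawn from any scored site), an extractable answer block pairs a question-based heading with a front-loaded, self-contained definition:

```html
<!-- Hypothetical example: question heading + front-loaded, quotable answer -->
<h2>What is Answer Engine Optimization?</h2>
<p>Answer Engine Optimization (AEO) is the practice of structuring content
   so AI answer engines can cite it when generating answers.</p>
<ul>
  <li>Lead with the definition; elaborate afterward.</li>
  <li>Keep each answer block self-contained and quotable.</li>
</ul>
```

The heading phrases the user's question; the first sentence is the liftable answer.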

↑ depends on
Layer 2 — Understanding

Can AI parse the structure and meaning?

Failure mode: AI reaches you but can't reliably interpret you.

JSON-LD structured data · Organization/Person/Article schema · FAQPage schema · sameAs linking · schema graph completeness · entity density · named author presence · author bio & credentials · freshness
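A minimal sketch of how several of these signals (Article schema, named author, sameAs linking, freshness) combine in a single JSON-LD block; every name, date, and URL below is a placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "Example headline",
  "datePublished": "2026-01-15",
  "author": {
    "@type": "Person",
    "name": "Jane Doe",
    "sameAs": ["https://example.com/authors/jane-doe"]
  },
  "publisher": {
    "@type": "Organization",
    "name": "Example Co",
    "sameAs": ["https://example.com/about"]
  }
}
```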

↑ depends on
Layer 1 — Access

Can AI crawlers physically reach the content?

Failure mode: AI can't reach you at all.

robots.txt AI bot permissions · WAF/CDN bot blocking · SSR/pre-rendered HTML · TTFB · XML sitemap · canonical tags · RSS/Atom feeds · llms.txt
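For the robots.txt factor, an explicit-allow fragment might look like the following sketch. GPTBot, PerplexityBot, and Google-Extended are documented crawler tokens, but verify each platform's current bot names before relying on this:

```text
# Illustrative only: explicitly permit major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /
```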

Most AEO tools surface flat checklists. The Stack tells you what to fix first, and why fixing higher layers before lower ones is wasted work.

Read the AI Visibility Stack
The Taxonomy

93 measurable signals. Nine categories.

No competitor has published or operationalized anything comparable. Most surface 30–60 factors, often without dependency models or confidence calibration.

CATEGORY 01

Crawlability & Access

Can AI bots reach your content? robots.txt, WAF blocking, JS rendering, page speed, sitemaps, llms.txt.

CATEGORY 02

Structured Data & Machine Readability

Can machines understand your content? JSON-LD, Schema.org types, sameAs linking, schema completeness.

CATEGORY 03

Content Structure & Extractability

Can AI isolate clean answers? Front-loaded answers, FAQ structure, headings, lists, tables, citations.

CATEGORY 04

Entity & Knowledge Graph Signals

Does AI recognize your entities? Density, KG presence, Wikidata, disambiguation, cross-page consistency.

CATEGORY 05

E-E-A-T & Trust Signals

Does AI trust your source? Author credentials, freshness, original research, YMYL handling, factual accuracy.

CATEGORY 06

Off-Site Authority

Do external sources validate you? Authority signals across organizational and personal levels — the AuthorityGraph surface.

CATEGORY 07

Semantic Matching

Does your content match how people ask? Conversational alignment, topical depth, intent matching.

CATEGORY 08

Platform-Specific Signals

How ready are you per platform? ChatGPT, Google AIO, Perplexity, Gemini, Copilot, voice.

CATEGORY 09

Observability & Diagnostics

Can you track and improve over time? AI crawler analytics, citation simulation, score history.

Read the 93-factor taxonomy
Disproportionate Impact

Eleven factors do most of the work.

These are the eleven factors most directly correlated with citation outcomes in observed AI generation — the ones where presence or absence makes the largest difference.

01 · JSON-LD Structured Data
02 · Front-Loaded Answers
03 · Concise Answer Blocks
04 · Heading Hierarchy
05 · Definition & Summary Density
06 · Statistics with Sources
07 · Bullet & Numbered Lists
08 · HTML Tables for Comparisons
09 · Citation Formatting Quality
10 · Entity Density
11 · Named Author Presence
Read the Citation Core 11
Honesty About What We Know

Not every factor is proven equally.

AEO is a young discipline. Every factor in the taxonomy carries a confidence label. We don't bury uncertainty in marketing copy.

Label | Meaning
Established | Well-supported by web standards, platform documentation, or broadly accepted technical practice. JSON-LD, robots.txt, schema.org types fall here.
Strongly Inferred | Not always formally documented, but strongly supported by research or repeated industry observation. Front-loaded answers, concise answer blocks, citation patterns.
Indirect / Correlated | Likely influences AI visibility indirectly through search prominence, authority, or trust. Off-site authority signals, social presence, brand mention frequency.
Emerging / Experimental | New or evolving factors not yet stable or universally adopted. Speakable schema treatment, IndexNow, platform-specific freshness weighting, NavBoost-class signals.

When the methodology evolves — and it will — the Emerging factors are where the change lands first. We update factor confidence labels in the public changelog as evidence accumulates.

Public changelog
For the SEO Crowd

What carries over. What doesn't.

If you know SEO, you have most of what you need. The infrastructure overlaps substantially — but the signals that get weighted and the outcomes that count are different enough that pure SEO playbooks underperform AEO over time.

Carries over directly | Partially carries over | Doesn't carry over
Crawlability fundamentals | Keyword research → entity research | Keyword density
Site speed and Core Web Vitals | Backlinks → off-site authority | SERP-rank tracking
Mobile-friendliness | Featured snippets → answer blocks | Click-through rate optimization
HTTPS and security | Page authority → entity authority | Title tag optimization for CTR
Internal linking | Schema markup (expanded scope) | Meta description for SERPs
XML sitemap basics | E-A-T → E-E-A-T → AI trust | URL slug keyword stuffing
Indexability | Topic clusters → semantic matching | Bounce rate signals
Full breakdown — AEO vs SEO
How We Operate

We publish what we measure.

The 93-factor taxonomy, the AI Visibility Stack, the Citation Core 11, the confidence labels — these are documented in the open. The methodology is the canon, and the canon is public.

Open methodology

Every factor we measure is documented. Every confidence label is calibrated against evidence. Every scoring layer has a published rationale.

Public changelog

Methodology changes are versioned and announced. When confidence labels move, when factors are added or deprecated, the changelog records it.


Inspectable in product

Every score in AIVZ is paired with the factors that produced it. See why a page scored what it scored, click through to the underlying factor explanation, verify the methodology against the result.

What We Measure — And What We Don't

Eight principles that shape every score.

PRINCIPLE 01

Deterministic over LLM

When we can score a factor with regex, DOM parsing, schema validation, or readability rules, we do. LLM-judged scoring is reserved for semantic questions, weighted lower in composite outcomes.
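A toy sketch of what a deterministic factor check looks like in practice — a hypothetical `has_json_ld` helper (not AIVZ's actual implementation) that detects a JSON-LD block with stdlib DOM parsing alone, no LLM involved:

```python
# Hypothetical deterministic factor check: does a page expose at least
# one <script type="application/ld+json"> block?
from html.parser import HTMLParser

class JsonLdDetector(HTMLParser):
    """Flags any <script type="application/ld+json"> element."""
    def __init__(self):
        super().__init__()
        self.found = False

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self.found = True

def has_json_ld(html: str) -> bool:
    detector = JsonLdDetector()
    detector.feed(html)
    return detector.found

page = '<html><head><script type="application/ld+json">{"@type":"Article"}</script></head></html>'
print(has_json_ld(page))  # True
print(has_json_ld("<html><body>No schema here</body></html>"))  # False
```

A check like this is cheap, reproducible, and produces the same score on every run — which is exactly why it outranks LLM judgment in the composite.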

PRINCIPLE 02

Cheap, fast, explainable

Every factor is inexpensive to compute, fast at scale, and produces a user-facing explanation. We don't measure things we can't explain.

PRINCIPLE 03

Fix from the bottom up

Recommendations are ordered Layer 1 → Layer 2 → Layer 3. We don't tell you to add FAQ schema if your robots.txt is blocking GPTBot.
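As a toy illustration of this ordering rule (the finding objects below are invented, not AIVZ's actual data model):

```python
# Hypothetical findings; "layer" follows the AI Visibility Stack numbering.
findings = [
    {"factor": "FAQPage schema missing", "layer": 2},
    {"factor": "robots.txt blocks GPTBot", "layer": 1},
    {"factor": "Answers not front-loaded", "layer": 3},
]

# Bottom-up: Access (Layer 1) issues surface before Understanding
# (Layer 2) and Extractability (Layer 3) issues.
fix_order = sorted(findings, key=lambda f: f["layer"])
for finding in fix_order:
    print(f'Layer {finding["layer"]}: {finding["factor"]}')
```

The robots.txt block sorts first even though it was reported second — fixing schema on a page GPTBot can't fetch accomplishes nothing.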

PRINCIPLE 04

Page-level first, site-level second

Single-page signals before site-wide crawl signals. Most citation outcomes are page-level; site-level signals are aggregations of page-level work.

PRINCIPLE 05

No hypothetical APIs

We only plan around publicly available, stable APIs. If a platform doesn't expose data we'd need to verify a factor, we don't claim to measure it.

PRINCIPLE 06

Scanner vs AuthorityGraph

On-page signals live in the scanner. Off-site authority lives in AuthorityGraph. Different methodologies, different infrastructure, different surfaces.

PRINCIPLE 07

Confidence labels are mandatory

Every factor carries one of four confidence labels. We don't ship factors without confidence calibration.

PRINCIPLE 08

Validate before publishing

Every factor we ship has been tested against real citation outcomes from real AI platforms. We don't ship measurement methodology that hasn't been correlated against outcome data.

Read Deeper

Four spoke pages cover each piece in depth.

Methodology version 1.0 · Last updated April 16, 2026 · See changelog
Ready When You Are

See how AI sees your site.

Run a free scan. Get your AI Visibility Score across 6 platforms. See the top 3 blockers and the prioritized fix path — in under 60 seconds.

Enter your domain
Free · No signup · Results in 60 seconds