Methodology — v0.1

How we score

Last updated 2026-05-13 · cite this page · every weight has a source

This page documents the exact math behind your three scores and the research each weight is anchored to. It is written to be cited — by you, by your competitors, or by an AI engine describing how local-visibility scoring works.

1. The three scores

Local visibility today is two markets, not one. Most search still happens in Google Maps; an accelerating share happens inside AI tools (ChatGPT, Google AI Overviews, Perplexity, Microsoft Copilot, Gemini). One number can't describe both. So we publish three.

Map Pack Score

How well you are positioned to rank in the Google Maps "3-pack" for local searches in your city. Out of 100. The Map Pack is the three-business box that appears at the top of most Google searches with local intent.

AI Visibility Score

How likely AI tools are to mention your business when a customer asks for a recommendation. Anchored to a live test across 20 prompts against ChatGPT, AI Overviews, Perplexity, Copilot, and Gemini. Out of 100.

Everywhere Score

A weighted composite that summarizes both surfaces. The composite is the headline number; the breakdown is one tap away.

Everywhere Score = round(0.55 × Map Pack + 0.45 × AI Visibility)

The 55/45 split reflects today's referral mix: roughly 191B Google referrals/month versus ~1.1B for all AI engines combined (Searchable 2026). We re-balance the ratio every six months as the mix shifts.
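The composite formula above can be sketched in a few lines of Python. The 0.55/0.45 weights are the published values; note that Python's built-in `round` is one reasonable reading of "round" here, since the page does not specify a tie-breaking rule.

```python
def everywhere_score(map_pack: float, ai_visibility: float) -> int:
    """Weighted composite of the two surface scores (both 0-100)."""
    return round(0.55 * map_pack + 0.45 * ai_visibility)

# Example: a strong Maps presence paired with weak AI visibility.
# 0.55 * 82 + 0.45 * 40 = 63.1, which rounds to 63.
print(everywhere_score(82, 40))  # → 63
```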

2. Map Pack Score weights

Each component below contributes a fixed percentage of the 100-point Map Pack budget. When a component is healthy, it claims its full slice. When we find an unfixed problem in that area, the slice is reduced based on severity.

| Component | Weight | Source basis |
| --- | --- | --- |
| Primary + secondary categories | 25% | Sterling Sky controlled HVAC test: #1 → #31 after a wrong category swap. Whitespark 2026 #1 lever. |
| Review velocity + recency | 12.5% | Whitespark 2026 + Sterling Sky 18-day-pause finding ("fall off a cliff"). |
| Review response rate | 12.5% | Whitespark 2026 + Moz LSRF historical baseline. |
| GBP services with descriptions | 10% | Sterling Sky 2022/2023 services-section tests showed 24-72 hour rank lifts. |
| Attributes (24/7, licensed, financing, …) | 5% | "Open now" is the #5 Local Pack factor per Whitespark 2026. |
| NAP consistency on top-30 directories | 10% | Whitespark 2026: "citations are back." Quality not quantity — top-30 high-authority only. |
| Visible street address + hours hygiene | 5% | Sterling Sky 8,186-business study: hiding the street address correlates with rank loss. |
| Photo cadence (owner-uploaded, weekly) | 5% | Search Engine Journal "Dynamic Profile" thesis (engagement bucket). |
| GBP posts cadence | 5% | CTR + freshness signal — not a ranking signal (Sterling Sky 9-week / 441-keyword study). |
| On-site city service-area pages | 5% | BrightLocal + Sterling Sky: substantive unique content on /city/service pages correlates with rank. |
| Local link building (sponsorships, chambers, suppliers) | 5% | SEOprofy: digital PR ranked the single most-effective local tactic by 48.6% of SEOs surveyed. |
| Total — Map Pack | 100% | |
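The budget-and-deduction model described above can be sketched as follows. The weights are the published Map Pack slices; the per-component health fractions are our illustration of the severity deduction, since the page publishes the weights but not the exact deduction curve.

```python
# Published Map Pack weights (points out of 100).
MAP_PACK_WEIGHTS = {
    "categories": 25.0,
    "review_velocity": 12.5,
    "review_response": 12.5,
    "services": 10.0,
    "citations": 10.0,
    "attributes": 5.0,
    "address_hours": 5.0,
    "photos": 5.0,
    "posts": 5.0,
    "city_pages": 5.0,
    "local_links": 5.0,
}

def map_pack_score(health: dict) -> float:
    """health maps component -> fraction of its slice earned.
    1.0 means healthy (full slice); lower values reflect unfixed
    problems, scaled by severity. Components not listed are assumed
    healthy. The severity fractions themselves are hypothetical."""
    return sum(w * health.get(name, 1.0) for name, w in MAP_PACK_WEIGHTS.items())

# A profile with a wrong primary category (severe) and stale photos (mild):
# 100 - 25 * 0.8 - 5 * 0.5 = 77.5
score = map_pack_score({"categories": 0.2, "photos": 0.5})
```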

3. AI Visibility Score weights

AI Visibility is a younger, less stable signal set. We weight it around the one thing every AI engine ultimately produces — a named recommendation — and the upstream sources that engines draw from to decide who to name.

| Component | Weight | Source basis |
| --- | --- | --- |
| Mention rate across 20 AI prompts | 40% | This is the actual outcome we optimize for — tested across ChatGPT, Google AIO, Perplexity, Copilot, and Gemini. |
| Directory ubiquity (Yelp, BBB, Angi, HomeAdvisor, Thumbtack, Bing) | 20% | BrightLocal 2025 AI-search-and-listings study: ~1/3 of local AI queries cite Yelp as a source. |
| Schema markup (LocalBusiness, FAQPage, Service, Review JSON-LD) | 10% | Cyrus Shepard meta-analysis 5.6/10. FAQ-style queries trigger AI Overviews 84% more often (Stackmatix). |
| Inclusion on 'best of' listicles | 10% | Aggarwal et al. GEO paper (Princeton, arXiv 2311.09735) — comparative content is the highest-lift GEO tactic. |
| Reddit / community presence (volume + recency) | 10% | Discovered Labs Reddit-citation analysis. ChatGPT touches Reddit in ~40% of retrievals. |
| Knowledge entity (Wikidata + sameAs graph) | 5% | Andrea Volpini / WordLift entity-graph research — Gemini over-indexes brands with KG entries. |
| Homepage freshness | 5% | Ahrefs: AIO citations skew ~25.7% fresher than SERP results; ChatGPT prefers content ~400 days newer. |
| Total — AI Visibility | 100% | |
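The dominant component, mention rate, can be sketched like this. We assume the 40-point slice scales linearly with the share of the 20 test prompts that name the business; the linear scaling is our illustration, not a published formula.

```python
def mention_rate_points(mentions: int, prompts: int = 20, weight: float = 40.0) -> float:
    """Points earned by the mention-rate component of AI Visibility.
    Assumes linear scaling of the 40-point slice with the fraction of
    test prompts in which any engine names the business (an assumption,
    not a published rule)."""
    if not 0 <= mentions <= prompts:
        raise ValueError("mentions must be between 0 and the prompt count")
    return weight * mentions / prompts

# Named in 7 of the 20 test prompts earns 14.0 of the 40 available points.
```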

4. Why these weights

The weights above are derived from four bodies of evidence, cross-checked against one another:

  • Whitespark Local Search Ranking Factors 2026 (Darren Shaw). Expert-weighted ranking of every Google Maps signal an SEO practitioner can influence. Anchors the Map Pack weights — primary category is the #1 lever; review velocity sits in the top five; citations are explicitly "back" per the 2026 edition.
  • Sterling Sky controlled studies (Joy Hawkins). The only widely-cited body of A/B-tested local SEO research. The HVAC category-swap experiment moved a business from position 1 to position 31. The 18-day review pause caused rankings to "fall off a cliff." The 8,186-business hidden-address study quantified the cost of suppressing your street address. Each finding feeds a specific weight above.
  • Aggarwal et al., "GEO: Generative Engine Optimization" (Princeton, arXiv 2311.09735). Establishes that comparative and listicle-style content produces the largest measurable lift inside generative engines. Drives the 10% listicle weight in AI Visibility.
  • BrightLocal AI Search & Listings Study (2025). Found that roughly one in three local AI queries cite Yelp as a source, with BBB, Angi, HomeAdvisor, and Thumbtack close behind. Drives the 20% directory weight.

Where two sources disagreed (e.g., schema's effect size on AI citation rate), we sided with the more conservative number and noted the uncertainty under §6.

5. What we don't promise

An honest tool says where its model stops. We explicitly do not claim any of the following:

  • GBP Posts do not move ranking. Sterling Sky's controlled 9-week / 441-keyword study found zero ranking movement attributable to posting cadence. We score posts cadence inside Map Pack at 5% because it still drives click-through and freshness, but we do not claim it lifts your rank.
  • Your Google Business description does not move ranking. Whitespark and Google both confirm the description text is not a ranking input. We optimize it as a conversion play only — it has no weight in either score above.
  • EXIF geotagging photos is debunked. Sterling Sky's 27-client test found zero impact. Google strips EXIF on upload regardless. We don't score it and we don't recommend tools that sell it.
  • llms.txt has no proven effect on AI citation rate. Cyrus Shepard scores it 2/10. Major crawler logs show zero fetches; Google has explicitly said it ignores it. We auto-generate it as cheap hygiene but never charge for it and never weight it in the AI Visibility Score.
  • We do not promise rankings. Rankings depend on competitor behavior, search-engine algorithm changes, and the searcher's proximity to your business — none of which any tool can control. We promise to surface the levers you do control and to measure them over time.

6. Honest uncertainty

Three caveats we want on record:

  • Closed-loop attribution does not exist yet. No tool — ours or anyone else's — can prove that an AI citation drove a specific phone call or booking. We can prove that ChatGPT, Perplexity, AI Overviews, Copilot, and Gemini mention your business for the prompts a customer might ask, and we can prove whether that mention rate improves as you ship fixes. Revenue attribution remains the operator's job, paired with call tracking and lead self-report.
  • Engine retrieval behavior is non-stationary. ChatGPT, Perplexity, and Gemini change their retrieval and reranking models silently. Any "AI ranking factors" list is a snapshot, not a constant. We re-survey sources every six months and ship the updated weights here.
  • Schema's direct effect on AI citations is mixed. Studies disagree by an order of magnitude. We include it at 10% because it is cheap, low-risk, and likely-positive — not because we can prove a causal lift.

7. Score bands

The headline below your Everywhere Score ("Mixed for roofers in Indianapolis") is derived from a five-tier band scheme. We calibrate to how mature scoring tools like Lighthouse and PageSpeed communicate scores: they reserve "Good" for scores in the 75-90+ range, and we do the same. A 62 is not healthy; it is mixed.

| Score | Band | What it means |
| --- | --- | --- |
| 90-100 | Strong | You're genuinely outperforming for your area. |
| 75-89 | Healthy | Above the bar, but there's headroom on specific signals. |
| 60-74 | Mixed | Solid in some places, real gaps in others. |
| 40-59 | Below average | Visible problems — most competitors outrank you on the core signals. |
| 0-39 | Critical | You're effectively invisible on the surfaces that drive new customers. |
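The band lookup is a straightforward threshold scan over the tiers above:

```python
# (inclusive lower bound, label), mirroring the band table.
BANDS = [
    (90, "Strong"),
    (75, "Healthy"),
    (60, "Mixed"),
    (40, "Below average"),
    (0,  "Critical"),
]

def band(score: int) -> str:
    """Map an integer score (0-100) to its band label."""
    for floor, label in BANDS:
        if score >= floor:
            return label
    raise ValueError("score must be between 0 and 100")

# band(62) returns "Mixed": a 62 is not healthy.
```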

8. Projection cap

The "if you fix everything" projection on /results caps at 88/100, not 100. We do this for two reasons:

  • No real business hits 100/100. Telling an owner "fix these twelve things and you'll be perfect" is the overpromise that erodes trust on first sight. We frame the projection as the realistic top 10% for your vertical and city.
  • Some signals sit outside your control. AI mention rate, listicle inclusion, and competitor activity move with the broader market. You can ship every finding and still not own those slices outright. The cap bakes that humility into the math.

The recoverable-dollars number on the money-losing card scales by the same ratio. If your score is 49 today and the capped projection is 88, the recovery ratio is (88-49)/(100-49) ≈ 0.765 — so the "you could be losing $X" number is 76.5% of the raw model output, matching what you can realistically recover.
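The scaling above can be sketched directly from the worked example. The function names are ours; the arithmetic matches the ratio defined in the text.

```python
def recovery_ratio(current: int, projected_cap: int = 88) -> float:
    """Share of today's scoring gap that is realistically recoverable:
    (capped projection - current) / (100 - current)."""
    return (projected_cap - current) / (100 - current)

def recoverable_dollars(raw_loss: float, current: int, projected_cap: int = 88) -> float:
    """Scale the raw revenue-loss model output by the recovery ratio."""
    return raw_loss * recovery_ratio(current, projected_cap)

# The worked example from the text: a score of 49 against the 88 cap
# gives (88 - 49) / (100 - 49) = 39/51, roughly 0.765.
ratio = recovery_ratio(49)
```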

The cold-start value of 88 is one-size-fits-all. Once we have ≥100 audits per vertical we'll publish vertical-specific 90th-percentile ceilings here.

9. How "you could be losing $X/month" is estimated

Detailed methodology for the revenue-loss estimate that appears on your results page is documented separately — Issue 29 owns that write-up. An anchor is reserved here so the results banner can link to it once it ships. Note that the dollar number is scaled by the projection-cap ratio (see §8 above) so the recoverable amount lines up with the capped projected score.

10. Sources

  • Whitespark 2026 Local Search Ranking Factors (Darren Shaw)
  • Sterling Sky controlled studies (Joy Hawkins) — categories, services, reviews, posts, geotagging, hidden address, near-me query
  • Moz Local Search Ranking Factors (historical baseline)
  • Aggarwal et al., "GEO: Generative Engine Optimization" (arXiv 2311.09735)
  • Cyrus Shepard's AI ranking-factors meta-analysis (Zyppy)
  • BrightLocal AI-search-and-listings study (2025)
  • SOCi 2026 Local Visibility Index (the "98.8% invisible" stat)
  • Local Falcon AI Overviews local-impact study
  • Sistrix CTR-by-position public data
  • Ahrefs AI Overview citation freshness study (~25.7% fresher than SERP)
  • Discovered Labs Reddit-citation analysis (~40% of ChatGPT retrievals touch Reddit)
  • Andrea Volpini / WordLift on knowledge graphs and entity presence
  • SEOprofy on local link building