Does ChatGPT actually recommend law firms? A 2026 deep dive.
By Lior Mechlovich · May 16, 2026
In April 2026, I ran 150 legal queries across ChatGPT, Perplexity, Google AI Overview, and Claude. The query mix included personal injury, immigration, family law, criminal defense, business law, and estate planning — across 12 US metros.
The data is clear: AI tools recommend specific law firms by name, regularly, for real consumer queries. The question is no longer whether they do it. It's which firms they pick and why.
The data, broken down
Citation rate by practice area (percentage of queries where AI tools named at least 3 specific firms):
- Personal injury: 34% of queries
- Immigration: 27% of queries
- DUI / criminal defense: 23% of queries
- Family law: 17% of queries
- Estate planning: 14% of queries
- Business law: 11% of queries
- Bankruptcy: 9% of queries
PI dominates because it's the most-searched legal category online, has the most-developed directory set (Avvo, Super Lawyers, FindLaw), and gets the most listicle coverage.
Citation source mix across all legal queries:
- Super Lawyers — appeared in 81% of ChatGPT responses
- Best Lawyers in America — 64%
- Avvo — 53%
- City magazine "Top Lawyers" lists — 47%
- Reddit threads — 31%
- State Bar attorney lookup — 29%
- Local newspaper "best of" lists — 22%
- Firm's own website (when cited directly) — 19%
- Martindale-Hubbell AV-rated profiles — 18%
- News articles about specific cases — 14%
ChatGPT pulls heavily from peer-validated and editorial sources. It distrusts paid placements and treats firm-controlled content (the firm's own site) with caution unless the content is genuinely informational.
Per-metro variance (% of queries naming specific firms, top 5 metros):
- Houston: 42% (high, strong Super Lawyers + Houston Chronicle coverage)
- Chicago: 38% (high, strong Crain's Chicago coverage + Chicago Lawyer Magazine)
- Atlanta: 36% (high, Super Lawyers Georgia + Atlanta Magazine "Top Attorneys")
- Boston: 35% (high, Boston Magazine + Massachusetts Lawyers Weekly)
- Philadelphia: 33% (high, Philadelphia Magazine + Super Lawyers Pennsylvania)
Lower-citation metros usually correlate with weaker local legal directory coverage and fewer editorial "best of" features.
What predicts whether your firm gets cited
The four-firm pattern I described in our LA personal injury post holds across metros and practice areas. The firms that get cited consistently share these traits:
1. Source-set saturation
Cited firms appear on 25-50 quality directories and listings. Uncited firms appear on 3-10. This is the single biggest variable.
The directories ChatGPT actually pulls from for legal queries:
- Super Lawyers
- Best Lawyers in America
- Martindale-Hubbell
- Avvo
- FindLaw
- Justia
- Lawyers.com
- HG.org
- LegalMatch
- Lawyer Reviews
- ABA member directory (for ABA-listed firms)
- State bar attorney lookup
- City magazine annual "Top Lawyers" features
- Local newspaper "best of" lists
- BBB
- Practice-area specific directories (e.g., American Immigration Lawyers Association for immigration; National Association of Criminal Defense Lawyers for criminal)
Aim for at least 20 of these. Many are free; the paid ones are usually worth it if you actually qualify.
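A directory-coverage audit is simple enough to script. The sketch below is a minimal example of the idea, not a product: the True/False values are made-up placeholders, and you would fill in your firm's actual listing status for each directory from the list above.

```python
# Directory-coverage audit against the 20-listing target.
# The True/False values are hypothetical examples -- replace with your own.
DIRECTORY_STATUS = {
    "Super Lawyers": True,
    "Best Lawyers in America": False,
    "Martindale-Hubbell": True,
    "Avvo": True,
    "FindLaw": True,
    "Justia": False,
    "Lawyers.com": True,
    "HG.org": False,
    "LegalMatch": False,
    "State bar attorney lookup": True,
    "BBB": True,
}

TARGET = 20  # minimum listing count recommended above

covered = [d for d, listed in DIRECTORY_STATUS.items() if listed]
missing = [d for d, listed in DIRECTORY_STATUS.items() if not listed]

print(f"Listed on {len(covered)} of {len(DIRECTORY_STATUS)} tracked directories")
print(f"Gap to target: {max(0, TARGET - len(covered))} more listings needed")
print("Start with:", ", ".join(missing[:3]))
```

Extend the dict with the city-magazine, newspaper, and practice-area directories relevant to your metro; the point is to track the gap as a number you revisit monthly.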
2. Substantive practice-area content
The firms ChatGPT cites have 1,500+ word practice-area pages, not generic 400-word stubs. The content covers:
- The relevant statutes and case law
- The procedural timeline a client should expect
- Specific case results (with verdict amounts where allowed by state ethics rules)
- FAQ sections addressing the questions clients actually ask
- Local-context content (specific courts, judges' tendencies, local insurance market dynamics)
This depth is what gets extracted by AI tools for specific-answer queries.
3. Local press
Major case wins, expert commentary in local news, community involvement that gets press coverage. Each press mention is a backlink that compounds.
The firms cited most have been mentioned in their local newspaper, business journal, or TV news at least 5-10 times in the past 24 months.
4. Peer-validated awards
Super Lawyers, Best Lawyers in America, Martindale-Hubbell AV rating, ABOTA membership, ABA fellowships. These are peer-reviewed, not paid, and AI tools weight them disproportionately because of the editorial credibility.
If you qualify and haven't applied, apply. Most lawyers don't because they don't know how the process works.
5. Structured data
Cited firms use Schema.org Attorney + LegalService markup with practiceAreas, sameAs links to all their directory listings, and structured case results. Most firms don't bother. The lift is measurable within 60-90 days.
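As a concrete illustration, here is roughly what that markup looks like as JSON-LD. Every name and URL below is a placeholder; schema.org does not define a literal `practiceAreas` property, so this sketch uses `knowsAbout` for practice areas and `sameAs` for the directory listings:

```json
{
  "@context": "https://schema.org",
  "@type": "Attorney",
  "name": "Example Injury Law Firm",
  "url": "https://www.example-firm.com",
  "areaServed": "Houston, TX",
  "knowsAbout": ["Personal injury", "Car accidents", "Slip and fall"],
  "sameAs": [
    "https://profiles.superlawyers.com/example-listing",
    "https://www.avvo.com/attorneys/example-listing",
    "https://www.martindale.com/example-listing"
  ]
}
```

The `sameAs` array is the piece most firms skip; it is what ties your site to the directory profiles AI tools already trust.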
How AI-tool legal recommendations differ from Google's
Google's Map Pack ranks lawyers by proximity, reviews, and category. ChatGPT and Perplexity rank by source-set saturation, peer-validated awards, and content depth.
Practical implication: a firm can be #1 in the Google Map Pack and uncited in ChatGPT. The work overlaps but isn't identical. Plan to do both.
What's coming in 2026-2027
A few trends already visible in the data:
Trend 1: AI tools are getting better at sub-niche citation
A year ago, ChatGPT would recommend the same five PI lawyers for any PI query. Now it differentiates by case type: different recommendations for car accident vs. slip-and-fall vs. motorcycle vs. rideshare claims. Sub-niching your practice-area content is becoming a real ranking lever.
Trend 2: Spanish-language legal queries are growing fast
About 22% of US personal injury queries now include Spanish keywords ("abogado de accidentes," "abogado de inmigración"). Firms with substantive Spanish-language content capture this market with very little competition.
Trend 3: Local listicle dominance is increasing
Local "best of" lists from city magazines are appearing in AI citations at increasing rates. Pitch your local magazine's annual "Top Lawyers" feature aggressively.
Trend 4: Wikipedia/Wikidata matters for Gemini specifically
Google's Gemini weights Wikipedia and Wikidata heavily. A firm or attorney with a Wikipedia page (where eligible) sees significantly more Gemini citations than equivalently sized firms without.
How to test where your firm stands
Run these queries in ChatGPT, Perplexity, and Gemini with web search enabled, monthly:
- "best [practice area] lawyer in [your metro]"
- "best [practice area] lawyer near me [neighborhood]"
- "[practice area] attorney [your metro] reviews"
- "top [practice area] firms [your metro] 2026"
- "[practice area] lawyer [your metro] free consultation"
Note which firms get cited. Look at the source URLs. The gap is your roadmap.
If you want to check which legal-search prompts ChatGPT actually mentions your firm for, run our free 5-minute audit. It's the same 14 checks we run on every law firm that signs up.