Across 137 B2B SaaS companies and 120 buying queries, run through four AI engines in April 2026, the brands ChatGPT cited most had two things in common. They had high-volume presence on review directories, and they had strong topical authority on Google, meaning Google ranked them for a high count of keywords inside their product category. In our April 2026 dataset, brands with both signals at once were cited 84% of the time. Brands with neither sat at 34%. A 50-percentage-point gap. The study's authors, EMGI Group, named the pattern the Compounding Rule.
Before going further: this is research Blastra, a SaaS listings management platform, co-produced with EMGI Group, an authority-building agency for SaaS. Blastra contributed the directory crawl: 23 review and listing platforms, more than 130 SaaS brands, more than 2,000 individual listing checks. EMGI ran the AI citation analysis and named the rule. The findings I'm walking through are EMGI's. The interpretation is mine.
The Compounding Rule is one important controllable play inside a larger AI citation economy. Reddit alone accounts for roughly 40% of AI citation share for SaaS queries in public datasets, and earned media, Wikipedia, and community surfaces show up repeatedly across AI visibility research like Profound's. What this article describes is the controllable part: the surfaces you can act on, plus a shortlist of where to put the budget. The qualitative version of this argument (why AI engines look for consensus across third-party sources at all) is in an earlier piece we published on the consensus pattern. This post is the numbers behind it.
The Compounding Rule, stated simply
EMGI split the companies on two axes: directory review volume above or below the sample median, and Google topical ranking volume above or below the sample median. That produces a four-quadrant matrix. Here's what fell out:
The stacked quadrant gets roughly 5x the average citations of the empty quadrant in our sample (8.2 vs 1.6). Either signal alone gets you partway. Both together is where the gap opens.
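The two-axis median split behind the matrix takes only a few lines to express. The field names and sample values below are illustrative assumptions, not the study's data:

```python
# Sketch of the four-quadrant split described above: each brand is
# classified by whether its directory review volume and its Google
# category-keyword count sit above or below the sample medians.
# The records are illustrative, not EMGI's dataset.
from statistics import median

brands = [
    # (name, directory_review_volume, google_category_keywords, ai_citations)
    ("BrandA", 9400, 1200, 11),
    ("BrandB",  300, 1500,  4),
    ("BrandC", 8200,   90,  5),
    ("BrandD",  150,   60,  1),
]

review_median = median(b[1] for b in brands)
keyword_median = median(b[2] for b in brands)

def quadrant(reviews, keywords):
    hi_reviews = reviews > review_median
    hi_keywords = keywords > keyword_median
    if hi_reviews and hi_keywords:
        return "stacked"   # both signals above median
    if hi_reviews or hi_keywords:
        return "single"    # one signal above median
    return "empty"         # neither signal above median

for name, reviews, keywords, cites in brands:
    print(name, quadrant(reviews, keywords), cites)
```

The quadrant labels are then just group keys: average the citation counts per quadrant and you get the 8.2-vs-1.6 comparison the study reports.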
The mechanism, in EMGI's framing: AI engines look for confirming evidence across multiple authority surfaces. One signal looks like noise. Two signals look like a pattern. Three signals look like a brand. LLMs are directory visitors too. They scan listings the same way a human triager would, faster and across more surfaces, arriving at category conclusions before the buyer ever reaches your website. When user reviews on the software directories agree with what Google ranks you for on the category-research queries buyers run before they short-list, the model treats your category placement as consensus and surfaces you with confidence. When those signals disagree, the model defaults to the dominant one or skips you.
These are top-of-funnel triage queries. Buyers (and the LLMs working on their behalf) are deciding what to even keep looking at on the way to a short list. The buying decision comes later.
The cross-engine convergence is the most defensible finding in the study. Google AI Overviews appearances and ChatGPT citations correlate at r = 0.91 across the dataset (where 1.0 is perfect correlation and 0 is none; 0.91 is unusually tight). The engines converge on the same brands at the visibility level. If you become visible on one, you tend to become visible on the others. Track one engine and you have a strong proxy for the rest.
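For readers who want the statistic itself rather than the gloss: the r reported throughout the study is Pearson's correlation coefficient. A minimal sketch, with illustrative per-brand counts (not the study's data):

```python
# Pearson's r: covariance of two series divided by the product of
# their standard deviations. Ranges from -1 to +1.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two engines surfacing mostly the same brands -> r near 1.
ai_overviews  = [12, 7, 3,  9, 1]   # per-brand AI Overviews appearances
chatgpt_cites = [11, 8, 2, 10, 1]   # per-brand ChatGPT citations
print(round(pearson_r(ai_overviews, chatgpt_cites), 2))  # prints 0.98
```

The same function, run on real per-brand counts, is what produces numbers like the study's 0.91.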
Reviews drive citations, ratings don't
The single cleanest illustration in the study sits inside one platform. G2 review count correlates with ChatGPT citations at r = +0.48, a moderate positive relationship on the -1 to +1 scale. G2 average rating correlates at r = -0.19, slightly negative but close to noise. Trustpilot mirrors the pattern: +0.32 on review count, -0.29 on rating.
EMGI's framing, as they wrote it: "Reviews drive citations. Stars do not."
The -0.19 needs an honest caveat, one EMGI itself flags: a brand-size confound. Big incumbents have broader customer bases, more diverse review pools, and therefore lower average ratings. Read the -0.19 as a residue of who has the most reviews in the first place. The safe read: volume drives the citation effect, and ratings have no clear positive contribution beyond what volume already explains.
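One standard way to check "no contribution beyond volume" is a partial correlation: the rating-citation relationship after controlling for review count. A sketch using the study's reported G2 numbers as two of the three inputs; the -0.35 volume-rating correlation is my assumption (big review pools pulling ratings down), not a figure from the study:

```python
# Partial correlation of x and y, controlling for z.
from math import sqrt

def partial_r(r_xy, r_xz, r_yz):
    """r between x and y with z's influence removed from both."""
    return (r_xy - r_xz * r_yz) / sqrt((1 - r_xz**2) * (1 - r_yz**2))

# x = average rating, y = ChatGPT citations, z = review count.
# r_xy = -0.19 and r_yz = +0.48 are from the study;
# r_xz = -0.35 is an assumed volume-rating correlation.
print(round(partial_r(-0.19, -0.35, 0.48), 2))  # close to zero
```

Under that assumption the rating effect nearly vanishes once volume is held fixed, which is exactly what "residue of who has the most reviews" predicts.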
Category placement is the gate. Pipedrive is listed as CRM on 11 of 11 directories we surveyed and gets cited for CRM queries. Apollo is listed as sales engagement on every review platform we checked and gets cited for sales-engagement queries. CRM queries don't pick up Apollo, even with 12,597 directory reviews and a #2 Google ranking on "best outbound sales prospecting tool." Apollo isn't doing anything wrong; AI just follows the shelves the directories assign, and Apollo's shelf is sales engagement. If you want citations in a category, your listings need to live in that category. Volume in the wrong category doesn't travel.
Lift by directory
EMGI calculated lift for each directory as the percentage-point difference in AI citation rate between brands listed and brands not listed. Here is the full lift chart across the 12 SaaS-directory platforms in the analysis:
| Directory | Lift (pp) |
|---|---|
| TrustRadius | +47 |
| Crozdesk | +43 |
| GetApp | +40 |
| SoftwareReviews | +38 |
| Gartner Peer Insights | +36 |
| Trustpilot | +31 |
| SoftwareAdvice | +29 |
| Capterra | +28 |
| G2 | +24 |
| PeerSpot | +22 |
| SourceForge | +18 |
| AlternativeTo | +9 |
The top five directories cluster at +36 points and above; lift tails off from there. For readers outside the listings world: TrustRadius is a B2B review platform known for longer-form, deeply vetted reviews. Crozdesk is a European-rooted SaaS directory aggregator. SoftwareReviews is the research arm of Info-Tech, the IT analyst firm. G2, GetApp, SoftwareAdvice, and Capterra all sit under Gartner Digital Markets as of February 2026. PeerSpot is an enterprise IT review platform (formerly IT Central Station). AlternativeTo is the consumer-leaning "find an alternative to" tool aggregator, useful as a long-tail signal but weaker as a citation driver in this sample.
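The lift metric in the table is just a difference of two citation rates. A sketch, with illustrative brand flags rather than the study's data:

```python
# Lift = citation rate among brands listed on a directory, minus the
# citation rate among brands not listed, in percentage points.
# The records below are illustrative, not the study's data.

brands = [
    # (listed_on_directory, was_cited)
    (True, True), (True, True), (True, False), (True, True),
    (False, False), (False, True), (False, False), (False, False),
]

def lift_pp(records):
    listed = [cited for on, cited in records if on]
    unlisted = [cited for on, cited in records if not on]
    rate = lambda xs: 100 * sum(xs) / len(xs)
    return rate(listed) - rate(unlisted)

print(lift_pp(brands))  # 75.0 - 25.0 = 50.0 percentage points
```

Run per directory across the full sample, this yields one lift number per platform, which is what the table ranks.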
Crunchbase sits outside this list, as a separate baseline. Its lift was +61 percentage points. Every brand in the dataset that wasn't on Crunchbase had zero citations: a striking floor in a curated sample, worth holding loosely. Being absent from Crunchbase is conspicuous, and it's the cheapest signal floor available. Whether the listing itself produces citations is still open.
Two surprises sit inside that table. The first is Crozdesk at #2, well above G2. The second is G2 at #9. G2 is the dominant SaaS-directory surface in the public AI-citation research published in 2025–2026 (Kevin Indig's analysis put G2 at ~22% share of voice on software-related AI queries; G2's own studies put it higher), so a +24 lift behind TrustRadius, Crozdesk, GetApp, SoftwareReviews, Gartner Peer Insights, and Trustpilot reads as a surprise. The honest reading is that marginal lift is a saturation-sensitive metric on near-universal platforms. Most cited brands are already on G2, so the variance (and therefore the lift gap) shrinks compared to less-saturated directories. And post-February 2026, the Gartner Digital Markets portfolio spans four spots in this chart: G2 (#9), GetApp (#3), SoftwareAdvice (#7), and Capterra (#8). The platform-level read from this data: be on G2 and its sister sites, because one company now owns four of your higher-lift profiles.
Trustpilot's +31 lift is consistent with its broader AI citation footprint outside the SaaS directory category.
Read the original study
If you want to pressure-test the analysis above (see the original charts, the per-category breakdowns, the engine-by-engine correlations, and EMGI's full methodology), go to emgigroup.com/blog/directory-listings-and-ai-search-citations/.
The numbers here are an April 2026 snapshot. They'll move as AI engines update and the dataset gets re-measured. The shape of the argument (stacked signals beat single signals) should hold longer than the specific lift figures.
Stack your directory half of the Compounding Rule
Blastra is a SaaS listings management platform. We keep your presence accurate, current, and category-consistent across the directories in the EMGI lift chart (TrustRadius, Crozdesk, GetApp, SoftwareReviews, Gartner Peer Insights, Trustpilot, SoftwareAdvice, Capterra, G2, PeerSpot, SourceForge, AlternativeTo) plus Crunchbase and dozens of others, so when AI looks for category consensus across third-party sources, your signals agree.

