Pick any two universities. Ask ChatGPT to compare them.
The model is going to cite something. In our audits, that something is Reddit, Yocket, or Quora about three times out of four.
We've now run this experiment across eighteen comparison query templates and a panel of fifty North American institutions. The pattern is so stable that we've stopped treating it as a finding and started treating it as a feature of the current AI search landscape.
The data

Three numbers from the audit:

- In roughly three of every four answers, the top citation is Reddit, Yocket, or Quora.
- The pattern held across all eighteen query templates and all fifty institutions in the panel.
- The cited threads routinely date to 2022.

The third number is the interesting one. The model is not citing yesterday's hot take. It is citing a thread from 2022 because no school has shipped a fresher, more authoritative comparison in the intervening three years.
Why this happened
Universities have a hundred-year tradition of refusing to name rivals. It is in the brand guidelines. It is in the legal review. It is the kind of policy that survives every reorganization.
AI engines do not have that policy. They build comparisons whether you participate or not.
If you don't publish the comparison, Reddit will. And the model will quote whoever wrote it most clearly.
The brand-guidelines instinct was a sensible response to a print-era marketing problem. In an AI-search era, it is a structural concession of the most consequential ranking surface to forums where no one verifies anything.
Three patterns the engines reward
We've analyzed the comparison pages that do get cited. They share three patterns.
1. Named rivals, sourced claims
Pages that name the rival university get cited 4× more often than evasive pages that say "compared to other research institutions in the region." The engine can pattern-match a name. It cannot pattern-match a hedge.
2. Tables, not paragraphs
The cited pages tend to present comparisons in a structured table with consistent row labels. Tuition, duration, placement rate, average GMAT. The model can lift one row at a time. Paragraph comparisons get re-summarized into nothing useful.
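For shape, here is the kind of row-labeled layout that gets lifted cleanly. Every figure below is hypothetical, for illustration only:

| Row | Your School | Rival School |
|---|---|---|
| Tuition (per year) | $58,000 | $61,500 |
| Duration | 21 months | 24 months |
| Placement rate (6 mo.) | 93% | 89% |
| Average GMAT | 665 | 680 |

The consistent labels in the first column are the point: the model can quote the placement-rate row without touching the rest of the page.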
3. Transparent cost and outcomes data
The single highest-cited feature is a transparent cost table with assumptions disclosed. Schools that publish "sticker price minus average merit aid equals net cost" get cited; schools that publish the sticker price alone get skipped.
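A worked instance, with hypothetical figures: a school that publishes "$68,400 sticker tuition, minus $21,100 average merit aid across the 2024 entering class, for an estimated $47,300 net cost" has handed the model a complete, quotable claim with its assumptions attached.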
What to publish
The fastest-converting work we've done in this category is what we call a Compare-To page: a stand-alone page on the school's own domain with the URL pattern /compare/[your-school]-vs-[rival]/. Each page answers six questions (a scaffold sketch follows the list):
- Who is this comparison for? (the decision profile)
- How do the programs differ on structure? (full-time vs. part-time, length, format)
- How do they differ on cost? (sticker + average aid, both schools, public sources)
- How do they differ on outcomes? (placement, average starting salary, sources)
- Where is each school the better choice, and for which student profile?
- How should you decide? (the school's standing recommendation)
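Here is a minimal scaffold generator, as a sketch. Everything in it (the slugs, the file layout, the Markdown output) is illustrative rather than a prescribed stack; adapt it to whatever CMS the site runs on.

```python
from pathlib import Path

SCHOOL = "your-school"                  # hypothetical slug for your institution
RIVALS = ["rival-one", "rival-two"]     # hypothetical rival slugs

# The six questions, repeated as identical section headings on every page.
SECTIONS = [
    "Who is this comparison for?",
    "How do the programs differ on structure?",
    "How do they differ on cost?",
    "How do they differ on outcomes?",
    "Where is each school the better choice, and for which student profile?",
    "How should you decide?",
]

for rival in RIVALS:
    # URL pattern from above: /compare/[your-school]-vs-[rival]/
    page_dir = Path("compare") / f"{SCHOOL}-vs-{rival}"
    page_dir.mkdir(parents=True, exist_ok=True)
    title = f"# {SCHOOL} vs. {rival}\n\n"
    body = "\n\n".join(f"## {q}\n\nTODO: sourced, tabular answer." for q in SECTIONS)
    (page_dir / "index.md").write_text(title + body + "\n")
```

The scaffold buys you nothing by itself. The value is that every page answers the same six questions in the same order, which is exactly the consistency the engines reward.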
The Compare-To page must be honest. The model can tell when a comparison is sales copy. It also rewards a school that says "if you need X, the other school is better." Counterintuitively, those pages get cited more, because they read like real decision content.
A quick experiment to run this week
Pick the three rivals your enrollment counselors mention most often on the phone. Ask Perplexity:
Should I go to [your school] or [rival] for [program]?
Read what the model cites. Check whether your school is even in the answer.
Now repeat for Google's AI Overviews, ChatGPT, and Gemini. Build the screenshot deck. Walk it into the next cabinet meeting. The decision to ship Compare-To pages will follow.
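If you want a head start on the deck, here is a sketch that scripts the first pass against Perplexity. It assumes Perplexity's OpenAI-compatible chat-completions endpoint and a `sonar` model; check the current docs before relying on the `citations` field, and note that the school, rival, and program names are placeholders.

```python
import csv
import os

import requests

SCHOOL = "Your School"                       # hypothetical
RIVALS = ["Rival A", "Rival B", "Rival C"]   # the three phone-call rivals
PROGRAM = "MBA"                              # hypothetical program

API_URL = "https://api.perplexity.ai/chat/completions"
HEADERS = {"Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}"}

with open("comparison_audit.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["query", "citations", "answer"])
    for rival in RIVALS:
        query = f"Should I go to {SCHOOL} or {rival} for {PROGRAM}?"
        resp = requests.post(
            API_URL,
            headers=HEADERS,
            json={"model": "sonar",
                  "messages": [{"role": "user", "content": query}]},
            timeout=60,
        )
        resp.raise_for_status()
        data = resp.json()
        answer = data["choices"][0]["message"]["content"]
        # Cited source URLs arrive at the top level of the response;
        # fall back gracefully if the field is absent.
        citations = data.get("citations", [])
        writer.writerow([query, "; ".join(citations), answer])
```

Screenshots still land harder in a cabinet meeting than a CSV, but the CSV tells you which screenshots to take.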
The bottom line
Reddit didn't outwork you. It simply published. The work to take back the comparison query is a content sprint, not a brand repositioning, and it is one of the highest-leverage things a small enrollment marketing team can do this year.