A real audit. A real school. We don't name it.
Look at the credentials.
- Top national ranking in its class.
- An accredited business school.
- A distinctive national research claim.
By rights, this school should top every AI answer about Canadian higher education. It doesn't.
We ran a structured audit across ChatGPT, Perplexity, Google AI Overviews, and Gemini. We asked the questions a prospective student actually asks — "best business school in [region]," "is School A or School B better for undergraduate management," "can I do a part-time MBA at a satellite campus in [secondary city]."
The answers came back. The school we audited barely appeared. When it did, it was a footnote in someone else's recommendation.
Three in four Canadian schools have this gap. The biggest gap in the sector is the cheapest one to close.
What we audited
We treat AI visibility as a stack with five layers, and we test every one. The audit framework we use covers thirty-eight individual signals across:
- Markup and structured data — JSON-LD coverage, schema completeness, machine-readability of course pages, FAQ-schema density, accreditation tagging.
- Crawler readability — robots policy for AI user-agents, llms.txt presence, sitemap freshness, content rendered client-side vs server-side, content gated behind interaction.
- Comparative content — pages that name peer institutions, decision-helpful comparisons, transparent cost and outcomes data, satellite-campus disambiguation.
- Brand authority — Wikipedia article freshness, Wikidata completeness, faculty links to ORCID and Google Scholar, citations from accreditors and rankers.
- Local and program signals — Google Business Profile for every campus, third-party directory listings, transparent program codes, application-deadline pages.
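A minimal sketch of one layer-one check, assuming program pages are fetched as static HTML: it extracts the schema.org `@type` values a non-JavaScript crawler would actually see on a page. The sample page and its markup are hypothetical, not the audited school's.

```python
import json
import re

# Which schema.org types does a page's static HTML expose to a
# crawler that does not run JavaScript?
JSONLD_RE = re.compile(
    r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
    re.DOTALL | re.IGNORECASE,
)

def jsonld_types(html: str) -> set[str]:
    """Return every @type declared in the page's JSON-LD blocks."""
    types: set[str] = set()
    for block in JSONLD_RE.findall(html):
        try:
            data = json.loads(block)
        except json.JSONDecodeError:
            continue  # malformed markup is itself an audit finding
        nodes = data if isinstance(data, list) else [data]
        for node in nodes:
            t = node.get("@type")
            if isinstance(t, str):
                types.add(t)
            elif isinstance(t, list):
                types.update(t)
    return types

# Hypothetical program page: one Course block, no program-level type.
sample = """
<html><head>
<script type="application/ld+json">
{"@context": "https://schema.org", "@type": "Course",
 "name": "Bachelor of Management"}
</script>
</head></html>
"""

found = jsonld_types(sample)
print(found)                                      # {'Course'}
print("EducationalOccupationalProgram" in found)  # False
```

Run against every program URL, this turns "markup coverage" from an impression into a count.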
The school we audited scored well on layer one. It scored badly on layers two, three, four, and five. That gap — strong content, weak signals — is the single most common gap we see in Canadian higher education.
Gap 1 — Strong content, weak signals
The program pages read well to a person. To an AI parser, "Bachelor of Management" is just words on a page.
We checked thirty-one program landing pages. Two used Course schema. One used EducationalOrganization. None used EducationalOccupationalProgram — the schema that tells an AI engine this is a degree, here is the credential, here are the prerequisites, here is the duration.
That single absence pushes the school from "considered" to "not considered" the moment an AI engine builds a comparison.
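For illustration, a sketch of what the missing markup could look like as a JSON-LD node. Every name, duration, and prerequisite below is a placeholder, not the school's actual data.

```python
import json

# One EducationalOccupationalProgram node per program page.
# All values here are hypothetical placeholders.
program = {
    "@context": "https://schema.org",
    "@type": "EducationalOccupationalProgram",
    "name": "Bachelor of Management",
    "educationalCredentialAwarded": "Bachelor's degree",
    "timeToComplete": "P4Y",  # ISO 8601 duration: four years
    "programPrerequisites": "High school diploma or equivalent",
    "provider": {
        "@type": "CollegeOrUniversity",
        "name": "Example University",  # placeholder name
    },
}

# Emit as a JSON-LD block ready to drop into a <script> tag.
print(json.dumps(program, indent=2))
```

One node answers the four questions an engine asks before it compares programs: what degree, what credential, what prerequisites, how long.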
Gap 2 — Reddit is winning the comparison
A student asks ChatGPT, "School A or School B for undergraduate management?"
The model needs a source. Neither school has one. So it pulls from Reddit.
We pulled the actual citation set for fifteen common comparison queries. Across all four engines, the top-three citations were Reddit threads, Yocket forum posts, or Quora answers — 73% of the time.
That is not a search-quality problem. That is a content-supply problem. The schools wrote nothing comparing themselves to a peer. So the engines went to a forum where someone did.
The school wrote the content. The scraper gets the quote.
Gap 3 — The aggregators are winning the brand
Ask any AI engine about tuition at this school. You see Yocket. Leap. Collegedunia. CollegeDekho.
You don't see the school.
The institution publishes the exact tuition figure on its own website. But the figure sits in a JavaScript-rendered table, so it never appears in the raw page source. The major AI crawlers, most of which fetch the server-delivered HTML and do not execute JavaScript, never see it. The aggregators do see it: they manually transcribe the figure into static HTML. The model grabs the static copy.
This is happening to almost every Canadian school. The fix is: render the table server-side, mark it up with MonetaryAmount schema, and stop letting an aggregator be the canonical source for your own price.
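A sketch of the markup half of that fix, using the MonetaryAmount approach named above. The program name and dollar figure below are placeholders, not the school's published number.

```python
import json

# The tuition figure, exposed as static JSON-LD instead of a
# JavaScript-rendered table. All values are placeholders.
tuition = {
    "@context": "https://schema.org",
    "@type": "MonetaryAmount",
    "name": "Annual tuition, Bachelor of Management (domestic)",
    "currency": "CAD",
    "value": 9500,  # hypothetical figure; use the published number
}

print(json.dumps(tuition, indent=2))
```

Because this node ships in the initial HTML response, a crawler that never runs JavaScript still reads the price from the school itself, not from an aggregator's transcription.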
Gap 4 — Wikipedia matters more than you think
AI engines lean on Wikipedia. They lean on Wikidata. They check rankings, accreditation, leadership, enrolment.
The school's Wikipedia article was last meaningfully updated in 2022. The president on the page had retired. The enrolment figure was three years stale. The Wikidata entry had no link to ORCID for any faculty member.
When the model assembles an answer about the school, it reads that article. It then produces a confident answer built on a retired president and an enrolment figure three years out of date.
Gap 5 — The satellite campus is invisible
Working adults want a satellite campus. The school has one — a beautiful new urban campus in a major Canadian city. The local SEO is thin: no Google Business Profile in the city, no dedicated landing page indexed for "part-time MBA in [city]" or the equivalent. The campus does not show up in AI answers about part-time programs in the city.
This is the second-most-common gap in our audits, and we see the same story at almost every satellite campus. The fix is mechanical — claim every campus on Google, ship a dedicated landing page per campus with local schema, link from the main site footer — and the lift is among the fastest-converting work we do.
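The per-campus local schema could be sketched as below, assuming a CollegeOrUniversity node with a PostalAddress and a parentOrganization link back to the main institution. Every name, URL, and address is a placeholder.

```python
import json

# One landing page per campus, each with its own local markup.
# All names, URLs, and addresses below are placeholders.
campus = {
    "@context": "https://schema.org",
    "@type": "CollegeOrUniversity",
    "name": "Example University - Downtown Campus",
    "url": "https://example.edu/downtown",  # hypothetical URL
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "123 Main Street",
        "addressLocality": "Example City",
        "addressRegion": "ON",
        "addressCountry": "CA",
    },
    "parentOrganization": {
        "@type": "CollegeOrUniversity",
        "name": "Example University",
    },
}

print(json.dumps(campus, indent=2))
```

The address node is what ties the campus to "part-time MBA in [city]" queries; the parentOrganization link keeps the brand authority flowing from the main institution.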
The strategic frame
Rankings reward inputs — research funding, faculty count, selectivity.
AI search rewards being the answer — schema, comparisons, freshness, authority.
The two systems are pulling apart. A mid-sized school can out-cite a U15 school if it commits to the work. It needs a category to own. And the content to prove it.
A 180-day plan
Here is what we'd do, in order, starting on day one.
First 30 days — Foundations
- Tag the site with JSON-LD across every program page.
- Mark up courses and FAQs with full schema.
- Point robots.txt at a refreshed sitemap and explicitly allow AI user agents.
- Ship an llms.txt file at the domain root.
- Claim every campus on Google Business Profile.
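The robots.txt half of that step could look like the fragment below. The user-agent tokens shown are the vendors' published crawler names at the time of writing; verify each against the vendor's current documentation before shipping. llms.txt is a separate plain-text file at the domain root, an emerging convention rather than a standard, that summarizes the site's key pages for language models.

```text
# robots.txt -- a sketch; confirm each vendor's current
# user-agent token before shipping
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Google-Extended
Allow: /

Sitemap: https://example.edu/sitemap.xml
```

An explicit Allow is not strictly required for crawling, but it documents intent and survives a later blanket Disallow added for other bots.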
Days 30 to 90 — Content
- Build a page for each rival program — yes, with names.
- Rewrite the top ten programs as Q&A pages with FAQPage schema.
- Update the Wikipedia article inside COI guidelines, with talk-page disclosure.
- Link every named faculty member to ORCID and Google Scholar.
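The FAQPage markup for that Q&A rewrite could be sketched like this; the question and answer text are illustrative placeholders only.

```python
import json

# A program page rewritten as Q&A, exposed as FAQPage markup.
# Question and answer text are illustrative placeholders.
faq = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": "Can I complete the MBA part-time?",
            "acceptedAnswer": {
                "@type": "Answer",
                "text": "Yes. The part-time stream runs evenings and "
                        "weekends and takes about three years.",
            },
        },
    ],
}

print(json.dumps(faq, indent=2))
```

Each Question/Answer pair maps one-to-one onto the phrasing a prospective student types into a chat box, which is exactly the shape an engine lifts into a cited answer.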
Days 90 to 180 — Authority
- Claim two or three categories the school is best-in-country on, and publish the proof.
- Watch where AI cites you — quarterly citation audits across ChatGPT, Perplexity, AI Overviews, Gemini, and Copilot.
Five lessons for higher-ed marketers
- A ranking will not get you cited. Structure will.
- If you don't publish the comparison, Reddit will.
- The moment you leave a branded query alone, an aggregator takes it.
- Your Wikipedia page matters. More than you think.
- Claim a category. Then write so AI can cite it.
That is the playbook. We've now run it for a half-dozen institutions. The lift compounds, because every fix accrues to the next query the engine sees.
If you'd like to see what AI says about your school — run the live audit. Free. Instant. Built by Ibex Insights.