
A 90-day GEO plan for universities that you can execute this quarter.

The fastest path from "we don't show up in AI answers" to "we are cited in 8 of 15 test queries." Three sprints: the first two fill the 90 days, and the third compounds the gains through day 180. Measurable lift at every checkpoint. Here is the plan we run for every new institutional engagement.

Mar 10, 2026 | 10 min read | By Hamza Qureshi, Founder

Most senior marketing leaders we meet have a similar reaction after their first AI citation audit: okay, what do we actually do.

This is the answer. Three sprints, in the order we run them: the first two inside the 90 days, the third compounding through day 180.

We've now run this plan, with light variation, on a dozen institutions. The pattern is consistent. The lift is real. We anonymize every engagement, so this post talks about the plan, not the schools.

The plan, at a glance

Days 1–30
Sprint 1 — Foundations
Crawler visibility, schema, llms.txt, Google Business Profile per campus.

Days 30–90
Sprint 2 — Content
Comparison pages, FAQ rewrites, faculty linking, Wikipedia refresh.

Days 90–180
Sprint 3 — Authority
Category claims, citation audits, refinement.

We measure citation share at day 0, day 30, day 90, day 180. The lift typically starts to show at day 30. The compound effect shows at day 90 and beyond.

Sprint 1 — Foundations (Days 1 to 30)

The mechanical work that everything else builds on. None of it requires a single new piece of content. All of it is engineering and configuration.

Week 1 — Audit and inventory

  • Run a full SEO/AEO/GEO audit on the live site. Snapshot the citation share across ChatGPT, Perplexity, AI Overviews, Gemini, and Copilot for a fixed 15-query bank (a scoring sketch follows this list).
  • Inventory every program page, FAQ page, and faculty page. Identify which are JSON-LD tagged today.
  • Identify every campus (main and satellite) and whether each has a claimed Google Business Profile.
  • Identify your robots.txt's current treatment of GPTBot, Google-Extended, PerplexityBot, ClaudeBot, Applebot-Extended.
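
The scoring behind that snapshot is simple once the answers are collected. A minimal sketch, assuming you have already captured, per engine and per query, the domains cited in each answer (collection is engine-specific and out of scope here); the queries and domains below are placeholders:

```python
# Citation-share scoring sketch. `results` maps engine -> query -> cited domains.

QUERY_BANK = [
    "best mba program in [city]",
    "is [university] accredited",
    "cheapest nursing degree in [province]",
    # ...the remaining 12 queries of the fixed 15-query bank
]

def citation_share(results: dict, our_domain: str) -> dict:
    """Fraction of bank queries whose answer cites our_domain, per engine."""
    shares = {}
    for engine, by_query in results.items():
        hits = sum(1 for q in QUERY_BANK if our_domain in by_query.get(q, []))
        shares[engine] = hits / len(QUERY_BANK)
    return shares

# Illustrative day-0 snapshot for two engines:
day0 = {
    "perplexity": {QUERY_BANK[0]: ["example.edu", "rival.edu"]},
    "chatgpt": {QUERY_BANK[0]: ["rival.edu"]},
}
print(citation_share(day0, "example.edu"))  # {'perplexity': 0.33..., 'chatgpt': 0.0}
```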

Week 2 — Robots, llms.txt, sitemap

  • Update robots.txt to explicitly allow the major AI user agents (unless there is a specific institutional reason to deny them; in most cases, allow).
  • Ship a hand-curated llms.txt at the domain root. See our llms.txt guide for the template. Minimal sketches of both files follow this list.
  • Refresh sitemap.xml. Ensure every canonical URL is present, no 404s, no redirects.
  • Verify in Google Search Console; submit the sitemap; confirm it indexes.
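
A minimal sketch of the robots.txt block, covering the agents from the week-1 inventory; the sitemap URL is a placeholder:

```
# Explicitly allow the major AI crawlers
User-agent: GPTBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: Applebot-Extended
Allow: /

Sitemap: https://www.example.edu/sitemap.xml
```

And the shape of a hand-curated llms.txt, with every name and path a placeholder (the full template is in our llms.txt guide):

```markdown
# Example University

> Public university with campuses in [city] and [city]. Canonical source
> for program, tuition, and admissions information.

## Programs
- [Program catalog](https://www.example.edu/programs/): every degree and diploma
- [Tuition and fees](https://www.example.edu/tuition/): current-year costs

## Admissions
- [How to apply](https://www.example.edu/apply/): deadlines and requirements
- [FAQ](https://www.example.edu/faq/): the questions counselors answer most
```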

Week 3 — Schema, part one

  • Ship EducationalOrganization schema sitewide (a sketch follows this list).
  • Ship Person schema on the top 30 named faculty pages.
  • Ship FAQPage schema on the four highest-traffic FAQ pages.
  • Validate every block in Google's Rich Results Test.
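
A minimal sketch of the sitewide block; the name, URLs, and identifiers are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "EducationalOrganization",
  "name": "Example University",
  "url": "https://www.example.edu/",
  "logo": "https://www.example.edu/assets/logo.png",
  "sameAs": [
    "https://en.wikipedia.org/wiki/Example_University",
    "https://www.wikidata.org/wiki/Q00000000"
  ]
}
</script>
```

The sameAs links are what let an engine reconcile the site with the Wikipedia and Wikidata entities the week-4 and week-11 work keeps current.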

Week 4 — Local

  • Claim a Google Business Profile in every campus city.
  • Ship a dedicated landing page per satellite campus with full local schema (a sketch follows this list).
  • Add every campus to the sitewide footer.
  • Update the institution's Wikidata item with addresses and identifiers for every campus.
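
A sketch of the per-campus landing-page block; the campus name, address, and coordinates are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "CollegeOrUniversity",
  "name": "Example University, Riverton Campus",
  "url": "https://www.example.edu/campuses/riverton/",
  "parentOrganization": {
    "@type": "EducationalOrganization",
    "name": "Example University"
  },
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "100 College Drive",
    "addressLocality": "Riverton",
    "addressRegion": "MB",
    "postalCode": "R0G 0A0",
    "addressCountry": "CA"
  },
  "geo": {
    "@type": "GeoCoordinates",
    "latitude": 49.95,
    "longitude": -97.32
  }
}
</script>
```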

By the end of week 4: a fully crawler-readable site, a published llms.txt, schema across the most important pages, and every campus claimed locally.

Sprint 2 — Content (Days 30 to 90)

This is the sprint where new public content ships. It is the sprint that wins or loses the long-run citation share fight.

Weeks 5–6 — Comparison content, batch 1

  • Identify the top 5 rival programs your counselors compete with most often.
  • Ship 5 comparison pages, one per rival, in the format /compare/[your-program]-vs-[rival-program]/.
  • Each page: 1,200–1,800 words, structured table, named rivals, transparent cost and outcomes data, decision guidance for the student.

Weeks 7–8 — FAQ rewrite, batch 1

  • Identify the 10 FAQ pages that receive the most traffic.
  • Rewrite each as a structured FAQPage of 8–15 questions, ranging from broad to specific (a sketch of the markup follows this list).
  • Cross-link from the relevant program pages and from the homepage.
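
A sketch of one question-answer pair in the markup; the question and dates are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "What is the application deadline for fall admission?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Applications for fall admission close March 1. Late applications are reviewed on a space-available basis."
      }
    }
  ]
}
</script>
```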

Weeks 9–10 — Schema, part two

  • Template EducationalOccupationalProgram across every remaining program landing page (a sketch follows this list).
  • Add Offer schema to the tuition page, linked from every program.
  • Verify every block in the validator; deploy.
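
A sketch of the program template with its linked Offer; the program, duration, and figures are placeholders:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "EducationalOccupationalProgram",
  "name": "Master of Business Administration",
  "provider": {
    "@type": "EducationalOrganization",
    "name": "Example University"
  },
  "educationalCredentialAwarded": "MBA",
  "timeToComplete": "P2Y",
  "offers": {
    "@type": "Offer",
    "category": "Tuition",
    "price": "24000",
    "priceCurrency": "CAD",
    "url": "https://www.example.edu/tuition/"
  }
}
</script>
```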

Weeks 11–12 — Wikipedia and Wikidata refresh

  • Begin the Wikipedia refresh project under a declared COI: raise each correction on the article's talk page and file an edit request for it.
  • Update the institution's Wikidata item with current leadership, current enrolment, full identifier set.
  • Add ORCID and Google Scholar links to each named senior faculty member's Wikidata item (a batch sketch follows this list).
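
The identifier work batches cleanly. A QuickStatements sketch in tab-separated v1 syntax, where the Q-number and both IDs are placeholders; P496 is the ORCID iD property and P1960 the Google Scholar author ID:

```
Q00000000	P496	"0000-0002-1234-5678"
Q00000000	P1960	"AbCdEfGhIjK"
```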

By the end of week 12: 5 comparison pages, 10 rebuilt FAQs, full schema across the program catalog, and a substantially refreshed Wikipedia/Wikidata presence.

Sprint 3 — Authority (Days 90 to 180)

This is the sprint where the institution claims its categories and starts to compound the gains.

Weeks 13–16 — Category claims

  • Identify 2 or 3 categories where the institution can credibly claim a best-in-region or best-in-country position (e.g. "largest Indigenous-studies graduate program in the Prairies," "only AACSB-accredited business school in [city]").
  • Build a category page per claim. The page should contain the proof: rankings, accreditations, enrolment, alumni outcomes, publications.
  • Cross-link from the homepage and the relevant program pages.

Weeks 17–20 — Comparison content, batch 2

  • Ship 5 more comparison pages, targeting longer-tail rivals or programs at the satellite campus.

Weeks 21–24 — Citation audit and refinement

  • Re-run the full citation audit at day 180.
  • Compare to day 0, day 30, day 90.
  • Identify the biggest movers and the biggest gaps.
  • Build the day-180-to-day-365 plan based on the audit.

The compounding is the point. Every fix accrues to the next query the engine sees. The schools that started this in early 2025 have a year of compounding behind them. The schools that start in mid-2026 will have a year of compounding by mid-2027.

Measurement — what to report to cabinet

The four headline numbers we recommend reporting to the cabinet:

  1. Citation share across the 15-query bank, by engine, against day 0 and against peer institutions.
  2. Schema coverage across the program catalog.
  3. Inbound traffic by source — organic, AI-referred, direct, social. Split out AI-referred separately; engines like Perplexity and Copilot send identifiable referrer traffic (see the classification sketch after this list).
  4. Funnel impact — campus tour bookings, application starts, application completions, attributed to the channels you've moved.
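
A minimal sketch of the AI-referred split, assuming session rows exported with a referrer URL; the host lists are illustrative, not exhaustive:

```python
# Bucket a session into the four reporting channels by referrer host.
from urllib.parse import urlparse

AI_HOSTS = {"perplexity.ai", "www.perplexity.ai", "copilot.microsoft.com", "chatgpt.com"}
SEARCH_HOSTS = {"www.google.com", "www.bing.com", "duckduckgo.com"}
SOCIAL_HOSTS = {"www.facebook.com", "www.linkedin.com", "t.co"}

def classify(referrer: str) -> str:
    if not referrer:
        return "direct"
    host = urlparse(referrer).netloc.lower()
    if host in AI_HOSTS:
        return "ai-referred"
    if host in SEARCH_HOSTS:
        return "organic"
    if host in SOCIAL_HOSTS:
        return "social"
    return "other"

print(classify("https://www.perplexity.ai/search?q=..."))  # ai-referred
```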

A clean dashboard with these four numbers is the new headline of the marketing report. We typically build it as a one-page PDF for each board meeting.

Staffing

The 90-day plan, executed in-house, requires:

  • 0.5 FTE engineer for the schema and crawler work.
  • 0.3 FTE technical writer for the comparison and FAQ content.
  • 0.1 FTE for Wikipedia/Wikidata work.
  • A single senior marketing leader with cabinet authority to authorize the comparison pages.

The plan, executed by Ibex Insights, requires the same senior marketing leader and one named project sponsor on the institution's side. Everything else we bring.

A closing note

The work is mostly mechanical. The hardest decision is the first one, the decision to authorize comparison pages and a Wikipedia refresh; the rest of the plan is a sequence of well-bounded engineering and content sprints.

If you want a 90-day plan modeled on your institution's specifics, the first audit is free. Paste any program page; the audit runs in 30–60 seconds.