Field Notes  /  Operations

FERPA-compliant AI agents: a buyer's guide for higher ed.

Half the AI vendors pitching universities aren't actually FERPA-compliant. Most haven't even read the regulation. Here are the eight questions an enrollment leader, registrar, or CIO should ask before signing.

Mar 18, 2026 | 8 min read | By Ibex Insights
FERPA AI Agents Compliance

A handful of vendors have read FERPA but assume the institution will sign a BAA-style agreement that papers over the gap. (A BAA is a HIPAA instrument; FERPA has no equivalent, so the school-official exception has to do the work.)

This is the buyer's guide we wish more universities used. Eight questions, in the order to ask them.

The eight questions

1. Are you a "school official" under 34 CFR § 99.31(a)(1)(i)(B)?

The threshold question. Without student consent, the practical path for a vendor is FERPA's school-official exception: education records may be disclosed to a third party only if that party qualifies as a school official with a legitimate educational interest. To qualify, the vendor must:

  • Perform an institutional service or function the institution would otherwise use employees for.
  • Be under the direct control of the institution with respect to the records.
  • Be subject to the same FERPA non-disclosure requirements as institutional employees.

If the vendor cannot answer yes to all three, you cannot share student records with them under FERPA's school-official exception. The conversation can end there.

2. Where does the data live, and under what legal regime?

If student records leave the United States, the institution needs an explicit policy basis. If the vendor's model provider is foreign-owned, the policy basis is more complex. If the data passes through a U.S.-based but foreign-controlled subprocessor, the institution should expect to disclose this to students under § 99.7(a).

Acceptable answer: data residency in the United States, named subprocessors, each contractually bound to the same restrictions.
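"Named subprocessors" is checkable at deployment time: the agent's routing config should fail closed against the contract's list. A minimal sketch; the hostnames, config shape, and `validate_routing` helper are illustrative, not any vendor's actual API:

```python
# Subprocessors named in the contract (illustrative hostnames).
CONTRACT_ALLOWLIST = {"us-east-inference.example", "us-logs.example"}

def validate_routing(config: dict) -> list[str]:
    """Return violations: any endpoint outside the US or off the allowlist."""
    violations = []
    for endpoint in config.get("endpoints", []):
        # Region strings like "us-east-1" are assumed; first token is the country.
        if endpoint.get("region", "").split("-")[0] != "us":
            violations.append(f"non-US region: {endpoint['host']}")
        if endpoint["host"] not in CONTRACT_ALLOWLIST:
            violations.append(f"unlisted subprocessor: {endpoint['host']}")
    return violations
```

Run this as a CI gate on every config change, so an unreviewed subprocessor can never reach production.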

3. Is student PII used to train the model?

The fastest way to disqualify a vendor. Acceptable answer: PII is not used to train the underlying model; institutional fine-tunes (if any) are scoped to a private deployment that is not shared with other tenants.

Many vendors will say "we don't train on your data" but will route prompts through a shared inference endpoint that retains them for 30 days. Press on this.

4. What is the audit trail?

Every action an AI agent takes on student records should produce an immutable audit log entry: timestamp, user, action, records touched, justification. The institution should be able to export this log on demand and retain it on the same retention schedule as other education records.

Acceptable answer: immutable per-action logging, exportable, retained for at least the institution's retention schedule.
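One way to make "immutable per-action logging" concrete is hash chaining: each entry carries a digest of its predecessor, so any after-the-fact edit breaks the chain on export. A minimal sketch; the field names are illustrative, not a standard:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log, *, user, action, records, justification):
    """Append a tamper-evident entry: each entry hashes its predecessor."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,                    # who (or whose approval) the action ran under
        "action": action,                # e.g. "release_transcript"
        "records": records,              # student records touched
        "justification": justification,  # legitimate educational interest
        "prev_hash": prev_hash,
    }
    body = json.dumps(entry, sort_keys=True)
    entry["hash"] = hashlib.sha256(body.encode()).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "0" * 64
    for e in log:
        if e["prev_hash"] != prev:
            return False
        body = json.dumps({k: v for k, v in e.items() if k != "hash"},
                          sort_keys=True)
        if hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
            return False
        prev = e["hash"]
    return True
```

An institution can run `verify` over any exported log to confirm no entry was edited or dropped since it was written.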

5. What is the human-in-the-loop architecture?

Most institutions are not ready to let an AI agent take consequential action on a student record without a human approval step. The right architecture has the agent propose; a human reviews and authorizes; the action is then taken under the human's name in the audit log.

Acceptable answer: HITL is configurable per action type; consequential actions are HITL by default; the institution controls the policy.

6. What happens at offboarding?

When the institution terminates the contract, what happens to the data?

Acceptable answer: full extract in a standard format within a defined window (e.g. 30 days); cryptographic deletion confirmed; certificate of deletion provided.
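"Cryptographic deletion" usually means crypto-shredding: each tenant's records are encrypted under a per-tenant key, and offboarding destroys the key, making the ciphertext unrecoverable. A toy sketch of the idea only; the one-time-pad XOR stands in for real authenticated encryption, and the key would live in a KMS/HSM, not a dict:

```python
import secrets

def _xor(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

class TenantStore:
    """Toy crypto-shredding demo: deleting the key deletes the data."""
    def __init__(self):
        self._keys = {}    # tenant_id -> key (production: external KMS/HSM)
        self._blobs = {}   # tenant_id -> ciphertext

    def put(self, tenant: str, plaintext: bytes) -> None:
        key = secrets.token_bytes(len(plaintext))   # one-time pad, demo only
        self._keys[tenant] = key
        self._blobs[tenant] = _xor(plaintext, key)

    def get(self, tenant: str) -> bytes:
        return _xor(self._blobs[tenant], self._keys[tenant])

    def shred(self, tenant: str) -> str:
        del self._keys[tenant]   # destroying the key is the deletion
        # Stand-in for a real, signed certificate of deletion.
        return f"certificate-of-deletion:{tenant}"
```

The design advantage over plain deletion: backups and replicas of the ciphertext become useless the moment the key is destroyed, which is why the certificate can be issued within the contract's window.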

7. What is the incident-response SLA?

In the event of a confirmed or suspected unauthorized access:

  • How fast does the vendor notify the institution?
  • What is the format of the notification?
  • Does the vendor support the institution's own breach-disclosure obligations under state law (e.g. CA Civ. Code § 1798.82) and federal law (e.g. GLBA Safeguards Rule for financial aid data)?

Acceptable answer: under 24 hours, written notification, full cooperation with institutional notification obligations.

8. Is there a directory-information toggle?

Many institutions allow the disclosure of directory information without consent. Most allow students to opt out. The agent's policy engine should respect the institution's directory-information policy and the per-student opt-out state.

Acceptable answer: per-student opt-out is read from the SIS and enforced at agent runtime.
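"Read from the SIS and enforced at agent runtime" can be as simple as a gate the agent must pass before releasing any field. A sketch with assumed names: `DIRECTORY_FIELDS` and the `directory_opt_outs()` SIS call are illustrative, not a real SIS API:

```python
# Fields the institution designates as directory information (illustrative).
DIRECTORY_FIELDS = {"name", "major", "dates_of_attendance"}

def fetch_opt_outs(sis_client) -> set[str]:
    """Stand-in for a real SIS call; returns student IDs who opted out."""
    return sis_client.directory_opt_outs()

def releasable(student_id: str, field: str, opted_out: set[str]) -> bool:
    # Non-directory fields are never releasable without consent;
    # directory fields are blocked for any student who opted out.
    return field in DIRECTORY_FIELDS and student_id not in opted_out
```

The fail-closed default matters: an unknown field, or a student whose opt-out state cannot be read, should be treated as non-releasable.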

A short note on the GLBA Safeguards Rule

If the agent touches financial aid data, the GLBA Safeguards Rule applies as well. The bar is higher than FERPA's school-official exception. Among other things:

  • Multi-factor authentication is required for any access to customer information.
  • Encryption is required at rest and in transit.
  • A written incident response plan is required.
  • A designated individual (the rule's "Qualified Individual") must own the information security program.

Press your vendor on each of these.

Don't skip the GLBA bar. The number of higher-ed AI agents that meet the FERPA bar but not the GLBA bar is uncomfortably high — and almost every enrollment-side agent touches some form of financial-aid data.

What an Ibex Insights agent looks like

For full disclosure: we ship agents in this category. Here is how we answer each of the eight questions:

  1. School official. Yes, under contract. We perform institutional service under direct institutional control with full non-disclosure binding.
  2. Data residency. US-based; subprocessors named and bounded.
  3. PII used for training. No. Inference endpoints are zero-retention. Institutional fine-tunes are private.
  4. Audit trail. Immutable per-action log, exportable, retained per institutional policy.
  5. HITL. Configurable, with consequential actions HITL-by-default.
  6. Offboarding. 30-day extract, cryptographic deletion, certificate provided.
  7. Incident response. 24-hour notification, written, full cooperation with institutional disclosures.
  8. Directory opt-out. Per-student opt-out from the SIS, enforced at runtime.

We also publish a full catalog of our agents with the FERPA and GLBA posture for each. The point isn't to sell our agents in this post — it's to give a working template for what a complete answer to the eight questions should look like.

The bottom line

Most universities are evaluating AI agents on capability. Capability should come third. First comes the answer to question 1; second, the answer to question 3. If you can get straight answers to both, proceed to the capability evaluation.

If you can't, find a different vendor.