FERPA Meets AI: The Privacy Risks Students Don’t Know They’re Taking When Using AI Tools on Campus
Every time you paste an essay draft into ChatGPT, upload a homework problem to an AI tutor, or log into an AI-powered advising platform your school provides, you are sharing data. Some of that data is protected by federal law. Some of it isn’t. And most students, and frankly many administrators, don’t know where the line falls.
The Family Educational Rights and Privacy Act (FERPA), the 50-year-old federal law designed to protect student education records, was written long before generative AI existed. It was not written for a world in which students voluntarily submit their academic work to third-party platforms that may train machine learning models on that content, share inferences with institutional partners, or retain data far longer than any physical transcript ever would.
This is not a hypothetical concern. As of 2025, more than 70% of U.S. colleges and universities have adopted at least one AI-powered tool for instruction, advising, or student support. Many have issued no formal guidance to students about what happens to their data.
This article explains what FERPA actually covers, where it leaves students exposed, what schools are (and aren’t) doing about it, and what you can do right now to protect yourself.
Quick Answer: Does FERPA Protect Students When Using AI Tools?
Partially, and the gaps are significant.
- FERPA protects education records held by your institution. It does not automatically extend to third-party AI platforms you use independently.
- When your school contracts with an AI vendor, FERPA rules apply to what your institution shares with that vendor, but the vendor’s own data practices are governed by its terms of service, not FERPA.
- When you personally sign up for an AI tool (ChatGPT, Claude, Gemini, Grammarly, etc.) and submit academic content, FERPA provides no protection over that data. You have entered a private agreement with a commercial company.
- AI tools can generate behavioral inferences about students, including academic struggle indicators, mental health signals, and engagement patterns. These inferences fall outside traditional definitions of an education record and occupy a legal gray zone.
- Many students are unknowingly waiving privacy protections every time they click “I agree” on an AI platform’s terms of service.
What FERPA Actually Covers: A Plain-Language Breakdown
FERPA (20 U.S.C. § 1232g) gives students who are 18 or older, or who attend a postsecondary institution, the right to:
- Inspect and review their education records
- Request corrections to records they believe are inaccurate
- Consent to disclosure of records to third parties, with specific exceptions
- File a complaint with the U.S. Department of Education if their rights are violated
The law applies to any school that receives funding under a program administered by the U.S. Department of Education, which includes virtually every accredited U.S. college and university.
What counts as an “education record” under FERPA?
Education records are broadly defined as records, files, documents, and other materials that contain information directly related to a student and are maintained by the educational agency or institution, or by a person acting for or on behalf of the agency or institution.
This typically includes:
- Transcripts and grades
- Enrollment records
- Financial aid information
- Disciplinary records
- Advising notes maintained by the institution
- Accommodation records from disability services
- Health records maintained by student health services (with some overlap with HIPAA)
What FERPA does NOT cover:
- Sole-possession records kept in the private files of a school official, used only as a personal memory aid, and not shared with anyone else
- Law enforcement unit records
- Records of employees who are not also students
- Information that is not directly related to a student or is not maintained by the institution
- Data held by third-party platforms that students voluntarily join
That last point is where AI creates its largest vulnerability.
The Three AI Risk Zones FERPA Doesn’t Fully Cover
Risk Zone 1: AI Tools You Sign Up for Yourself
When you create a personal account on ChatGPT, Claude, Gemini, or any other commercial AI platform and use it for coursework, FERPA has no jurisdiction over what that platform does with your data.
You are operating under the platform’s terms of service and privacy policy, not federal student privacy law. Most major AI platforms:
- Retain conversation data for model training unless you explicitly opt out (and many users never do).
- Reserve the right to use inputs to improve services, which can mean your essay drafts, problem sets, and personal context are processed by the model’s training pipeline.
- Share aggregated or de-identified data with research partners and affiliated entities.
- Disclose data in response to legal process, including government requests, without notifying you.
When you describe your academic situation, such as your major, your professor’s assignment requirements, and your struggles with a topic, you are creating a profile that exists entirely outside FERPA’s reach.
Risk Zone 2: School-Contracted AI Tools and the “School Official” Exception
FERPA does allow schools to share student education records with third-party vendors without student consent under what is called the “school official” exception (34 CFR § 99.31(a)(1)). This applies when the vendor:
- Has a legitimate educational interest in the data
- Is under the school’s direct control for the use and maintenance of that data
- Does not use or re-disclose the data for other purposes
In theory, this means AI tutoring platforms, AI-powered advising systems, and AI writing tools contracted by your university could receive access to your grades, enrollment data, demographic information, and academic history, all without your explicit consent, as long as the institution classifies them as a school official.
The problem is enforcement and transparency. The Department of Education does not pre-approve school official designations. Schools self-certify that vendors meet the criteria. And data use agreements, which are the contracts that are supposed to restrict what vendors do with student data, are rarely made available for students to review.
A 2023 audit by the Student Privacy Compass (a Future of Privacy Forum initiative) found that fewer than 30% of surveyed institutions had publicly posted data use agreements with their AI tool vendors.
Risk Zone 3: AI-Generated Inferences That Don’t Fit FERPA’s Definition
This is the most legally unsettled risk, and the one that may have the longest-term consequences.
AI systems used in educational settings don’t just store the data you give them. They generate inferences, predictions, and probability scores about you based on patterns in your behavior. These might include:
- Early alert flags suggesting academic difficulty or disengagement
- Retention risk scores used by advisors to prioritize outreach
- Writing style profiles that could theoretically be used for plagiarism detection or authorship attribution
- Engagement metrics from AI tutoring tools showing how long you spent on problems, how often you asked for hints, or what topics caused repeated errors
- Mental health signal detection embedded in some AI writing analysis tools that flag linguistic markers associated with distress
Are these inferences “education records” under FERPA? The answer is genuinely unclear.
FERPA defines education records as records maintained by the institution. If an AI vendor generates an inference on its own servers and uses it for its own purposes but never formally transmits it to the institution, there is a credible legal argument that it falls outside FERPA’s definition entirely. The U.S. Department of Education has not issued definitive guidance on AI-generated inferences as of early 2025.
This gap is not accidental. It is the product of a law written in 1974 encountering technology that did not exist until 50 years later.
What Universities Are and Aren’t Doing
What Leading Institutions Are Doing
A growing number of universities have taken proactive steps to address AI and student privacy:
- MIT, Stanford, and UC Berkeley have issued formal AI use policies that include explicit student data privacy provisions and list approved vendor categories
- The University of Michigan and Ohio State have published AI procurement guidelines requiring vendors to complete privacy impact assessments before deployment
- Some state university systems, including the California State University system and the University of North Carolina system, have issued systemwide guidance on AI tool procurement and student data handling
These institutions typically require:
- Data use agreements limiting vendor use of student data to the contracted purpose
- Prohibition on using student data to train commercial AI models without explicit consent
- Data deletion timelines aligned with institutional records retention schedules
- Security controls equivalent to those applied to other student data systems
What Most Institutions Are Not Doing
Despite the activity at leading institutions, the broader landscape remains largely unregulated at the institutional level:
- No required disclosure to students about which AI tools are receiving their data or under what terms
- No standardized student consent process for AI tools that fall into gray zones
- No centralized registry of AI tools approved for use on campus or their associated privacy terms
- Inconsistent faculty guidance — many professors assign AI tools for coursework without knowing whether their institution has a data use agreement with the vendor
In a 2024 EDUCAUSE survey, only 38% of higher education technology officers reported that their institution had a formal AI governance framework that included student privacy protections.

Your FERPA Rights in Practice: What You Can Actually Do
Step 1: Request Your Education Records
Under FERPA, your institution must allow you to inspect and review any education record it maintains about you within 45 days of receiving a written request. This includes:
- Any advising notes generated or stored by AI advising platforms that the school operates
- Early alert flags or academic risk scores in your student information system
- Any documentation the institution maintains about your use of institutionally deployed AI tools
Submit your request in writing to your institution’s registrar. Most schools have a formal FERPA records request process.
Step 2: Ask Your Professors and IT Department Specific Questions
Before submitting academic work to any tool — AI or otherwise — you are entitled to ask:
- Is this platform covered by a data use agreement with our institution?
- Does the vendor have the ability to use student submissions for model training?
- What data does this tool collect and retain about my usage?
- Is there a FERPA-compliant alternative I can use instead?
Faculty are often unaware of the answers to these questions. Escalate to your institution’s Chief Privacy Officer or IT department if needed.
Step 3: Read the Terms Before You Click
For AI tools you use independently (not school-mandated), always review:
- Data retention policies: How long is your content stored?
- Training data opt-out: Does the platform use your inputs to train its models, and can you opt out?
- Deletion rights: Can you request that your data be deleted?
- Third-party sharing: Does the platform share your data with affiliated companies or partners?
Most major platforms (OpenAI, Anthropic, Google) offer opt-out mechanisms for training data usage and provide data deletion options — but these must be actively configured. They are not the default for most users.
Step 4: File a FERPA Complaint If Your Rights Are Violated
If you believe your institution has improperly disclosed your education records to an AI vendor, for example, by sharing your transcript or academic history with a vendor that did not meet the school official exception requirements, you can file a complaint with:
U.S. Department of Education, Student Privacy Policy Office (SPPO): studentprivacy.ed.gov
Complaints must be filed within 180 days of the alleged violation. The SPPO investigates and can order corrective action, though it does not award damages to individual students.
The Emerging Legal and Regulatory Landscape
FERPA is not the only legal framework at play. Several other laws and regulations affect student data privacy in the AI context:
COPPA (Children’s Online Privacy Protection Act): Applies to children under 13, so it rarely comes into play for college students.
State Privacy Laws: Several states have passed student data privacy laws that go beyond FERPA. California’s Student Online Personal Information Protection Act (SOPIPA) and similar laws in New York, Colorado, and Virginia impose additional restrictions on how vendors can use student data, including prohibiting use for targeted advertising and restricting use of student data for commercial purposes beyond the contracted educational service.
Proposed Federal AI Legislation: As of 2025, Congress is actively debating comprehensive AI regulation. Several proposals include provisions specific to educational AI deployments, including requirements for transparency about AI use in educational settings and restrictions on AI-generated inferences about students.
The EU AI Act: For students at institutions with international operations or for students studying abroad in the EU, the EU AI Act (fully applicable as of August 2026) classifies AI systems used in educational and vocational training as high-risk, requiring transparency disclosures, human oversight requirements, and data governance standards. U.S. institutions with EU students or operations may face indirect compliance pressure.
A Note on AI Academic Integrity Tools: A Privacy Paradox
Many universities now deploy AI-powered academic integrity tools, including platforms like Turnitin’s AI detection feature, that analyze student writing for markers of AI generation. These tools present a distinct privacy dynamic.
Students are subject to analysis by AI tools designed to detect AI, often without being clearly informed of:
- What data is retained from submitted documents
- How long writing samples are stored in the vendor’s database
- Whether the analysis generates a permanent flag in their academic record
- The error rate of AI detection and the appeals process for false positives
Turnitin, for example, has disclosed that submitted papers may be stored in its database for comparison purposes, meaning your writing potentially persists in a commercial database indefinitely. Whether this constitutes an education record under FERPA, or whether the school’s use of Turnitin satisfies the school official exception, depends on the specifics of the institution’s data use agreement.
If you have concerns about a specific integrity tool’s data handling, request your institution’s data use agreement with that vendor and the institution’s policy on how AI integrity flags are recorded and retained in your student record.
Frequently Asked Questions
Does FERPA protect my data when I use ChatGPT or other AI tools for homework?
No. When you voluntarily use a commercial AI platform, regardless of whether it’s for academic purposes, FERPA does not apply. You are subject to that platform’s terms of service and privacy policy. FERPA only governs education records held by your institution.
Can my university share my grades or transcripts with an AI company?
Yes, under the school official exception, your institution can share education records with AI vendors without your consent if the vendor has a legitimate educational interest, is under the school’s direct control for that data, and does not re-disclose it. Whether a specific vendor meets that standard depends on your institution’s data use agreement with that vendor.
What should I do if I don’t want my data used to train an AI model?
For school-contracted tools, ask your institution whether the data use agreement prohibits training use. For independent tools, look for the training opt-out in your account settings. OpenAI, Anthropic, and Google all provide opt-out options, but they must be manually enabled.
Are AI-generated risk scores or early alert flags part of my education record?
If those scores are maintained in your institution’s student information system and are directly related to you as a student, they likely qualify as education records under FERPA. If they exist solely on a vendor’s servers and are never formally integrated into institutional records, their status is legally uncertain. Request to see any such records through your registrar if you have concerns.
Can I opt out of my university’s AI tools for privacy reasons?
This depends on your institution’s policy and the specific tool. For course-required tools, you may need to request an accommodation. For optional institutional tools (advising bots, tutoring platforms), you can typically decline to use them. Contact your institution’s registrar or Chief Privacy Officer to understand your options.
What happens to my data if an AI company goes bankrupt or is acquired?
Your data becomes an asset of the company and can be transferred to the acquiring entity or bankruptcy trustee. Review the platform’s privacy policy for language about transfers. Some platforms commit to notifying users; many do not. This is a documented risk with all commercial data relationships, not unique to AI.
Is AI-powered plagiarism detection legal under FERPA?
The use of AI plagiarism detection tools does not itself violate FERPA if the institution has a valid data use agreement with the vendor. The legality depends on whether the vendor’s data handling practices comply with the restrictions of that agreement. Institutions using tools like Turnitin should be able to provide students with the data use agreement and their policy on record retention upon request.
What is the FERPA school official exception, and does it apply to AI companies?
The school official exception allows institutions to share student education records with vendors without student consent when those vendors perform services on behalf of the school and are under the school’s control for how they use that data. AI vendors can qualify, but the exception’s requirements are not automatically satisfied simply because a school has purchased a product. The key is whether a valid, restrictive data use agreement is in place.
Bottom Line: What Every Student Should Know Before Using AI on Campus
The intersection of FERPA and AI is a legal frontier. The law has not caught up to the technology, and the gap is real, wide, and consequential for student privacy.
The most important things to understand:
FERPA protects records your school holds, not data you voluntarily share with third-party platforms. The moment you open a personal account on an AI platform and submit your academic work, you have moved outside federal privacy protection and into commercial data territory.
When your school deploys AI tools, FERPA’s school official exception permits data sharing that you were never directly notified about. Your grades, enrollment history, and academic profile may be accessible to vendors whose data practices you have never reviewed.
AI systems generate inferences about you, including risk scores, behavioral profiles, and engagement patterns, whose FERPA status is genuinely unresolved. These inferences can affect how advisors interact with you and, in some cases, institutional decisions about your enrollment.
You have tools available: FERPA records requests, questions to your institution’s privacy office, and opt-out settings in commercial platforms. But exercising those rights requires knowing they exist.
The practical framework:
- For school-provided tools: Ask your institution for the data use agreement and understand what data is being shared and why.
- For personal AI use: Configure your privacy settings, opt out of training data usage, and be deliberate about what personal and academic information you submit.
- For AI integrity tools: Request information on how flags are stored in your record and what the appeals process is.
- For anything you’re uncertain about: Your institution’s Chief Privacy Officer is the right person to contact. If one doesn’t exist at your school, that itself is worth noting.
Privacy in the age of AI requires active participation. Clicking “I agree” is a choice. Make it an informed one.
