AI in Education News: April 2026 Update on Policy, Classrooms, and Cheating

The AI-in-education news cycle has shifted gears. After two years of "what do we do about ChatGPT?" panic, K-12 districts are writing real policies, statehouses are passing real laws, universities are signing campus-wide AI deals, and a handful of high-profile institutions just started turning AI detection off. The research base is finally maturing too — and it's more complicated than either side wants to admit. Here's what actually happened in the last few weeks, what it means for K-12 vs. higher ed, and what teachers and admins should do before the next school year.
TL;DR — the six stories that matter right now
- State AI laws for schools went mainstream. As of March 2026, MultiState is tracking 134 AI-in-education bills across 31 states. Ohio and Tennessee now require every district to adopt an AI policy; Oklahoma's SB 1734 mandates educator supervision and parent disclosure; New York's A 9190 restricts classroom AI use to grade 9 and up.
- Claude for Education and ChatGPT Edu are now real products with real campus deals. Anthropic has signed campus-wide agreements with Northeastern (50,000 users across 13 campuses), the London School of Economics, Champlain College, and parts of Oxford. OpenAI has Wharton, Columbia, and UT Austin.
- Universities are turning AI detection off. Curtin University announced it will disable Turnitin's AI detection in 2026. Vanderbilt, UCLA, UC San Diego, Cal State LA, Yale, Johns Hopkins, and Northwestern have already pulled the plug. False positives — disproportionately hitting non-native English speakers — are the reason.
- Khanmigo crossed 1.4M users; MagicSchool crossed 6M educators. A National Bureau of Economic Research paper published in early 2026 reports 34% greater learning gains for students using Khanmigo vs. traditional tutoring. A separate Scientific Reports RCT found AI tutoring beat in-class active learning by 0.73–1.3 standard deviations.
- The EU AI Act's high-risk education obligations are still on track for August 2026 — even as other parts of the Act get delayed. Education AI is explicitly classified as high-risk under Annex III, and emotion-inference systems in schools are flat-out banned.
- The research is more cautious than the marketing. A 666-participant study found a significant negative correlation between frequent AI use and critical thinking, mediated by cognitive offloading. The story isn't "AI is great" or "AI is bad" — it's "AI works when it scaffolds thinking and fails when it replaces it."
If any of those six stories surprise you, the rest of this post is the version with receipts. We'll handle K-12 and higher ed separately because — for the first time since ChatGPT launched — they really do have different news cycles.
K-12: state laws are catching up to what districts already started doing
Two years ago, "AI policy" in most US K-12 districts meant a memo from the superintendent saying "don't use ChatGPT." That era is over.
State-level mandates
The most consequential K-12 news of 2026 is that state legislatures stopped issuing optional guidance and started passing binding laws.
- Ohio released a model AI policy in early 2026 paired with a state requirement that every public, community, and STEM school adopt an AI framework by July 1, 2026. Districts can adopt the model policy as-is or write their own — but they have to do something.
- Tennessee got there first. Public Chapter 550, signed in spring 2024, required all K-12 districts to adopt AI policies. The Tennessee School Boards Association published Model Policy 4.214 in June 2024, and most districts adopted it verbatim.
- Oklahoma's SB 1734 is the most aggressive. It allows AI in schools only with educator supervision and human review, bans AI from "high-stakes decisions" entirely, mandates state-level guidance and district policies, and requires schools to disclose AI use to parents annually.
- Maryland passed twin bills (SB 720 / HB 1057) requiring the State Department of Education to issue AI guidance, mandating local AI policies, and pushing AI literacy into the workforce standards.
- New York's A 9190 takes a developmental approach: AI is restricted to grade 9 and above for direct student use, with carve-outs for diagnostics and special education interventions. Staff can use AI freely for administrative and planning work.
- Vermont issued framework guidance on January 23, 2026, emphasizing that AI must augment — not replace — the educator-student relationship.
According to the Education Commission of the States, at least 28 states plus DC have now issued AI guidance for K-12. The 2026 legislative session pushed that into binding territory in a handful of states; expect more to follow.
The pattern across most of the new laws: mandatory district policies, educator supervision, restrictions on high-stakes decisions, parent disclosure, and AI literacy in curriculum. That's the rough consensus shape of state-level AI regulation in K-12.
What districts are actually doing
Two patterns dominate at the district level.
Stoplight frameworks. Niles Township High School District 219 in Illinois has popularized a three-tier model that's spreading fast: red (AI prohibited, treated as cheating), yellow (AI allowed with citation and prompt sharing), green (AI required because the assignment can't be done without it). The framework is simple enough to print on a syllabus and clear enough to enforce. Expect copies of it in your district's policy by next school year.
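To make the tiers concrete, here is a minimal sketch of how a district might encode a stoplight policy for a syllabus. The tier names follow the D219 model; the schema and field names are hypothetical, just one way to write it down.

```python
# Hypothetical encoding of a stoplight AI policy for a course syllabus.
# Tier names follow the D219 model; the schema itself is illustrative.
from dataclasses import dataclass

@dataclass
class AssignmentPolicy:
    assignment: str
    tier: str          # "red" | "yellow" | "green"
    requirement: str   # what the student must do if AI is involved

SYLLABUS_POLICY = [
    AssignmentPolicy("In-class timed essay", "red",
                     "No AI use; treated as academic dishonesty."),
    AssignmentPolicy("Research paper draft", "yellow",
                     "AI allowed; cite the tool and share your prompts."),
    AssignmentPolicy("AI critique exercise", "green",
                     "AI required; submit the transcript with your analysis."),
]

for p in SYLLABUS_POLICY:
    print(f"[{p.tier.upper():<6}] {p.assignment}: {p.requirement}")
```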
Walled-garden AI deployments. Alexandria City Public Schools, Charlotte-Mecklenburg, and others have moved away from "block ChatGPT" toward routing student requests through controlled AI environments. In Alexandria, students who try to access ChatGPT are redirected to Securly's AI chat, which is configured to refuse full-essay generation and offer scaffolding instead. Administrators can read the logs. Charlotte-Mecklenburg ran a pilot in 30 of 185 schools, trained all 14,000 staff before turning anything on, and required digital citizenship training for all 140,000 students.
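None of these vendors publish their filtering logic, but the basic architecture is easy to sketch: a proxy that intercepts student requests, applies a scaffolding system prompt, refuses obvious whole-essay asks, and logs everything for administrator review. The sketch below is a hypothetical illustration of that pattern; the function names, keyword heuristic, and prompt text are invented, not Securly's implementation.

```python
# Hypothetical walled-garden proxy: scope the model's behavior and keep an
# audit trail administrators can read. Names and heuristics are invented.
import datetime
import re

SCAFFOLD_PROMPT = (
    "You are a tutor. Guide the student with questions and hints. "
    "Never produce a complete essay, report, or homework answer."
)

# Crude intent check; a real deployment would use classifiers, not one regex.
ESSAY_REQUEST = re.compile(
    r"\b(write|generate|compose)\b.*\b(essay|paper|report)\b", re.IGNORECASE
)

AUDIT_LOG: list[dict] = []  # a real system would persist this server-side


def call_model(system: str, user: str) -> str:
    """Stand-in for whatever LLM API the district has contracted."""
    return f"[tutor-mode response to: {user!r}]"


def handle_student_request(student_id: str, prompt: str) -> str:
    """Log the request, refuse whole-essay asks, otherwise tutor."""
    AUDIT_LOG.append({
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "student": student_id,
        "prompt": prompt,
    })
    if ESSAY_REQUEST.search(prompt):
        return ("I can't write that for you, but I can help you plan it. "
                "What's your thesis so far?")
    return call_model(system=SCAFFOLD_PROMPT, user=prompt)


print(handle_student_request("s-1024", "Write an essay on the Reconstruction era"))
```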
The lesson from both: bans don't work, and unfiltered access is a liability. The middle path — AI that is monitored, scoped, and integrated with instructional design — is where the most serious districts are landing.
Higher ed: the "campus AI deal" is the new normal
Higher ed news in 2026 is dominated by a single story: AI labs are signing university-wide contracts, and universities are letting them.
The Claude for Education / ChatGPT Edu race
Anthropic launched Claude for Education in April 2025 and has spent the last twelve months stacking partnerships:
- Northeastern University — 50,000 students, faculty, and staff across 13 global campuses
- London School of Economics
- Champlain College
- University of Oxford (partial deployment)
- Columbia University, University of Pennsylvania (Wharton)
OpenAI's competing ChatGPT Edu product has its own roster — UT Austin, Wharton, Columbia, and Arizona State (and yes, both labs are on some of the same campuses). OpenAI also offered ChatGPT Plus free to US and Canadian college students through May 2025, which functioned as a thinly disguised customer-acquisition campaign for the post-graduation market.
Claude for Education ships with a "Learning Mode" designed to guide reasoning rather than hand back answers. ChatGPT Edu has equivalent guardrails. Both products bundle SSO, admin dashboards, FERPA-compliant data handling, and shared workspaces — the kinds of features universities asked for in the 2023-2024 RFP cycles and didn't get.
The underlying business logic is straightforward: get students into your tool before they graduate, and you've built a pipeline into the workforce. This is the EdTech equivalent of giving away textbooks. Expect more deals, more competition, and a steady downward pressure on per-seat pricing through the rest of 2026.
University AI policies are still all over the map
There is no settled academic-integrity standard for AI in higher ed. The most-cited landscape review puts it bluntly: while ~92% of students now use AI tools for academic work, around 70% of universities still lack a clearly defined AI policy.
A few notable institutional positions as of April 2026:
- Columbia: AI use is prohibited unless the instructor explicitly grants permission.
- Oxford: AI is allowed for studies and research, but in summative assessments only when the course says so. Any permitted use must be declared.
- Cambridge: AI is fine for personal study and formative work; unacknowledged AI in summative assessments is academic misconduct.
- Most US R1s: course-by-course discretion, with the syllabus as the contract.
The "course-by-course" model is what most institutions have settled on, and it's not entirely satisfying. Students complain about whiplash between professors. Faculty complain about being asked to design and police a policy they didn't sign up for. But it's what's stuck, and it's probably what we'll have for at least another year.
The integrity question: AI detection is losing
The biggest higher-ed story of the spring is the rapid retreat from AI detection.
Curtin University in Australia announced it will disable Turnitin's AI detection feature in 2026, citing reliability concerns. They are not alone. The list of institutions that have already pulled the plug now includes:
- UCLA
- UC San Diego
- Cal State Los Angeles
- Vanderbilt
- Yale
- Johns Hopkins
- Northwestern
The reasons are consistent across institutions: false positives, especially for non-native English speakers, with consequences that are catastrophic for the student and unrecoverable for the institution's reputation. Multiple peer-reviewed studies have shown that AI detectors flag prose written by non-native speakers at rates several multiples higher than for native-English prose. Once a few high-profile false-accusation cases hit the press, the calculus for keeping the tool turned on collapsed.
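The base-rate arithmetic explains why. Even a detector with a modest headline false-positive rate produces a steady stream of wrongful flags once it runs on every submission, and the stream skews toward whichever subpopulation the detector misreads. Every rate in the sketch below is an illustrative assumption, not a vendor-published figure.

```python
# Illustrative base-rate arithmetic for AI detection at scale.
# Every rate here is an assumption for the example, not a vendor figure.

submissions = 10_000      # essays scanned in a term
cheat_rate = 0.10         # assume 10% are substantially AI-written
fpr_native = 0.02         # assumed false-positive rate on native-speaker prose
fpr_nonnative = 0.06      # assumed 3x higher for non-native speakers
nonnative_share = 0.20    # assumed share of honest work by non-native speakers
tpr = 0.80                # assumed detection rate on genuinely AI-written text

honest = submissions * (1 - cheat_rate)
ai_written = submissions * cheat_rate

false_flags = honest * ((1 - nonnative_share) * fpr_native
                        + nonnative_share * fpr_nonnative)
true_flags = ai_written * tpr

precision = true_flags / (true_flags + false_flags)
print(f"Honest students flagged: {false_flags:.0f}")
print(f"AI submissions flagged:  {true_flags:.0f}")
print(f"Share of flags that are real violations: {precision:.0%}")
# ~250 honest students flagged per term, disproportionately non-native
# speakers, and roughly 1 in 4 flags points at innocent work.
```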
The arms race is real, though. Turnitin has identified about 150 "humanizer" tools — services that take AI-generated text and rewrite it to evade detection, often for $20-$50/month. Detection vendors have responded with new models trained on humanized output. Students respond by chaining humanizers. Detection vendors update their models again. The cycle is not converging.
The emerging consensus: detection is losing as a primary line of defense. The institutions that are doing this seriously have moved toward:
- Process-based assessment — drafts, in-class writing, oral defenses, version-controlled work. Make it expensive to outsource.
- Explicit AI policies in every syllabus — green/yellow/red lists, with citation requirements when AI is allowed.
- Conversation-not-conviction when something looks off — a flag triggers a discussion, not an automatic referral.
If your institution is still treating Turnitin's AI score as evidence, you are exposed. Read the Curtin and UCLA write-ups before your next academic-integrity case lands on a dean's desk.
What the research actually shows
The 2025-2026 research literature on AI and student learning is finally large enough to draw real conclusions. Three findings that should reshape how you think about this:
Finding 1: AI tutors work — when they're well-designed and used as tutors.
A 2025 randomized controlled trial published in Scientific Reports found AI tutoring outperformed in-class active learning with effect sizes between 0.73 and 1.3 standard deviations. That's not noise. That's roughly the gap between a median student and a top-quartile student. Students learned more in less time and reported higher engagement.
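For a sense of scale, converting those effect sizes to percentile shifts under a normal distribution shows that even the low end of the reported range carries a median student past the top-quartile cutoff:

```python
# Convert standardized effect sizes into percentile shifts for a student
# who starts at the median, assuming normally distributed outcomes.
from scipy.stats import norm

for d in (0.73, 1.3):
    pct = norm.cdf(d) * 100  # where a former 50th-percentile student lands
    print(f"Effect size {d}: median student moves to the {pct:.0f}th percentile")

# The top-quartile cutoff (75th percentile) sits ~0.674 SD above the mean:
print(f"Top-quartile threshold: {norm.ppf(0.75):.3f} SD")
```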
A National Bureau of Economic Research paper from early 2026 found students using Khanmigo showed 34% greater learning gains vs. traditional tutoring, with the effect concentrated among students from underserved communities. A separate SRI International evaluation found 23% faster mastery of algebra concepts vs. traditional Khan Academy video instruction. A 2026 pilot across 200 schools and 15,000 students found students who used Khanmigo for ≥30 minutes per week gained the equivalent of 2-3 weeks of additional instruction.
Finding 2: Engagement is the real bottleneck — most students don't use these tools much.
Sal Khan, in an April 2026 Chalkbeat interview, was unusually candid: "For a lot of students, it was a non-event. They just didn't use it much." The headline efficacy numbers above are conditional on use. The kids who use Khanmigo get a lot out of it. The kids who don't use it get nothing.
That's the deployment problem nobody wants to put on a slide. AI tutors don't automatically reach students. They have to be assigned, scheduled, integrated into the regular flow of the class, and made non-optional. The schools getting results aren't the ones that bought the license — they're the ones that built it into the lesson plan.
Finding 3: Heavy AI use correlates with weaker critical thinking — but the mechanism matters.
A 2025 study of 666 participants found a significant negative correlation between frequent AI tool use and critical thinking, mediated by cognitive offloading. Translation: when students outsource the hard parts of thinking to a chatbot, the thinking muscle atrophies. This is not the same as "AI is bad." It's "using AI as a thinking-replacement is bad." Using AI as a Socratic tutor is different from using it as an answer factory, and the data is starting to show that distinction.
The implication for instructional design is uncomfortably specific: assignments that allow AI to do the cognitive work are actively worse than assignments without AI. Assignments that force students to use AI as a sparring partner — explain your reasoning, defend your answer, critique the AI's response — are better than either extreme.
EU AI Act: education AI is high-risk, August 2026 still applies
A note for anyone selling EdTech into Europe, or for European institutions buying it: the EU AI Act explicitly classifies AI used in education as high-risk under Annex III. That covers:
- AI systems determining access or admission to educational institutions
- AI evaluating learning outcomes, including when those outcomes steer the learning process
- AI assessing appropriate education levels
- AI monitoring student behaviour during tests (proctoring)
Emotion-inference AI in education is prohibited outright as an unacceptable-risk use case. This is the bright-line rule that catches a lot of "AI proctoring" products that infer cheating from facial cues — they cannot be sold into EU schools, full stop.
The Digital Omnibus simplification package now moving through trilogue would delay some high-risk obligations beyond August 2026. Education AI is in Annex III, which means it's potentially affected. But until the omnibus is formally adopted, August 2026 still applies, and the Commission's February 2026 guidelines on high-risk classification did not soften the education provisions.
If you're a US EdTech company shipping into the EU, you need a conformity assessment, a quality-management system, human-oversight documentation, and a risk assessment per system. If you're a school or university using these tools, you're a "deployer" and you have your own obligations. Don't assume the omnibus saves you here.
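As rough triage, the Act's education provisions reduce to a short decision table. The sketch below compresses the Annex III education categories and the emotion-inference ban into a classifier; the category strings are paraphrases rather than the Act's exact wording, and this is orientation, not legal advice.

```python
# Simplified triage of an EdTech system under the EU AI Act's education
# provisions. Categories are paraphrased; orientation only, not legal advice.

PROHIBITED = {"emotion inference on students"}  # banned outright

HIGH_RISK = {  # Annex III education categories, paraphrased
    "admissions or access decisions",
    "evaluating learning outcomes",
    "assigning education levels",
    "monitoring behavior during tests",
}

def classify(purpose: str) -> str:
    if purpose in PROHIBITED:
        return "PROHIBITED: cannot be deployed in EU schools"
    if purpose in HIGH_RISK:
        return ("HIGH-RISK: conformity assessment, quality-management system, "
                "human oversight, and risk management required")
    return "Not in Annex III: lighter transparency duties may still apply"

for use in ("monitoring behavior during tests",
            "emotion inference on students",
            "drafting lesson plans for teachers"):
    print(f"{use} -> {classify(use)}")
```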
What teachers and admins should do in the next 90 days
Concrete, in priority order:
For K-12 admins:
1. Check whether your state has an AI policy mandate with a deadline this year. If yes, get a draft in front of your board.
2. Pick a stoplight framework or equivalent and put it in writing. Don't invent a new one — adapt Niles Township's or Tennessee's TSBA model.
3. Decide whether you're running a walled-garden deployment (Securly, GoGuardian, MagicSchool, etc.) or open access with monitoring. Either is defensible. The middle ground is not.
4. Train staff before students. Charlotte-Mecklenburg trained all 14,000 staff before flipping anything on. Skip this and you will have a board meeting you don't want.

For higher-ed admins:
1. Make a decision on AI detection. If you're keeping it on, document why and put a human review process in front of any consequence. If you're turning it off, communicate it now so faculty don't hand out failing grades next month based on a tool the institution no longer endorses.
2. If you don't have a campus-wide AI license, get quotes from both Anthropic and OpenAI. The pricing is moving and the negotiating leverage is real.
3. Push the policy decision down to syllabi, but give faculty a template. The "every professor for themselves" model fails when faculty don't have a default to start from.

For teachers (any level):
1. Update your syllabus this summer. Be explicit about which assignments are red/yellow/green for AI.
2. Redesign at least one assessment to require process artifacts (drafts, in-class writing, oral defense). It's the single most effective integrity intervention.
3. Try one AI tool yourself for prep work — lesson planning, rubric writing, differentiation. The teachers who get the most out of this transition are the ones who use the tools first.
Bottom line
The story of AI in education in 2026 is no longer "what is going to happen?" — it's "what's working and what isn't, and how do we scale the working part?" State laws are landing. Campus-wide AI contracts are normal. Detection is losing as a strategy. AI tutors work, when students actually use them. The research is converging on a finding that is more inconvenient than it sounds: AI helps learning when it makes thinking harder, and hurts learning when it makes thinking easier.
Plan accordingly.
Sources and further reading
- MultiState: AI in Education Legislation — 2026 State Policy Trends
- Ohio Department of Education and Workforce: AI Model Policy
- GovTech: Ohio Unveils Model AI Policy for Use by K-12 Schools
- AI for Education: State AI Guidance for K-12 Schools
- EdTech Magazine: CoSN 2026 — How K-12 Districts Are Tackling Responsible AI Adoption
- Education Commission of the States: AI in Education Task Forces
- Center for Democracy and Technology: States Focused on Responsible Use of AI in Education during the 2025 Legislative Session
- Stateline: More than half the states have issued AI guidance for schools
- US Department of Education: Guidance on Artificial Intelligence Use in Schools
- Anthropic: Introducing Claude for Education
- CNBC: OpenAI, Anthropic target college students with latest education AI announcements
- EdTech Innovation Hub: Anthropic Claude for Education Brings AI to Universities
- Fortune: Anthropic president makes a case for AI in college classrooms
- Thesify: Generative AI Policies at the World's Top Universities — October 2025 Update
- Columbia University: Generative AI Policy
- Cornell Center for Teaching Innovation: AI & Academic Integrity
- Vanderbilt University: Academic Integrity and Generative AI
- EdTech Innovation Hub: Curtin University to disable Turnitin AI detection tool in 2026
- NBC News: To avoid accusations of AI cheating, college students are turning to AI
- OngoingNow: AI Detection Tools and Academic Integrity in 2026
- Turnitin: AI and academic integrity — Policy update paper
- Chalkbeat: Why Sal Khan is rethinking how AI will change schools
- Global Society: Khan Academy rolls out AI-powered teaching tools as school districts scale up adoption
- Nature Scientific Reports: AI tutoring outperforms in-class active learning — an RCT
- Engageli: 25 AI in Education Statistics to Guide Your Learning Strategy in 2026
- arXiv: Impact of AI Tools on Learning Outcomes — Decreasing Knowledge and Over-Reliance
- EU Artificial Intelligence Act: Annex III — High-Risk AI Systems
- AI Act Recital 56: High-risk AI systems in education
- Digital Education Council: EU AI Act — What it means for universities
- European Commission: Regulatory framework for AI