Governing Generative AI at Stanford

Authors

  • Sergio Escobar
  • Silvia Lombardo
  • Elena Kim
  • Jenin Al Shalabi

DOI:

https://doi.org/10.60690/4jhh9m45

Abstract

Artificial intelligence is now deeply embedded in academic life, yet university governance frameworks have struggled to keep pace with rapid changes in AI capabilities and student adoption. This policy memo examines how generative AI is currently used by students, faculty, parents, industry representatives, and administrators at Stanford University and evaluates whether existing institutional policies adequately reflect these practices. Drawing on a review of Stanford’s 2023 Generative AI Policy Guidance, comparative analysis of peer institutions, and twenty semi-structured stakeholder interviews, the memo identifies persistent gaps related to disclosure, academic integrity, accuracy, bias, data protection, and faculty autonomy.

The findings show that while stakeholders view AI as a valuable tool for efficiency and learning, they consistently report risks related to weakened critical thinking, hallucinations, privacy leakage, bias amplification, and academic misconduct. Current policy relies heavily on course-level discretion and informal guidance, producing uncertainty and uneven enforcement across departments. To address these gaps, the memo recommends the creation of a Stanford “Code of Practice for the Use of AI” centered on transparency, data protection, assessment design, and iterative oversight. It further proposes faculty-specific and student-specific governance measures, proportional enforcement mechanisms, and the establishment of a standing AI governance framework to support continuous policy review and updating.

These recommendations aim to preserve faculty autonomy while establishing clear, institution-wide standards for responsible, transparent, and effective use of generative AI in higher education.

Cover image: colored pencil drawing of Stanford as viewed from Palm Drive (LLM-generated).

Published

2026-01-24