Law schools across the country are updating their honor codes to address how students may use artificial intelligence, spurred by the rapid adoption of AI tools in legal research and writing. The shift responds to generative AI's transformative impact on the practice of law and has prompted schools to integrate AI education into coursework and practical clinics.
However, the increasing prevalence of AI tools — including those embedded in legal research software — coupled with a lack of clear guidelines, creates a risk of ethical violations. Law schools are grappling with how best to educate students on the technology while preventing academic dishonesty. Policy approaches vary widely, from outright bans on generative AI to allowing professors to set their own classroom rules within broader guidelines.
“This is one of the really exciting things about GenAI right now. Students are seeing their professors struggle with it in real time,” said Bryant Walker Smith, associate professor at the University of South Carolina’s schools of law and engineering. The University of South Carolina School of Law is among the schools that recently revised their honor codes, making its change at the beginning of the academic year. The code, which previously made no mention of AI, now categorizes unauthorized use of the technology as plagiarism. “Our students are either going to use AI tools or be used by AI tools,” Smith said.
Rewriting the Handbooks
In April 2023, the University of California, Berkeley School of Law became one of the first law schools to issue a comprehensive set of rules on AI use, according to administrators. Berkeley permits students to use generative AI for research and editing, but not to compose an assignment or for any purpose during exams. The policy also states that AI “never may be employed for a use that would be plagiarism if generative AI were a human or organizational author.” Professors may deviate from these defaults, provided they give students written notice.
“I thought it was critical to stop the ban advocates from creating a broad prohibition on these technologies because their use will be so crucial for future professionals,” said Chris Hoofnagle, faculty director of the Berkeley Center for Law & Technology. Hoofnagle developed the policy with two other law professors and, in one of his courses, Programming for Lawyers, students begin using ChatGPT and DeepSeek by week seven.
“What many people don’t understand is that there are uses of this technology that align with a pedagogical mission and ought to be even encouraged,” Hoofnagle added.
The University of Chicago Law School’s policy went through several iterations. Initially, the school applied existing plagiarism rules to AI use, then updated the policy to require students to cite AI-generated content in their assignments. Months later, in the fall of 2023, the law school revised its AI policy again to more closely resemble Berkeley’s more flexible rules. According to Deputy Dean William Hubbard, most professors adhered to the default of not allowing AI use, except for brainstorming and proofreading. “But some professors have been very entrepreneurial in making drafting and research with AI an integral part of how they teach the material,” Hubbard said. The law school recently incorporated AI into its mandatory first-year legal research and writing course.
From Ban to Norm
Law schools are clearly taking varied stances on AI in assignments. Columbia Law School, in line with the university, prohibits the use of AI unless expressly permitted by an instructor. Conversely, both the University of Pennsylvania Carey Law School and The George Washington University Law School allow professors to establish their own policies within general guidelines on academic honesty. Laurie Kohn, senior associate dean for clinical affairs at GW Law, noted that in clinical programs, students are not only learning law but practicing under the supervision of licensed attorneys. “So, from an educational perspective, we are not doing them any service by banning the use of generative AI. It’s a norm of practice,” Kohn said.
The Washington University in St. Louis School of Law provides faculty with sample AI policies that they can adjust to fit their specific course objectives. “We don’t have a rigid policy that says you should not use AI in any way, but an individual faculty member can have such a policy,” said Peter Joy, vice dean for academic affairs.
For now, there appears to be no widespread appetite to standardize how law schools approach AI integration. While the ABA Standing Committee on Ethics and Professional Responsibility issued its first opinion on attorneys’ use of generative AI in July 2024, neither the ABA Section of Legal Education and Admissions to the Bar nor the Association of American Law Schools has established explicit rules on AI in law school classes.
At the University of South Carolina School of Law, 2L student Lauren O’Steen said her experience with AI in classes has depended on the comfort level of each professor, but she has welcomed opportunities to use AI tools in class. “Law school should reflect the legal environment we are going to go into. You should have the same tools that you are going to have outside of law school, and with that comes the responsibility, too,” O’Steen said.