Generative AI: It shows up everywhere. From the endless pit of the YouTube recommended page to notifications from The New York Times, there always seems to be something trending about ChatGPT that begs for our attention. Recently, generative AI has taken over the Lawrenceville community, with an influx of school-wide policies aimed at addressing students’ use of generative AI. However, the policies’ details remain unclear; rather than ban this tool, the School should find comprehensive ways to incorporate generative AI into our daily lives, as it will continue to play a major role in academics and beyond. As President-elect Bryce Langdon ’24 advocated in his campaign, the School’s policy should focus “on educating students on tools [as opposed to] going straight to discipline.”
On Monday, April 10, 2023, Dean of Academics Alison Easterling informed students of the provisional policy surrounding the usage of generative AI:
"The Lawrenceville School’s current position on generative AI is that unless a student has clear and specific permission from their teacher to use AI tools in completing an assignment, using them will be considered a form of academic dishonesty (specifically, a form of contract cheating) that may result in both an academic and disciplinary response."
Despite the recent school-wide announcement discussing policies relating to generative AI, the implementation and details of these policies are not clear. How will the administration define “completing an assignment”? Is generative AI usage limited to work assigned by teachers, or does it extend to any studying related to the School curriculum? Similarly, how will Langdon “educate students,” and what will it look like for students to not go “straight to discipline”? ChatGPT and generative AI are evolving tools, and the School’s commitment to keeping its policies “provisional” reflects the lack of clarity surrounding what education with generative AI looks like. With disciplinary hearings over the use of tools such as ChatGPT already held, the administration must inform students of the specific boundaries of academic honesty violations in relation to generative AI.
Already game-changing in its infancy, generative AI is progressing unpredictably. Even if specific exceptions are passed, policies regarding the use of generative AI risk becoming obsolete in a few months—hence the School’s decision to propose only provisional policies so far. Lawrenceville’s new blanket ban on generative AI in completing any assignment makes sense, allowing Lawrenceville students to learn and succeed as they always have without using ChatGPT. However, does Lawrenceville want to resemble John Henry, racing the steam engine that will eventually take over the world? Or does the School want to embrace a revolutionary technology that can access all the information on Google and increase students’ learning efficiency?
Lawrenceville must seek a balance and produce students who can independently write, analyze, calculate, and perform all the skills that develop one’s intelligence. The ability to generate essays, poems, or summaries in a matter of seconds undermines students’ opportunity to learn through trial and error. But generative AI systems like ChatGPT have the potential to augment students’ learning for the better. ChatGPT can generate practice prompts for in-class essays and final exams, create study guides from just a few key terms, and help students find textual evidence in assigned reading, albeit not with incredible accuracy.
None of the uses of generative AI mentioned above neatly fits within the “completing an assignment” label recently restricted by the Dean of Academics Office. Indeed, Lawrenceville’s generative AI policy should permit the uses above. By using ChatGPT to generate practice prompts or make study guides, students are not skipping assignments by feeding them into ChatGPT; rather, students are consulting ChatGPT as they would a friend or a search engine to fill in gaps that the Lawrenceville curriculum does not cover. Every informed student knows not to use ChatGPT for research—ChatGPT ingests everything posted on the internet, and thus the one percent of information on Google that might pertain to a research topic is diluted by the 99 percent chaff of false information and extremist websites. ChatGPT should not write students’ essays, as it cannot effectively do students’ research for them or replace the critical learning and thinking so essential to the writing process, but it can present students with abridged perspectives. Lawrenceville should consider allowing students to use generative AI, with its access to compiled resources on almost any subject, as a consulting tool to see what other people think of the issues they are learning about in school. ChatGPT can extend the Harkness method from a small table in a locked building on a gated campus to the greater world around us.
So how should generative AI regulation look at Lawrenceville? The easiest, most reliable, and most inclusive decision is to fit generative AI use within Lawrenceville’s tried-and-true academic honesty regulations. Viewing generative AI solely as a consulting tool to augment learning, Lawrenceville should treat consulting AI as equivalent to consulting a friend or looking something up on a search engine. Anytime students are allowed to talk to a friend about an assignment (except for math problem sets, where students may only discuss the problems with people in their class), students should have the right to use ChatGPT. Anytime students are allowed to consult a search engine while completing or researching an assignment, they should have the ability to use ChatGPT. Outside of these consultation uses, all generative AI use should, as we see it at this moment, be banned to allow students to work, struggle, and learn as much as they can at Lawrenceville. Lawrenceville should incorporate ChatGPT and other generative AI tools into its current regulations as consulting tools and nothing more.
Lawrenceville and its policies do not contain all the answers. However, the School doesn’t need all the answers. We don’t need a permanent policy that accounts for every detail regarding generative AI—what we need is a clear policy on what is and isn’t academic dishonesty. The recent email detailing a one-paragraph policy, buried deep in our inboxes, isn’t enough communication. As generative AI continuously develops and academic institutions around the nation fine-tune their policies, the question is not whether permanent policies will develop, but when they will, and who will get harmed in the meantime.