AI and Academic Integrity at Cate
- Apr 25
By: Avery Polynice ’25
Whether it’s to clean up a clunky sentence or bounce ideas around for an upcoming paper, AI tools are increasingly woven into the academic routines of Cate students. As global usage grows and the software advances, the line between AI support and AI substitution becomes increasingly blurred. In response, Cate School added a new AI policy to the Honor Code in 2023.
“We had to respond super quickly because AI burst on the scene in late fall 2022,” stated Annalee Salcedo, Dean of Academics. “And then by March 2023, it had exploded even further. And as is often the case with technology, students are two steps ahead of us.”
Cate’s AI policy is designed to help students use these resources responsibly while reinforcing academic integrity. The expectation is that students are “not allowed to use AI tools unless explicitly approved [by the teacher].”
From an administrative standpoint, the goal is to promote inquisitive and intentional learning, particularly with regard to writing and critical thinking. “We wanted to emphasize our values around the craft of writing and learning to write, and not taking those shortcuts, and then really enumerating the expectations around permissible and non-permissible use of AI,” Salcedo says.
While the policy addresses academic dishonesty, its application is driven by deeper psychological concerns about how students learn. “Overreliance on AI is a form of cognitive offloading,” explains Rebekah Barry, Head of the History Department. “The more AI does, the less the students lay down the foundations needed to grow their brains.”
As a department head and teacher, Barry witnesses student growth and learning firsthand. “It’s such a gift to be in a small school where you can do meaningful assignments and have a teacher walk you through the process. You’re in a position to develop real learning skills and writing skills, which we know neurologically have tremendous benefits on critical thinking,” she says.
Barry acknowledges that enforcing this policy comes with challenges. “When you establish clear boundaries like, ‘You can use AI here, but not here—and here’s why,’ that removes some of the temptation to bend the rules.” For that reason, she believes clear communication and teaching self-regulation are more effective than outright prohibition. “It’s like saying, ‘You can never eat chocolate again.’ Then suddenly all you want is chocolate!” She continues, “That’s not an effective way to build restraint or thoughtful decision-making. It’s about knowing when it’s appropriate and also building limitations for yourself.”
Developing a clear and functional AI policy has required ongoing revision and collaboration. “Faculty members are in the same place as students. We’re busy, and it’s hard to find time to be thoughtful and collaborative,” Barry says. “At Cate, students get an exceptional education, so anything we do with AI must meet those standards. We’re not going to rush into it – we want to get it right.”
From a student perspective, Daisy Gemberling ’25 says the policy can feel ambiguous at times and inconsistent across disciplines: “Honestly, I understand the policy is not to use it in any part of the creative process, so don’t even use it to brainstorm… but then for STEM, I understand it is to be used at your teacher’s discretion.” She goes on to share, “Someone I know got some sort of disciplinary action for having AI notes at the bottom of an essay, but not actually in the essay, and I felt like that was a little bit of an overreaction because it was their brainstorming process.”
Barry agrees that expectations about AI in the classroom are not always communicated consistently: “It can get confusing on specific assignments if there’s no clarity around AI use.” She explains, “Teachers have the responsibility to be clear about what those parameters are and to communicate them clearly in the History Department, but how and where teachers use them is not very widespread.”
To address these grey areas, multiple members of the Cate faculty – including Barry and Salcedo – have been attending workshops to learn about the effects of AI on student learning and ways to incorporate it responsibly into the Cate curriculum. “I’ve done a lot of workshops, and right now I’m working on redesigning the sophomore World History research paper to pull in specific AI use,” Barry says.
Cate’s stance on AI integration has evolved over the last 20 months. Today, faculty and administrators focus on teaching students AI literacy and self-regulation: not only what is and isn’t allowed, but how to work responsibly with the technology. “We’re now looking at techniques for developing AI-resistant assignments as well as AI-assisted assignments, and how those two can coexist in the same curriculum and school,” Salcedo says.
Although the initial drafting process was faculty-led, the need for student input has become increasingly apparent. “We have not yet found a way to get students and teachers in the same room to develop our AI stance and policy,” Salcedo says. “Partly because we’re not yet getting teachers in the same room in effective ways, just ourselves.”
Drawing from personal experience, Barry adds, “AI adds this entire new layer. It’s like we’re flying the plane while building it. So maybe there needs to be a bit of grace on all sides – we’re all figuring this out together.”
Salcedo explains that the lack of open communication between faculty and students about AI use in the classroom has contributed to growing stigma and confusion around expectations: “I think as we get better at developing assignments where AI is a part of it, then we can distinguish those from the assignments where AI is not part of it.” She acknowledges that there is still work to be done, returning to Barry’s chocolate analogy: “But right now, because there isn’t enough experience with both, it still kind of feels like, ‘You just don’t eat chocolate at all.’ And I think that makes it harder for students to distinguish what’s okay and what’s not.”
Optimistic about AI’s potential to enhance student work, Nyle Ahmad ’25 shares, “I did a project on Clarence Thomas and his scandals. I used ChatGPT as a bouncing board to get me sources that would be relevant.” He believes the policy and conversations surrounding AI use at Cate should be reframed with a more open mind: “If we had a progressive education where students are learning the science behind AI, like prompt engineering, we’d be going into higher education with a foot up.”
In agreement, Gemberling adds, “I think teachers need to recognize that AI is not going away… What can AI not do that you can teach? I think that’s a really important question for teachers to ask.”
As for her hopes for the future, Barry says, “I’d love to create space for teachers to sit down with students and hear what they’re doing with AI. Imagine if some of our professional development came from students. That would be incredible.”
Salcedo shares this aspiration for a more collaborative approach: “Once we do, it’ll be like, ‘Okay, it’s an appropriate time to eat chocolate now,’ versus, ‘Actually, I shouldn’t eat chocolate—it’s eight in the morning.’”
However, in the meantime, Barry states, “There just isn’t time. We’re trying to do this while doing a million other things. It just feels like we need 5 seconds to land the plane and figure things out.”
