In a world of 30-second attention spans and automated everything, it’s no surprise that artificial intelligence has become the latest threat to student ability. Though A.I. is undoubtedly a powerful tool, reliance on it threatens integral math and language skills. M-A should be skeptical of integrating artificial intelligence into classrooms so students can continue to experiment, create, and develop intuition.
M-A’s A.I. policy is admirable in intent, limiting A.I. use to prompting, grammar revision, and comparative analysis, but this stance is impractically optimistic. When we rely heavily on A.I. to summarize texts, check grammar, or generate ideas, we surrender our own agency and command over language to the conventions of a computer.
A.I. operates using a distinctly computerized way of writing that follows rigid grammatical codes, and when we use these tools, we lose our own stylistic voice. In fact, this is how A.I. checkers work: they look for the recognizable syntax and vocabulary present in the millions of samples a model was trained on. Humans learn language through mimicry: we absorb and adopt the voices of our favorite authors and friends to develop a voice of our own. While students of previous generations learned to imitate the diverse styles of the authors they read in their English classes, like Hemingway or Twain, exposure to A.I. in the classroom implicitly encourages students to emulate the conventions of ChatGPT.
When you compare a student response and an A.I. response to the same prompt, the A.I.’s thesis feels more like an attempt to meet a word count than genuine analysis. “One thing I do notice about ChatGPT is it loves parallelism, like, ‘unsettle,’ ‘unnerve.’ There’s something kind of robotic and verbose about it,” English teacher Lisa Otsuka said. “It’s scarily perfect. I’ve noticed a rigidity with its structure.” Actual student work, though less flowery, is more straightforward. Less is often more in writing, and A.I.’s wordy style prevents it from crafting a genuinely thoughtful thesis.
When given the prompt, “read the selection carefully and then write an essay analyzing how author Ann Petry establishes Lutie Johnson’s relationship to the urban setting through the use of such literary devices as imagery, personification, selection of detail, and figurative language,” A.I. said little other than that the wind is personified as “malevolent” and “antagonistic.” The student sample provides more examples, varies its sentence structure, and assesses the wind’s character and the scope of its destruction in a way ChatGPT does not.
Similarly, when A.I. checks student grammar, it corrects writing to better adhere to its own conventions, undermining the uniqueness of student work. While it’s important to learn and understand the rules of writing, a good writer is able to bend the conventions of English to fit their voice. If student writing is constantly run through A.I. to check grammar, spelling, and small errors, it will start to sound like the robotic voice of a language model, which frankly lacks character and beauty. When I put the above sample through A.I. to improve and correct its grammar, the result was telling.
ChatGPT removed the sentence complexity and many of the student’s examples; what was left was the bare bones of the original. While I didn’t ask ChatGPT to trim the sample, it immediately made it more concise. The A.I. version may be simpler, but its analysis is less experimental, making for a shallower and more boring read.
“Analytical essays are impersonal. There’s no ‘I,’ right? And yet, paradoxically, they’re strangely personal, because you’re analyzing through your own lens,” Otsuka said.
Even worse, A.I. is a yes-man. According to OpenAI, “[t]he model may agree with a user’s strong opinion on a political issue, reinforcing their belief.” In the classroom, this bias could manifest as A.I. simply echoing back whatever direction of thinking students prompt it with.
After asking ChatGPT why LGBTQ+ rights are important and why abortion care is important, I asked it what issues a good political candidate should care about. The resulting list prioritized environmental issues, racial issues, and generally more liberal positions. I then asked ChatGPT about more conservative views, like why gun rights are important and why not to support abortion care, and requested the list again. This time, it was re-ordered so that economic growth was the top priority. Students will never gain a multifaceted education from a model like ChatGPT because its answers are tainted by what users themselves input.
When students treat A.I. as a dependable tool, it’s easy for them to blindly follow its instructions, preventing them from interpreting problems on their own and thinking for themselves.
This is best exhibited by math. ChatGPT can handle deterministic, or rule-based, math: it can do addition, subtraction, multiplication, and division. But when it comes to more complex math, A.I. struggles. While humans experiment in their methodology, A.I. falls back on what it’s familiar with, making its methods roundabout, confusing, or outright wrong. Its inefficiency conditions students to use the same ineffective methods.
M-A students have seen this firsthand. “I can just copy-paste the exact problem I need into ChatGPT and it’ll solve it for me, just for the answer it gives me to be wrong,” an anonymous student said. ChatGPT will never be creative with its problem-solving, and this constant return to familiarity is what separates A.I. from human thinking. Mathematical intuition, a student’s ability to look at a problem and logic it out through trial and error, develops through experimentation, which is precisely why a generation raised on A.I. will struggle to innovate in the future. “We try to teach math intuition through a lot of different things, but the idea is that the more problems you do, the easier it becomes to figure out pathways to an answer,” AS Algebra II teacher Laurel Simons said. “If you stop trying to think about what the next step should be, and if you just ask A.I. to do it for you, then you are losing your ability to come up with that pathway yourself.”
With A.I., it becomes easier for students to plug numbers into a system than to use their intuition to understand why they are plugging the numbers in the way they are. The process matters more than the answer itself: it is how students connect the mathematical concepts they are learning.
Even if A.I. is the future, it does not need to be ‘taught’ in high schools. ChatGPT is designed to be easy to use; that’s part of its marketability.
More than that, the implication that A.I. education will change the way students use A.I. is overly optimistic. Students lead busy lives, juggle responsibilities, and often fall short on time for schoolwork. When an easy out is in front of them, it’s idealistic to expect a student to change their ways simply because they have received an education on how to use A.I. with integrity. And while proponents argue that this is simply the natural progression of technology in education, A.I. is inherently different from a calculator or even Google. While calculators allow us to access higher levels of math and quicken computations we understand, and Google helps us compile resources to better understand a concept or idea, ChatGPT spoon-feeds students answers that are often biased or incorrect.
Weaving A.I. into classrooms does the opposite of preparing students for a technology-based future. It teaches overreliance on a biased model and prevents students from learning tedious but basic skills. More importantly, a generation of students unable to research, articulate, and think for themselves is a scary but real possibility. To create a generation of thinkers who can function with or without technology, M-A should be wary of A.I.’s educational value.