Editorial: Don't let AI kill creativity
Not unlike an athlete's muscles, the human brain needs to be exercised, challenged, worked to near exhaustion, rested and put back to work. For a child to enter adulthood with a properly functioning brain, he needs to spend years reading, writing, solving math problems, and acquiring the ability to wield the knowledge he has learned. Yet America's schools and culture seem inclined to create an effortless path to achievement.
Artificial-intelligence tools like LLMs – large language models – are the modern version of the Civil War soldier who was paid by a well-to-do draftee to serve in his place. The soldier learns how to fight, to forage, to follow orders, to endure privation and overcome fear. The fellow who paid the soldier to serve in his stead learns nothing.
Writing in The New York Times July 18, Meghan O'Rourke, a professor of creative writing at Yale University, cited a Massachusetts Institute of Technology Media Lab study that tells the tale of AI misuse. The study monitored 54 students, some of whom wrote essays using LLMs while others did not use the tool. Using an EEG machine to measure brain activity, the MIT researchers found the students who used LLMs "demonstrated weaker brain connectivity, poorer memory recall of the essay they had just written, and less ownership over their writing, than the people who did not use LLMs." The study concluded that the "results raise concerns about the long-term educational implications of LLM reliance."
Teachers, of course, are scouting for ways to make sure that student papers are being written by the students, not by clever technology. Not many years ago, it usually was enough to google a few choice words from a student's paper to detect possible plagiarism. Such tactics will not expose AI-produced essays.
Retired public-school teacher Barth Keck, currently teaching a college English-composition course, has devised an "old school" regimen of in-class writing assignments. "On occasion, I'll give a pencil-and-paper pop quiz based on the homework reading to keep the students honest," he wrote for CT News Junkie on Dec. 2. "Also, I've assigned on-demand, in-class written responses that students must compose by hand."
A student whose poor grammar, spelling and thinking skills are exposed in his in-class work won't get away with turning in flawless AI-generated essays – but that preventive measure requires the teacher to go to the trouble of assigning in-class writing, and of actually reading and evaluating it.
Mr. Keck homed in on one of the major dangers of AI: "Perhaps the most worrisome factor with artificial intelligence is how platforms produce 'hallucinations,' or information that is misleading, biased, or simply wrong," he wrote. Quoting from the journal Science, he added: "Large language models such as those that underpin OpenAI's popular ChatGPT platform are prone to confidently spouting factually incorrect statements." Widespread use of AI therefore poses a heightened risk of widespread distribution of false data.
An argument also could be made, however, that students should learn the capabilities and limitations of AI. Banning its use in the classroom may be perceived as stunting their growth as eventual participants in the modern economy. Only years of experience with this technology will expose the full range of its benefits and hazards – and by then, it will be too late to steer users away from the primrose paths AI has opened.
Sadly, human nature dictates that most people will follow the path of least resistance. They'll use AI in ways that make everyday tasks easier, in the classroom and the workplace; they won't question whether the outcomes are better in the long term. And better, they assuredly will not be, if AI tools fulfill their potential to eliminate human creativity from the formation of thought and expression.



















