What do we lose through using Generative AI to write?

[Photo: a computer running Generative AI via ChatGPT]

09/02/2026

Since the earliest times, advances in technology have marked the loss of skills we once exercised ourselves, unaided. This is sometimes invoked as an argument in favour of the growing use of Generative AI: we don’t fret about those past losses in light of the leaps in innovation the new technologies made possible, and likewise, any losses associated with Generative AI will be outweighed by what it makes possible. In this blog post, Dr Andrew Kirton considers how we should think about what we lose when we turn to Generative AI to write for us.

When our early hominid ancestors started cooking with fire, the high-energy burden of chewing food to break it down was taken off our jaws and muscles and placed onto burning materials instead. With our bodies gaining a surplus of energy, we evolved smaller jaws and bigger craniums, which allowed the development of complex language and, eventually, writing. Now, by consuming huge amounts of fuel, precious metals and water, we can offload the work of digesting the sum total of recorded communication onto Large Language Models, powered by data centres: centralised cauldrons cooking that whole corpus into a giant algorithmic stew. The books we could write once freed from endless chewing are now being fed to the models, so they can think and write about them for us. We can ask the models to summarise, analyse and critique the corpus, because they have a baked-in impression of how those verbs knock a stream of words off course.

What aspects of ourselves might we render vestigial if we use GenAI to think and write for us? I’m not sure whether Homo heidelbergensis sometimes felt a vague, out-of-reach longing for big teeth and honest, raw cellulose. But if we routinely offload abstract, imaginative and expressive writing onto machines, we presumably lose something fundamental about who we are. I can’t see what having to imagine is holding us back from (unless GenAI has rotted my imagination to the extent that I can’t see the heights of what’s possible once I am freed from having to imagine). Or were a large brain and complex language just a tree-of-life quirk of adapting to the African savannah, a burden that LLMs can carry for us in the long term?

An optimistic take is that offloading a writing task onto GenAI is no more a threat than using a calculator to solve a maths problem. But a calculator distils a deductive space that stretches beyond us, letting us always arrive at a result that is simply the answer, based on the axioms of mathematics encoded into it, the same axioms we follow when tackling a problem by hand. If we’re good enough at maths, we inevitably converge on the same result working by hand as when using a calculator. A calculator also tackles a circumscribed domain of non-everyday problems, rather than, as AI companies’ adverts promise, any and all questions we might face, like what to have for dinner.

If we ask GenAI to write something rather than doing it ourselves, by hand and brain, we’re not converging on the answer. GenAI is a system that regresses toward an average, predictable result gleaned from a corpus of human artefacts, so its gravitational pull is toward a pastiche of what it has seen before in relation to the words we give it. Outsourcing your thinking or writing to an LLM, unlike using a calculator, is not having the tool ‘take care of’ a component of the wider project to save time. There is no ‘taking care of’ that task except by giving up your own interpretation or perspective and deferring instead to the predictable average, or what I think of as the mulch.

Students, and people in general, want to defer to the mulch because we’re so often writing to fit in with what’s expected. I notice the temptation to use GenAI to write this blog post when I see it as a deadline I committed to meet, with an expected word count. I don’t, though, because I like the process of thinking and writing about what interests me. Typically, that’s not what students are asked to do. We tell them to think and write for themselves, while giving them set questions and marking criteria that say we want them to do it like this. Other teachers I’ve spoken to share the experience of advising a student on an essay plan: we steer them away from ambitions to really get to grips with the question, and instead direct them to play the game and do what’s expected. Students usually fall in line because a higher grade, and the opportunities it unlocks, are at stake. When the parameters of a writing task are narrow and departing from what’s expected is pointless or punishable, it makes sense to defer to the mulch.

If we use GenAI to think and write, we do lose something: we resort to what’s expected, society doesn’t move forward, and everything stagnates. I suspect that as long as people have an urge to pin down and express in words what they really think, independent thought and writing won’t die. It might just happen outside universities, if their incentive structures primarily reward what’s expected rather than what’s genuine.

Dr Andrew Kirton has written about the nature of trust and its role in interpersonal relationships, institutions and societies. He did his PhD at the University of Manchester and has worked in IDEA The Ethics Centre at the University of Leeds’ School of Philosophy, Religion and History of Science since 2018. His teaching specialism is AI Ethics; his research specialisms are moral philosophy and the philosophy of mind and action. He is currently working out ideas about how we settle on viewpoints through dealing with cognitive dissonance and our needs for attachment and social acceptance, and how morality falls out of this.