Why I Let AI Write My Code, But Not My Writing
A meditation on unpleasant stinks
Since the very early days of ChatGPT, I’ve noticed what every other person worth reading online has noticed: AI writing stinks.
The “writing” that comes out of LLMs stinks in both meanings of the word. It stinks in the sense that it is not good, that it sucks. AI writing is boring, bland, soulless, eye-glazing, mind-numbing. There’s something about it that actively pisses me off as I read it. I think it’s the utter averageness of it. By the very nature of an LLM, its writing could only ever be the baby-food average of the collective writings of humanity[1].
But it also stinks in the more visceral way. You can sense that it smells bad. It’s wrong somehow. We’ve been gifted, through an eons-long chain of evolutionary history, with senses that, when presented with certain stimuli, inform us something’s off, at the gut level. That sense bypasses all that pesky modern human cogitation and tells us to get the fuck away from that shit, or vomit, or decay - before it sickens or kills us. And at a higher level, we’ve got the uncanny valley reaction - that disgusted sense we get when seemingly human faces or movements feel off, inhuman somehow - to protect us from demons or vampires or HR reps.
The stink of AI writing is like the collective soul of every HR department was distilled into some bland beige beverage and, just to ensure you don’t actually taste anything, someone injected it straight into your arm. It’s the smell of corporate bullshit mixed with the annoyance, sadness, and disgust you feel from knowing that some tactless motherfucker put this garbage in front of your eyes and expects you to read it. It’s the distillation of every dehumanizing thing about our modern tech-centered work environment into some awful liquor that simultaneously tastes awful and like nothing at all.
Have I sufficiently established that I really dislike AI writing?
And yet… I let Claude write probably 90% of my code these days. How, you might ask, can I feel so strongly about AI writing sucking, while simultaneously letting an LLM perform the craft that puts bacon on my table? The fact that I can produce the polemic above, but then go on to let Claude do my day job, feels like a big indicator of some kind of cognitive dissonance.
You know what’s funny? When I sat down to write this, I thought what was going to come out was a justification - an explanation for how viscerally hating AI writing but simultaneously letting it code for me made sense. But I think, in the process of writing this, I’ve come to a more nuanced understanding of how I feel[2]. Let’s try to unpack this.
Writing Is Different
Writing is a tool for thinking. In Ted Chiang’s excellent story The Truth of Fact, the Truth of Feeling, the main character, Jijingi, learns to read and write from a European missionary. As a result, he experiences a sort of cognitive shift. His literacy shows him a new way of thinking. Writing and reading are cast as a kind of mental technology - a way of externalizing thoughts and, in so doing, making them easier to work with. There’s a lot more to the story - it’s Ted Chiang, after all, so you know there will be some depth based on the author alone. It’s a story well worth reading. But I really only need this excerpt to make my point:
“As he practiced his writing, Jijingi came to understand what Moseby had meant: writing was not just a way to record what someone said; it could help you decide what you would say before you said it. And words were not just the pieces of speaking; they were the pieces of thinking. When you wrote them down, you could grasp your thoughts like bricks in your hands and push them into different arrangements. Writing let you look at your thoughts in a way you couldn’t if you were just talking, and having seen them, you could improve them, make them stronger and more elaborate.”
- Ted Chiang, The Truth of Fact, the Truth of Feeling
The point is that the process of writing is important, not just the artifact. The writing itself is merely the thing you transmit to another person, in the hope that you can communicate your thinking to them.
Thought itself is fluid, unstructured, and often annoyingly difficult to pin down. Capturing your thoughts in words is hard, and nearly always feels incomplete. By the time you wrestle some idea into words, a hundred others have flowed by in the stream of consciousness. The process of writing anything worth reading is that struggle itself. To write well is to dip a ladle into that river of thought, pull something out, and form it into something solid, and then continue that often frustrating process until you manage to pin down an argument. A story. A point.
If, instead, you just take your unstructured brainstorming, dump it into your favorite LLM, and let it summarize everything into neat little bullet points, you miss out on that process. Maybe, for some class of writing (like boring corporate work communications), that’s OK. But if your reader can tell that you did this - if your writing has that stink I talk about above - they’re immediately going to lose all interest. I find myself almost comically angry when I get a Slack message or an email or a Google Doc that clearly came straight from an LLM.
Even if the output of the LLM is good[3], if you fail to cover up the stink, the reader will have a gut-level reaction and dismiss it.
In short: If you couldn’t be bothered to write it, why should I bother to read it?
Now, computer code is different in at least one critical way: the code itself is not the thing you are making. Code is a means to an end. Users don’t interact with the code itself, but rather the resulting application. No one is reading your code for entertainment[4]. They just want the fucking thing to work[5]. They don’t care if you wrote the code in Java, or Python, or C++, or brainfuck. They just need to get their work done, or play their video game, or whatever. The code itself is not the point[6].
In my entire professional career, I’ve never once thought about the raw assembly that my code generates. Now, this metaphor is a bit specious, because assembly comes from code through a deterministic process (i.e. the compiler or interpreter), and LLMs are anything but deterministic - at least in the way most software devs are using them. But there’s something to the comparison. At the end of the day, the code is not the thing we care about.
To whatever degree your code produces something cool, or beautiful, or entertaining, or useful - the code doesn’t matter.
Outer Wilds is a fantastic video game, possibly the single best piece of evidence that games can deserve to be called art. I don’t know what language or engine it was coded in, and I don’t care. The code isn’t the thing. The experience works.
So, that’s it, right? We’ve established that you should do your own writing, but that it’s fine to generate your code with AI.
Well, maybe it’s a little more complicated than that. Because…
Code is a tool for thought, too
Sure, code isn’t the thing you’re making. But as any software engineer is well aware, the code does matter, at least to other developers. We need our code to be understandable, well-organized, clean - because if it’s not, it becomes progressively more difficult to work on the code.
one day code base understandable and grug can get work done, everything good!
next day impossible: complexity demon spirit has entered code and very dangerous situation!
grug no able see complexity demon, but grug sense presence in code base
demon complexity spirit mocking him make change here break unrelated thing there what!?! mock mock mock ha ha so funny grug love programming and not becoming shiney rock speculator like grug senior advise
- grugbrain.dev, on Complexity
The process of coding, at its best, can be just like the experience Jijingi encounters in the excerpt above. Putting ideas on the virtual page, moving them around, taking the time to express the flow of data or execution in a succinct, clean, well-argued manner results in better code, and as a side effect increases your understanding of the problem. It forces you to grapple with tradeoffs, with the parts of the process you’re modeling that have friction, with the edge cases - all those places where the complexity demon can creep in if faithful developers of good will don’t continually fight against entropy.
So, if I just let Claude extrude the code equivalent of pink slime all over my repos, aren’t I missing out on that same process of thought? And don’t I show a certain disrespect for my teammates in so doing? Aren’t I trading easiness today for difficulty tomorrow?
I think the answer, for me, is: not quite. To be clear, I think there’s a very real danger, when doing AI-driven coding, of creating the situation in the preceding paragraph. But there are a number of practices which are basically standard in software development that help to prevent this. On any healthy dev team, there are processes of code review, and automated testing, and manual testing. All these things combined, when everything is going well, help a team ensure that their code stays functional - by which I mean: the thing that matters (the product, the app, the game, the script, whatever) continues to work.
And so, when I let Claude write code for me, I don’t just submit the code blindly. I read through it, try to understand it, clean it up. I make changes if I think they’re needed. I build the app locally and test it out. I run the old tests and ensure they still pass. And my code, at the end of the day, isn’t getting merged without buy-in from the team, without a passing test suite, and without a demonstration of functionality when the changes get QAed[7].
There’s also another pragmatic argument for this process of ensuring that the code is understandable and well tested - it’s more likely to be successfully worked on by an LLM!
And look, if I leave the engineer’s moral high ground for a minute, and move from the realm of the hypothetical into the actual daily experience I have as a programmer, maybe this is the reality: I just don’t care that much about the code I write for work. I think that’s worth calling out. It’s possible I’m just trying to come up with an argumentative solution for a spiritual problem.
Maybe I’m justifying this distinction between belief and behavior because I’m lazy, decadent, or just burned out. Maybe the modern software dev job has just become so disconnected from users, from the actual results of the work, that it’s become harder to maintain the appropriate level of give-a-shit.
But there have been times, recently, where I worked on something that mattered to me, and I still used AI. Just last weekend, I spent several precious hours of weekend free time submitting a PR to an open source repo. Yes, I let Claude generate the code. But there was still a lot of work involved! It took human judgment and reason and experimentation to categorize and reproduce the bug, and when it was time to submit my PR, I went over it line by line. I didn’t want the maintainers to dismiss my change out of hand because it had that AI stink.
I cared about the result. I didn’t want other users to deal with the same bug. And I put a bunch of time in, for no tangible reward, to shepherd that PR through. I didn’t have to do that. I have the fix locally. It was an act of some kind of care to take the time to work through generating the code at all, let alone all the extra stuff.
But honestly? Most of the time, I’m just trying to get enough shit done to keep the boss off my back. If it’s not perfectly clean code, who cares? As long as nothing completely catastrophic happens when I deploy, I live to draw another paycheck. And if the AI saves me some time - so that I can do something I care about more, like writing, or reading, or just fucking off early and playing a video game - I think I’m OK with sacrificing a little bit of my craftsmanship. Maybe that’s hypocritical.
A Counterpoint
Now, the astute or annoying among you might be getting ready to deploy a counterargument. Let me try to simulate this, even though I’m neither astute nor annoying:
“Ah-ha, Mr. Woodsman! I’ve caught you in a contradiction! You say that it’s OK to deploy AI-generated slop if the thing itself doesn’t matter. You say that, so long as you take the time to review and re-human your code, you’re morally above board! You say that if you’re just getting shit done to earn your paycheck, it’s fine to use AI to save some time! Why, then, O Woodsman, do you rail so harshly against AI-generated writing when it’s work-related? Sure, if you’re writing some vaguely mystical piece for your little Substack, which no one will read, you should write it yourself. But why are you so butthurt when someone does the same thing you’ve just admitted to, and uses AI to write their work communications?”
Thanks, annoying and astute strawman - you’ve got a point. Here’s what I think it comes down to.
Don’t Bullshit Me
Honestly, that heading probably does enough to make the point - but let me spell it out. What I find so maddening about the AI-written work communications I am so often forced to read is that they’re bullshit. Bullshit is so pervasive in the modern workplace that it’s almost unfair to blame AI, and a taxonomy of all the types of bullshit on display in the typical work environment would take a whole ‘nother article, but suffice it to say: if the message or document you produced could have been generated by AI, it’s probably bullshit.
No, I can’t really critique you aesthetically, if it’s just a Slack message or a Google Doc with “Company Goals for 2026”. I can’t even really fault you for not doing the “intellectual work” implied by careful writing. It’s work bullshit, someone’s gotta do it, and I just admitted above that I want to save time too. We’re all trying to get shit done. I get it.
But look. In the modern era, bullshit is everywhere. Our attentional inboxes are so full of crap that distinguishing the signal from the noise leaves us with little time to actually do anything worth doing. We’ve become so inured to bullshit that it just bleeds into the background, it doesn’t even register. You should not contribute to that. We all must find ways to cut the bullshit, to do things that actually matter, to make things that mean something.
So, I’ll come off my high horse, briefly. If it’s just make-work, if it’s just the boring communication needed to coordinate labor and keep people on the same page, and most importantly, if you have taken the time to refine the output into something that communicates your boring work stuff clearly and succinctly - I’ll give you a pass. I suspect, in practical terms, that when it comes to writing, it would be easier to just write the thing yourself than to try to remove the LLM stink from generated output. But you do you.
And if your code smells like AI, but it works, and it doesn’t seem obviously stupid - again, I’ll give you a pass. Here, I think it’s probably more plausible to bring LLM output to a worthy state, and you get the added benefit of letting the tests and functionality speak for themselves.
But in any case, don’t bullshit me. If I smell bullshit - if I get the sense that you’re trying to make me read something you didn’t care enough to write and refine - whether it’s art, code, or boring work Slack messages - I’m not going to read it, and I’m going to think less of you for putting it in front of me.
Because on some deep level, that’s the whole reason for this project. The whole reason I’m out here in the woods, doing DIY metaphysics or just mundane attempts at material self-sufficiency - is that I’m sick of bullshit. Don’t make it worse.
[1] including, presumably, such notable works of literature as “all reddit comments” and “all tweets.”
[2] yes, this is a significant sentence.
[3] it’s not
[4] at least, no one well-adjusted
[5] it never does
[6] this is a truth that’s often really difficult for software engineers to get their heads around, and this difficulty has been the cause of a great deal of suffering in the world of software development
[7] this is a highly idealized version of how software development actually works in practice, and we never quite live up to the ideal, but healthy teams at least try
