What’s the point of confirming? AI can lie, and so can humans.
I believe you, but that’s just a gut feeling. I guess the best way to put this is that anyone can write what you wrote with AI and claim it wasn’t written by AI.
The decision to stop hiring technical writers usually feels reasonable at the moment it’s made. It does not feel reckless. It feels modern. Words have become cheap, and documentation looks like words. Faced with new tools that can produce fluent text on demand, it is easy to conclude that documentation is finally solved, or at least solved well enough.
That conclusion rests on a misunderstanding so basic it’s hard to see once you’ve stepped over it.
Documentation is not writing. Writing is what remains after something more difficult has already happened. Documentation is the act of deciding what a system actually is, where it breaks, and what a user is allowed to rely on. It is not about describing software at its best, but about constraining the damage it can do at its worst.
This is why generated documentation feels impressive and unsatisfying at the same time. It speaks with confidence, but never with caution. It fills gaps that should remain visible. It smooths over uncertainty instead of marking it. The result reads well and fails quietly.
Technical writers exist to make that failure loud early rather than silent later. Their job is not to explain what engineers already know, but to notice what engineers have stopped seeing. They sit at the fault line between intention and behavior, between what the system was designed to do and what it actually does once released into the world. They ask the kinds of questions that slow teams down and prevent larger failures later.
When that role disappears, nothing dramatic happens. The documentation still exists. In fact, it often looks better than before. But it slowly detaches from reality. Examples become promises. Workarounds become features. Caveats evaporate. Not because anyone chose to remove them, but because no one was responsible for keeping them.
What replaces responsibility is process. Prompts are refined. Review checklists are added. Output is skimmed rather than owned. And because the text sounds finished, it stops being interrogated. Fluency becomes a substitute for truth.
Over time, this produces something more dangerous than bad documentation: believable documentation. The kind that invites trust without earning it. The kind that teaches users how the system ought to work, not how it actually does. By the time the mismatch surfaces, it no longer looks like a documentation problem. It looks like a user problem. Or a support problem. Or a legal problem.
There is a deeper irony here. The organizations that rely most heavily on AI are also the ones that depend most on high-quality documentation. Retrieval pipelines, curated knowledge bases, semantic structure, instruction hierarchies: these systems do not replace technical writing. They consume it. When writers are removed, the context degrades, and the AI built on top of it begins to hallucinate with confidence. This failure is often blamed on the model, but it is really a failure of stewardship.
Responsibility, meanwhile, does not dissolve. When documentation causes harm, the model will not answer for it. The process will not stand trial. Someone will be asked why no one caught it. At that point, “the AI wrote it” will sound less like innovation and more like abdication.
Documentation has always been where software becomes accountable. Interfaces can imply. Marketing can persuade. Documentation must commit. It must say what happens when things go wrong, not just when they go right. That commitment requires judgment, and judgment requires the ability to care about consequences.
This is why the future that works is not one where technical writers are replaced, but one where they are amplified. AI removes the mechanical cost of drafting. It does not remove the need for someone to decide what should be said, what must be warned, and what should remain uncertain. When writers are given tools instead of ultimatums, they move faster not because they write more, but because they spend their time where it matters: deciding what users are allowed to trust.
Technical writers are not a luxury. They are the last line of defense between a system and the stories it tells about itself. Without them, products do not fall silent. They speak freely, confidently, and incorrectly.
Language is now abundant.
Truth is not.
That difference still matters.
Let me explain what happened here, because this is very human and very stupid, and therefore completely understandable.
We looked at documentation and thought, Ah yes. Words.
And then we looked at AI and thought, Oh wow. It makes words.
And then we did what humans always do when two things look vaguely similar: we declared victory and went to lunch.
That’s it. That’s the whole mistake.
Documentation looks like writing the same way a police report looks like justice. The writing is the part you can see. The job is everything that happens before someone dares to put a sentence down and say, “Yes. That. That’s what this thing really does.”
AI can write sentences all day. That’s not the problem. The problem is that documentation is where software stops flirting and starts making promises. And promises are where the lawsuits live.
Here’s the thing nobody wants to admit: technical writers are not paid to write. They are paid to be annoying in very specific, very expensive ways. They ask questions nobody likes. They slow things down. They keep pointing at edge cases like a toddler pointing at a dead bug going, “This too? This too?”
Yes. Especially this too.
When you replaced them with AI, nothing broke. Which is why you think this worked. The docs still shipped. They even looked better. Cleaner. Confident. Calm. That soothing corporate voice that says, “Everything is fine. You are holding it wrong.”
And that’s when the rot set in.
Because AI does not experience dread. It does not wake up at 3 a.m. thinking, “If this sentence is wrong, someone is going to lose a week of their life.” It does not feel that tightening in the chest that tells a human writer, This paragraph is lying by omission.
So it smooths. It resolves. It fills in gaps that should stay jagged. It confidently explains things no one actually understands yet. It does what bad managers do: it mistakes silence for agreement.
Over time, your documentation stops describing reality and starts describing a slightly nicer alternate universe where the product behaves itself and nobody does anything weird.
This is how you get users “misusing” your product in ways your own docs taught them.
Then comes my favorite part.
You notice the AI is hallucinating. So you add tooling. Retrieval. Semantic layers. Prompt rules. Context hygiene. You hire someone with “AI” in their title to fix the hallucinations.
What you are rebuilding, piece by piece, is technical writing. Only now it’s worse, because it’s invisible, fragmented, and no one knows who’s responsible for it.
Context curation is documentation.
Instruction hierarchies are documentation.
If your AI is dumb, it’s because you fired the people who knew what the truth was supposed to look like.
And don’t worry, accountability did not get automated away while you weren’t looking. When the docs cause real damage, the model will not be present. You cannot subpoena a neural net. You cannot fire a prompt. You will be standing there explaining that “the system generated it,” and everyone will hear exactly what that means.
It means nobody was in charge.
Documentation is where software admits the truth. Not the aspirational truth. The annoying truth. The truth about what breaks, what’s undefined, what’s still half-baked and kind of scary. Marketing can lie. Interfaces can hint. Documentation has to commit.
Commitment requires judgment.
Judgment requires caring.
Caring is still not in beta.
This is not an anti-AI argument. AI is great. It writes faster than any human alive. It just doesn’t know when to hesitate, when to warn, or when to say, “We don’t actually know yet.” Those are the moments that keep users from getting hurt.
The future that works is painfully obvious. Writers with AI are dangerous in the good way. AI without writers is dangerous in the other way. One produces clarity. The other produces confidence without consent.
Technical writers are not a luxury. They are the people who stop your product from gaslighting its users.
AI can generate language forever.
Truth still needs a human with a little fear in their heart and a pen they’re willing to hesitate with.
If the business can no longer justify 5 engineers, then it might only have 1.
I've always said that we won't need fewer software developers with AI. It's just that each company will require fewer developers, but there will be more companies.
For example:
2022: 100 companies employ 10,000 engineers
2026: 1,000 companies employ 10,000 engineers
The net result is the same for employment. But because AI makes each company that much more efficient, many businesses that weren't financially viable when they needed 100 engineers might become viable with 10 engineers + AI.
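Spelled out as a quick sketch (using the hypothetical figures above, not real labor data):

    # Hypothetical figures from the comment above, not real labor data.
    engineers_total = 10_000

    companies_2022 = 100    # -> 100 engineers per company
    companies_2026 = 1_000  # -> 10 engineers per company, with AI

    per_company_2022 = engineers_total // companies_2022
    per_company_2026 = engineers_total // companies_2026

    # Total employment is unchanged; each company just needs 10x fewer engineers.
    assert companies_2022 * per_company_2022 == companies_2026 * per_company_2026
    assert per_company_2022 == 10 * per_company_2026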
The person you're replying to is obviously and explicitly aware that that is another scenario, and the whole point of their comment was to argue against it and explain why they think something else is more likely. Merely restating the thing they were already arguing against adds nothing to the discussion.
Economics 101, right? Since developing software, writing the technical documentation, and doing QA all of a sudden became 1,000x cheaper than they were a year ago, how come we don't see a substantial increase in demand for software/QA/doc engineers? I see the opposite happening right now, e.g. many people losing their jobs to a $30/month AI model.
Not really a contradiction, since the entire point of jobs and the economy at all is to serve the specific needs of humanity, not to maximize paper clip production. If we should be learning anything from the modern era, it's something that should always have been obvious: the Luddites were not the bad guys. The truth is you've fallen for centuries-old propaganda. Hopefully someday you'll evolve into someone who doesn't carry water for paperclip maximizers.
Zero labor cost should see the number of engineers trend towards infinity. The earlier comment suggested the opposite: that it would fall to just 1,000 engineers. That would indicate that the cost of labor has skyrocketed.
What difference does that make? If the cost of an engineer is zero, they can work on all kinds of nonsensical things that will never be used/consumed. It doesn't really matter as it doesn't cost anything.
> That's just not how people or organizations run by people operate.
Au contraire. It's not very often that the cost of labor actually drops to anywhere close to zero, but we have some examples. The elevator operator is a prime example. When it was costly to hire an operator we could only hire a few of them. Nowadays anyone who is willing to operate an elevator just has to show up and they automatically get the job.
If 1,000 engineers are worth having around, why not an infinite number of them, just like those working as elevator operators? Again, there is no cost in this hypothetical scenario.
> Cost is not the only driver to demand.
Technically true, but we're not talking about garbage here. Humans are always valuable to some degree, just not necessarily valuable enough when there is a cost to balance. But, again, we're talking about zero cost. I expect you are getting caught up in thinking about scenarios where labor still has a cost, perhaps confusing zero cost with zero payroll?
Five engineers could be turned into maybe two, but probably not less.
It's the 'bus factor' at play. If you still want human approvals on pull requests, then if one of those engineers goes on vacation or leaves the company, you're stuck with one engineer for a while.
If both leave then you're screwed.
If you're a small startup, then sure, there are no rules and it's the wild west. One dev can run the world.
This was true even before LLMs. Development has always scaled very poorly with team size. A team of 20 is at most about twice as productive as a team of 5, and a team of 5 is only marginally more productive than a team of 3.
Peak productivity has always been somewhere between 1-3 people, though if any one of those people can't or won't continue working for one reason or another, it's generally game over for the project. So you hire more.
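One rough way to see why (a sketch of the classic pairwise-communication-channels model; my gloss, not something the comment's numbers were derived from):

    # Pairwise communication channels in a team of n people: n * (n - 1) / 2.
    # A crude model of coordination overhead, not a measurement.
    def channels(n: int) -> int:
        return n * (n - 1) // 2

    for n in (3, 5, 20):
        print(n, channels(n))  # 3 -> 3, 5 -> 10, 20 -> 190

Going from 5 to 20 people is 4x the head count but 19x the coordination paths.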
This is why small software startups time and time again manage to run circles around organizations with much larger budgets. A 10-person game studio like Team Cherry can release smash hit after smash hit, while Ubisoft, with 170,000% the personnel count, visibly flounders. Imagine doing that in hardware, like if you could just grab some buddies and start a business out of your garage that successfully competes with TSMC. That's clearly not possible. But in software, it actually is.
The tech writer backlog is probably worse, because writing good documentation requires extensive experience with the software you're documenting, and there are four types of documentation you need to produce (presumably the usual taxonomy: tutorials, how-to guides, reference, and explanation).
Yes. I have been building software and acting as tech lead for close to 30 years.
I am not even quite sure I know how to manage a team of more than two programmers right now. Opus 4.5, in the hands of someone who knows what they are doing, can develop software almost as fast as I can write specs and review code. And it's just plain better at writing code than 60% of my graduating class was back in the day. I have banned at least one person from ever writing a commit message or pull request again, because Claude will explain it better.
Now, most people don't know how to squeeze that much productivity out of it, most corporate procurement would take 9 months to buy a bucket if it was raining money outside, and it's possible to turn your code into unmaintainable slop at warp speed. And Claude is better at writing code than it is at almost anything else, so the rest of y'all are safe for a while.
But if you think that tech writers, or translators, or software developers are the only people who are going to get hit by waves of downsizing, then you're not paying attention.
Even if the underlying AI tech stalls out hard and permanently in 2026, there's a wave of change coming, and we are not ready. Nothing in our society, economy or politics is ready to deal with what's coming. And that scares me a bit these days.
"And it's just plain better at writing code than 60% of my graduating class was back in the day".
Only because it has access to vast amount of sample code to draw a re-combine parts. Did You ever considered emerging technologies, like new languages or frameworks that may be a much better suited for You area but they are new, thus there is no codebase for LLM to draw from?
I'm starting to think there's a risk of technological stagnation in many areas.
> Have you ever considered emerging technologies, like new languages or frameworks that may be much better suited to your area, but are so new that there is no codebase for an LLM to draw from?
Try it. The pattern matching these things do is unlike anything seen before.
I'm writing a compiler for a language I designed, and LLMs have no trouble writing examples and tests. This is a language with syntax and semantics that does not exist in any training set because I made it up. And here it is, a machine is reading and writing code in this language with little difficulty.
Caveat emptor: it is far from perfect. But so are humans, which is where the training set originated.
> I'm starting to think there's a risk of technological stagnation in many areas.
That just does not follow for me. We're in an era where advancements in technology continue to be roughly quadratic [1]. The implication you're giving is that the advancements are a step function that will soon hit its final step (or already has).
This suggests that you are unfamiliar with, or unappreciative of, how anything progresses in any domain. Creativity is a function of taking what existed before and making it your own. "Standing on the shoulders of giants", "pulling oneself up by the bootstraps", and all that. None of that is changing just because some parts of it can now be automated.
Stagnation is the very last thing I would bet on. In part because it means a "full reset" and loss of everything, like most apocalyptic story lines. And in part because I choose to remain cautiously optimistic.
The post is brilliant, interesting, and deeply performative. It can be all those things, and more. It feels like being shown a display case at your friend's private library (“Did you read them all?” “Oh, these are just for this week” was Umberto Eco's reply when folks asked him about his 50k books). Obscure references, namedropping, the right doses of self-deprecation, the footnotes (gosh, the footnotes!).
Nobody writes like this just for themselves. It's for the show. It's their mansion of words, and it's there to wow bystanders. Mind you, I'm not condemning, merely stating why the post somewhat irks me. However, I respect the intellectual depth of the author; I might even have a beer with them (though it couldn't be a standard lager, I guess). The Internet would be a better place if it were full of content like this post.
Edit: I'm commenting on the post, not on the author. I don't know them. I'd love to.
Using "performative" as a pejorative is dismissive. I like to read and I like to write. These are my hobbies and as a result posts like this come out. I will not apologize for finding certain topics exciting and being excited by a desire to share my excitement with the world. You say that the "Internet would be a better place if it'd be full of content like this post." I agree, and so I share.
Apologies: "performative" was a poor word choice and I can no longer edit the comment. I didn't mean to suggest the enthusiasm isn't genuine. What I was trying to say (clumsily) is that the post is clearly crafted with care for how it lands, which isn't a bad thing.
It strikes me as a little disappointing the way commenters seem to think they know you, and seem to respond to your thoughtful work by picking at you personally.
From the root comment that speculates about your existential happiness (he chose a partner and kids instead, and is happier that way than whatever he assumes your life is like!), to the gp comment that passes judgment on your intentions in writing at all.
I’m not really sure what to make of that, but that kind of behavior is the reason I keep my writing to myself (and specific people I email directly) and never share it. I don’t have the patience to deal with the uninvited judgment, and I worry that I’d respond to the unjustified demands by internalizing them.
My life is richer as a result of you being able and willing to deal with all this, and sharing what stimulated you this year. If I didn’t like it, I’d go read something else and politely abstain from judgment. As it happens, I liked it very much, and I did not go read something else. Thank you.
Thank you for that, thank you for not letting various ancillary grumps dissuade you, and a healthy and stimulating and prosperous new year to you!
The comments are so often people just telling on themselves, it's really wild to see. I'm glad people still create in spite of this instead of letting misanthropic "tastemakers" get their way, the creators are literally increasing the amount of meaning in the world and that is valuable.
- Learn Rust. I'm halfway through the Rustlings exercises and I'll continue with more challenges. Advice is welcome. I might also ask LLMs to pose as teachers and create exercises for me and check them.
- Cooking. This has been something I neglected all my life and I really want to get better at it. It's so fundamental to quality of life.
- Persian language. Studied it for six months, I can read and write the script and I understand basic sentences, but I want to get better at it. If there are any Persian folks reading this, ping me. It's a beautiful language and culture.
There are also project-based resources beyond Rustlings, like Entirely Too Many Linked Lists[1], that you might check out. I've found Gjengset's videos[2] great for intermediate content, and they include both project and lecture formats.
For me, what sparked my interest in cooking was two things: (1) getting older (in youth, food was simply fuel, not so much to be enjoyed), and (2) wanting to replicate favourite dishes from restaurants (my first was channa masala). You can learn a lot from middle-aged housewives who have YouTube cooking channels showing you how to cook classic dishes from various cultures. One thing that has been incredibly liberating is making small tweaks to recipes that will trigger a cultural native to immediately declare: "Oh, that's not authentic." To that I say: "Who cares, it is my food, and I will enjoy it!"
I really like Rustfully on YouTube. Every video is under 10 minutes long and he goes over one concept and where it would be used in practice. Great for reinforcement learning.