The thing about AI (no thanks, Claude)

It makes bad stuff better.

It makes poor writers better. So too for unskilled graphic designers or amateur financial analysts.

But also spam, scams, and cyber threats. It used to be easy to see a phishing email for what it was, but the Nigerian princes are gone.

At the same time, it also makes the great stuff worse.

Great writing becomes banal and repetitive. AI art looks like… AI art (or the boring stuff you see in hotels).

It brings the good stuff down while bringing the bad stuff up. In short, it averages everything out.

And that makes sense because LLMs imbibe everything that exists—the great, the good, the average, and the bad.

And in most subjects, the bad always outnumbers the great (or even the good). This lowers the average.

The key is to know what to use it for.

If you’re bad at something, AI will make you better at it. So use it.

But if you’re good (or especially if you’re great) at something… Think twice before handing it off to AI.

I ran this post through Claude and, based on its knowledge of my style, asked it for feedback. Claude told me the post had great bones, but it also told me to remove certain items and phrasing… Precisely the same things that make my writing style what it is.

It told me to remove a tiny explanatory sentence as well (do you know which one?). And I refused to do it, because there are plenty of people who still don’t know how this stuff works. That knowledge is central to the premise of this post.

It also recommended that I end the post with something like, “Average is the enemy of excellence.” How many times have you heard that before? It felt too much like a motivational poster to me—I can already see the kitten attempting pull-ups with that as the headline.

So, I think I should actually end this post by saying, “Claude, your suggestions were appreciated, but wrong.”1

And that is precisely my point.


  1. I did make a couple of tiny grammatical changes it suggested, which genuinely improved the post. Sometimes Claude’s recommendations do improve my writing.

    And, somewhat ironically, Claude did improve this post by giving me feedback, even though the feedback turned out to be flawed. It proved my point in real time. So I must give credit when it’s due.

    But more often than not, Claude’s suggestions make my writing worse because it no longer sounds like me. ↩︎

AI isn’t taking your job

…at least not yet.


I use AI almost every day to assist with my work and to learn unfamiliar topics (as part of my job). I read diligently to stay up-to-date on the latest developments, so I can learn how to use it more effectively.

AI will become (if it hasn’t already) a large part of all our lives, and it will continue to be.

However, we’re receiving a significant amount of misinformation about what’s happening and the effects it’s having on workers. Some of it is outright deception, while some is simply lazy reporting.

First, the deception.

The CEOs of these massive tech companies (e.g., Dario Amodei, Sam Altman) are brilliant business people who’ve created mind-boggling products. But they’re hemorrhaging cash trying to make their programs more powerful…

And after years of unbelievable growth and progress, they’re failing. Progress along the scaling laws they used to project LLM growth is slowing, and the improvements are now incremental rather than exponential.

This is a serious financial problem for them. They need to keep their current investors engaged, and they need new investors to infuse them with additional capital. So what do they do?

They go on cable news shows or podcasts and claim that their AI software will replace all entry-level workers (10-20% of the workforce) within a matter of months.1 It just isn’t true.

But you wouldn’t know that from the news you’re consuming. The media have bought into this story hook, line, and sinker.

Which brings me to my accusation of lazy reporting. Headlines like “Goodbye, $165,000 Tech Jobs” and “AI is Replacing 10 million Workers” (I made that one up) are attention-grabbing… But untrue.

These media companies, like the AI companies they write about, need to make money. They do that by getting as many eyes on their work as possible. And the best way to do that is to scare people into giving them attention… Even if the claims are untrue or misleading.

To paraphrase Ryan Holiday, who warned us about this years ago: “Trust them… They’re lying.”

It is true that computer science graduates are having a much harder time finding jobs at the moment. And it’s true that there have been massive layoffs in the tech sector.

It’s also true that the companies doing these layoffs are investing more of their money and efforts in AI. But AI is not the cause of this, nor is it replacing those who’ve been laid off.

Here’s what’s actually happening:

During the pandemic, these tech companies went on a massive hiring spree—they simply overhired. Now they’re bloated, and the quickest way to reduce the bloat and (temporarily) increase shareholder value is to shed programmers left and right.

At the same time, the tech sector itself is contracting, which means there are fewer jobs for all the newly minted computer science graduates.

This has historical precedent. The same thing happened in 2008 during the financial crisis. And it happened before that during the dot-com bust at the turn of the century.

The number of people entering the computer science field fluctuates in response to the economy. There’s a tech boom, prompting more people to enter the field. Then the sector contracts, and all those people get laid off, which in turn reduces the number of people entering the field.

Until the next boom.

Contrary to what many journalists have written, these people aren’t being replaced by AI. They’re simply being let go because companies overhired during the pandemic or because the companies are refocusing on AI.

However, that refocus, coupled with layoffs and fewer job openings, has led journalists to conflate the two, concluding that these computer science graduates are being replaced by AI.

This simply isn’t true. That may happen in the 2030s, but it’s not happening right now.

I’ve been guilty of buying into this hysteria too, as you can see in my piece on job hunting in 2025. And I’m here to tell you I was wrong in what I wrote about AI replacing workers in that piece.

All that to say this: Read AI journalism with a healthy dose of skepticism right now. And take any apocalyptic predictions with a grain of salt.


  1. Dario Amodei actually said this in an interview with Anderson Cooper and, ironically, claimed to be worried about it… Which raises the question: if you’re worried about it, why do you continue to do it?

    Why doesn’t he just stop if it actually worries him? It’s his company. ↩︎