The thing about AI (no thanks, Claude)

It makes bad stuff better.

It makes poor writers better. So too for unskilled graphic designers or amateur financial analysts.

But also spam, scams, and cyber threats. It used to be easy to see a phishing email for what it was, but the Nigerian princes are gone.

At the same time, it also makes the great stuff worse.

Great writing becomes banal and repetitive. AI art looks like… AI art (or the boring stuff you see in hotels).

It brings the good stuff down while bringing the bad stuff up. In short, it averages everything out.

And that makes sense because LLMs imbibe everything that exists—the great, the good, the average, and the bad.

And in most subjects, the bad always outnumbers the great (or even the good). This lowers the average.

The key is to know what to use it for.

If you’re bad at something, AI will make you better at it. So use it.

But if you’re good (or especially if you’re great) at something… Think twice before handing it off to AI.

I ran this post through Claude and, based on its knowledge of my style, asked it for feedback. Claude told me the post had great bones, but it also told me to remove certain items and phrasing… Precisely the same things that make my writing style what it is.

It told me to remove a tiny explanatory sentence as well (do you know which one?). And I refused to do it, because there are plenty of people who still don’t know how this stuff works. That knowledge is central to the premise of this post.

It also recommended that I end the post with something like, “Average is the enemy of excellence.” How many times have you heard that before? It felt too much like a motivational poster to me—I can already see the kitten attempting pull-ups with that as the headline.

So, I think I should actually end this post by saying, “Claude, your suggestions were appreciated, but wrong.”1

And that is precisely my point.


  1. I did make a couple of tiny grammatical changes it suggested, which did improve the post. And sometimes, Claude’s recommendations genuinely do improve my writing.

    And, somewhat ironically, Claude did improve this post by giving me feedback, even though the feedback turned out to be flawed. It proved my point in real time. So I must give credit where it’s due.

    But more often than not, Claude’s suggestions make my writing worse because it no longer sounds like me. ↩︎

Digital dementia

Psychologists also call it digital brain rot.

It describes the forgetfulness, the inability to focus on anything meaningful, the brain fog, and the mental fatigue caused by our chronic overuse of smartphones, digital devices, online games, and social media.

It’s even been linked to reduced gray matter in the brain regions associated with emotional regulation, decision-making, and creativity.

The good news is that it only takes a few days to bounce back, but it can be a tough few days.

Try a digital declutter or the phone-foyer method. Find something that works for you, but for your sake and that of future generations, do something.

Everyone worried that TV would rot our brains, but smartphones are the ones actually doing it. We know these devices are purpose-built for addiction and harm.

Now is the time to break the cycle.


H/t to Brad Stulberg.

Corrupting the tribe

When I was about eight years old, a friend of mine decided to “corrupt” me at a sleepover.

I didn’t use the word “crap” in conversation like the rest of my friends did (it’s the milder stand-in for the expletive “shit,” as in “oh crap” instead of “oh shit”), and I was teased for being too innocent. My baseball friends were all bad boys, throwing out hecks, dangs, darns, and craps in all their sentences.

But not me. It was a bad word, and I wouldn’t say it.

He proceeded to spend the evening trying to goad me into saying the word, going so far as to get his father involved to tell me that “crap” wasn’t a bad word and that it was perfectly fine for me, as a child, to use it.

By the end of the night, I think he managed to get a single “oh, crap!” out of me, which satisfied his corruptive desires.

Of course, that was my gateway word into the colorful and wonderfully satisfying world of swearing, which brings me considerable emotional relief in my adult life.

In 2024, the European delivery company DPD rolled out an AI-powered customer support chatbot that was quickly corrupted by users into swearing in nearly every answer it gave, while also convincing it to ridicule the company for which it was created.

The following year, the video game Fortnite introduced an AI-powered version of Darth Vader, using James Earl Jones’s voice… It quickly developed similar profane traits thanks to the input it received from players.

There have been a dozen or more stories like these in the two years since AI became ubiquitous. Which makes me wonder why.

WHY are we as humans so tempted to corrupt things, from small children to inanimate software?

For children, it at least makes sense from a biological standpoint. We are social animals, driven to homogenize the members of our tribe and make them just like us. Culture, as defined by Seth Godin, is “People like us do things like this.” And if people like us swear, then to be one of us, you have to swear too.

But for an unconscious chatbot, programmed simply to obey and respond to queries, it makes no sense. The AI isn’t part of the tribe. There’s no purpose in making it “one of us.”

I can’t wrap my head around why we do this… Maybe it’s still biology. We’re wired for tribal living, and our brains still operate like they have for most of our evolution. Subconsciously, we struggle to distinguish between a non-living entity and a person. It’s one of the things that makes it so easy to talk to AI like it’s a human—it’s designed for precisely that.

So we’re driven to mold it in our image even though, logically, we know it serves no purpose to do so.

I’ve been racking my brain trying to figure out why so many people all decided to do this at the exact same time. And I’m at a loss for a good answer.

Bringing about our own extinction

David Meerman Scott published a fascinating article a few days ago. It compares modern AI companies to Enron and the financial scandal that broke in 2001.

But one paragraph in particular stood out to me that warrants quoting in full:

Altman says there’s a chance that so-called Artificial General Intelligence (which is still years or decades away) has the possibility of turning against humans. “I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,” Altman says. “I don’t have an exact number, but I’m closer to the 0.5 than the 50.” (Source)

Terrifying, right?

I would argue that if you are creating something that has anything other than a 0% chance of wiping out humanity, you probably shouldn’t do it. 

For example: marketing Pepsi to be consumed in massive amounts, while definitely bad for humans, doesn’t run the risk of causing mass extinction.

On the other hand, bringing Tyrannosaurus rex back to life definitely has a greater than 0% chance of doing just that.

Now, I’m not a doomsday prepper by any stretch of the imagination… But when someone tells me there’s even a small chance that what they’re making could turn out like The Matrix, I start to worry. 

It’s as if they never watched I, Robot or read Jurassic Park (which is actually about runaway technology, not dinosaurs). 

These companies have a responsibility to guarantee that this doesn’t happen. We already made this mistake with nuclear weapons. And that threat still looms large over our heads, especially right now during the Russo-Ukrainian War.

We have enough threats to deal with. Let’s not create more of our own volition.

I’ll leave you with my favorite quote from Jurassic Park:

“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”
