Corrupting the tribe

When I was about eight years old, a friend of mine decided to “corrupt” me at a sleepover.

I didn’t use the word “crap” in conversation like the rest of my friends did (as a milder stand-in for the expletive “shit,” as in “oh crap” instead of “oh shit”) and was teased for being too innocent. My baseball friends were all bad boys, throwing hecks, dangs, darns, and craps into all their sentences.

But not me. It was a bad word, and I wouldn’t say it.

He proceeded to spend the evening trying to goad me into saying the word, going so far as to get his father involved to tell me that “crap” wasn’t a bad word, and that, as a child, I was perfectly fine using it.

By the end of the night, I think he managed to get a single “oh, crap!” out of me, which satisfied his corruptive desires.

Of course, that was my gateway word into the colorful and wonderfully satisfying world of swearing, which brings me considerable emotional relief in my adult life.

In 2024, the European delivery company DPD rolled out an AI-powered customer support chatbot that was quickly corrupted by users into swearing in nearly every answer it gave, while also convincing it to ridicule the company for which it was created.

More recently, the video game Fortnite introduced an AI-powered version of Darth Vader, using James Earl Jones’s voice… It quickly developed similar profane traits thanks to the input it received from players.

There have been a dozen or more stories like these in the two years since AI became ubiquitous. Which makes me wonder why.

WHY are we as humans so tempted to corrupt things, from small children to inanimate software?

For children, it at least makes sense from a biological standpoint. We are social animals, driven to homogenize the members of our tribe and make them just like us. Culture, as defined by Seth Godin, is “People like us do things like this.” And if people like us swear, then to be one of us, you have to swear too.

But for an unconscious chatbot, programmed simply to obey and respond to queries, it makes no sense. The AI isn’t part of the tribe. There’s no purpose in making it “one of us.”

I can’t wrap my head around why we do this… Maybe it’s still biology. We’re wired for tribal living, and our brains still operate like they have for most of our evolution. Subconsciously, we struggle to distinguish between a non-living entity and a person. It’s one of the things that makes it so easy to talk to AI like it’s a human—it’s designed for precisely that.

So we’re driven to mold it in our image even though, logically, we know it serves no purpose to do so.

I’ve been racking my brain trying to figure out why so many people all decided to do this at the exact same time. And I’m at a loss for a good answer.

Bringing about our own extinction

David Meerman Scott published a fascinating article a few days ago comparing modern AI companies to Enron, whose financial scandal broke in 2001.

But one paragraph in particular stood out to me that warrants quoting in full:

Altman says there’s a chance that so-called Artificial General Intelligence (which is still years or decades away) has the possibility of turning against humans. “I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,” Altman says. “I don’t have an exact number, but I’m closer to the 0.5 than the 50.” (Source)

Terrifying, right?

I would argue that if you are creating something with anything other than a 0% chance of wiping out humanity, you probably shouldn’t do it.

For example: marketing Pepsi for consumption in massive amounts, while definitely bad for humans, doesn’t run the risk of causing mass extinction.

On the other hand, bringing Tyrannosaurus rex back to life definitely has a greater than 0% chance of doing just that.

Now, I’m not a doomsday prepper by any stretch of the imagination… But when someone tells me there’s even a small chance that what they’re making could turn out like The Matrix, I start to worry. 

It’s as if they never watched I, Robot or read Jurassic Park (which is actually about runaway technology, not dinosaurs). 

These companies have a responsibility to guarantee that this doesn’t happen. We already made this mistake with nuclear weapons. And that threat still looms large over our heads, especially right now during the Russo-Ukrainian War.

We have enough threats to deal with. Let’s not create more of our own volition.

I’ll leave you with my favorite quote from Jurassic Park:

“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”

For more daily musings like this, subscribe below: