Corrupting the tribe

When I was about eight years old, a friend of mine decided to “corrupt” me at a sleepover.

I didn’t use the word “crap” in conversation like the rest of my friends did (as a softer stand-in for the expletive “shit,” as in “oh crap” instead of “oh shit”), and I was teased for being too innocent. My baseball friends were all bad boys, sprinkling hecks, dangs, darns, and craps through all their sentences.

But not me. It was a bad word, and I wouldn’t say it.

He proceeded to spend the evening trying to goad me into saying the word, going so far as to get his father involved to tell me that “crap” wasn’t a bad word, and that, as a child, I was perfectly fine using it.

By the end of the night, I think he managed to get a single “oh, crap!” out of me, which satisfied his corruptive desires.

Of course, that was my gateway word into the colorful and wonderfully satisfying world of swearing, which brings me considerable emotional relief in my adult life.

In 2024, the European delivery company DPD rolled out an AI-powered customer support chatbot that users quickly corrupted into swearing in nearly every answer it gave, and even into ridiculing the very company it was built to serve.

The following year, the video game Fortnite introduced an AI-powered version of Darth Vader using James Earl Jones’s voice. It quickly developed similarly profane traits thanks to the input it received from players.

There have been a dozen or more stories like these in the two years since AI became ubiquitous, which makes me wonder why.

WHY are we as humans so tempted to corrupt things, from small children to inanimate software?

For children, it at least makes sense from a biological standpoint. We are social animals, driven to homogenize the members of our tribe and make them just like us. Culture, as defined by Seth Godin, is “People like us do things like this.” And if people like us swear, then to be one of us, you have to swear too.

But for an unconscious chatbot, programmed simply to obey and respond to queries, it makes no sense. The AI isn’t part of the tribe. There’s no purpose in making it “one of us.”

I can’t wrap my head around why we do this… Maybe it’s still biology. We’re wired for tribal living, and our brains still operate as they did for most of our evolution. Subconsciously, we struggle to distinguish between a non-living entity and a person. It’s one of the things that makes it so easy to talk to AI like it’s a human—it’s designed for precisely that.

So we’re driven to mold it in our image even though, logically, we know it serves no purpose to do so.

I’ve been racking my brain trying to figure out why so many people all decided to do this at the exact same time. And I’m at a loss for a good answer.

Fear keeps the majority out of power

It only takes one person for something evil to occur. One reason many authoritarian regimes haven’t already fallen is that the vast majority of people live in fear of the handful who would commit evil on the leaders’ behalf.

This is a question of power. If every single person in the country realized that the leaders hold power only because they can get other people to do bad things for them, those leaders would no longer be in power.

The flip side is that it takes only one person willing to harm or kill another for those leaders to keep their power.

It’s contagious: one person willing to commit harm (or too scared to refuse) begets another. Pretty soon, a tiny minority grows that is willing to commit evil to keep that one person in power.

Because not everyone says no, the minority rules and the majority seems powerless. To effect change, then, the majority must apparently be willing to face death at the hands of the minority.