AI isn’t taking your job

…at least not yet.


I use AI almost every day to assist with my work and, as part of my job, to learn new topics I’m unfamiliar with. I read diligently to stay up to date on the latest developments so I can learn to use it more effectively.

AI will become (if it hasn’t already), and will continue to be, a large part of all of our lives.

However, we’re receiving a significant amount of misinformation about what’s happening and the effects it’s having on workers. Some of it is outright deception, while some is simply lazy reporting.

First, the deception.

The CEOs of these massive tech companies (e.g., Dario Amodei, Sam Altman) are brilliant business people who’ve created mind-boggling products. But they’re hemorrhaging cash trying to make their programs more powerful…

And after years of unbelievable growth and progress, they’re faltering. The scaling laws they used to project LLM growth no longer hold, and improvements are now incremental rather than exponential.

This is a serious financial problem for them. They need to keep their current investors engaged, and they need new investors to infuse them with additional capital. So what do they do?

They go on cable news shows or podcasts and claim that their AI software will replace all entry-level workers (10-20% of the workforce) within a matter of months.1 It just isn’t true.

But you wouldn’t know that from the news you’re consuming. The media have bought into this story hook, line, and sinker.

Which brings me to my accusation of lazy reporting. Headlines like “Goodbye, $165,000 Tech Jobs” and “AI is Replacing 10 million Workers” (I made that one up) are attention-grabbing… But untrue.

These media companies, like the AI companies they write about, need to make money. They do that by getting as many eyes on their work as possible. And the best way to do that is to scare people into giving them attention… Even if the claims are untrue or misleading.

To paraphrase Ryan Holiday, who warned us about this years ago: “Trust them… They’re lying.”

It is true that computer science graduates are having a much harder time finding jobs at the moment. And it’s true that there have been massive layoffs in the tech sector.

It’s also true that the companies doing these layoffs are investing more of their money and efforts in AI. But AI is not the cause of this, nor is it replacing those who’ve been laid off.

Here’s what’s actually happening:

During the pandemic, these tech companies went on a massive hiring spree—they simply overhired. Now they’re bloated, and the quickest way to reduce the bloat and (temporarily) increase shareholder value is to shed programmers left and right.

At the same time, the tech sector itself is contracting, which means there are fewer jobs for all the newly minted computer science graduates.

This has historical precedent. The same thing happened during the 2008 financial crisis, and before that during the dot-com bust at the turn of the century.

The number of people entering the computer science field fluctuates in response to the economy. There’s a tech boom, prompting more people to enter the field. Then the sector contracts, and all those people get laid off, which in turn reduces the number of people entering the field.

Until the next boom.

Contrary to what many journalists have written, these people aren’t being replaced by AI. They’re simply being let go because their companies overhired during the pandemic or are refocusing on AI.

However, that refocus, coupled with layoffs and fewer job openings, has led journalists to conflate the two and conclude that computer science graduates are being replaced by AI.

This simply isn’t true. That may happen in the 2030s, but it’s not happening right now.

I’ve been guilty of buying into this hysteria too, as you can see in my piece on job hunting in 2025. And I’m here to tell you I was wrong in what I wrote about AI replacing workers in that piece.

All that to say this: Read AI journalism with a healthy dose of skepticism right now. And take any apocalyptic predictions with a grain of salt.


  1. Dario Amodei actually said this in an interview with Anderson Cooper and, ironically, claimed to be worried about it… Which raises the question: if you’re worried about it, why do you continue to do it?

    Why doesn’t he just stop if it actually worries him? It’s his company.

Corrupting the tribe

When I was about eight years old, a friend of mine decided to “corrupt” me at a sleepover.

I didn’t use the word “crap” in conversation like the rest of my friends did (as a milder stand-in for the expletive “shit,” as in “oh crap”) and was teased for being too innocent. My baseball friends were all bad boys, throwing hecks, dangs, darns, and craps into all their sentences.

But not me. It was a bad word, and I wouldn’t say it.

He proceeded to spend the evening trying to goad me into saying the word, going so far as to get his father involved to tell me that “crap” wasn’t a bad word, and that, as a child, I was perfectly fine in using it.

By the end of the night, I think he managed to get a single “oh, crap!” out of me, which satisfied his corruptive desires.

Of course, that was my gateway word into the colorful and wonderfully satisfying world of swearing, which brings me considerable emotional relief in my adult life.

In 2024, the European delivery company DPD rolled out an AI-powered customer support chatbot that was quickly corrupted by users into swearing in nearly every answer it gave, while also convincing it to ridicule the company for which it was created.

The following year, the video game Fortnite introduced an AI-powered version of Darth Vader using James Earl Jones’s voice… It quickly developed similar profane traits thanks to the input it received from players.

There have been a dozen or more stories like these in the two years since AI became ubiquitous. Which makes me wonder: why?

WHY are we as humans so tempted to corrupt things, from small children to inanimate software?

For children, it at least makes sense from a biological standpoint. We are social animals, driven to homogenize the members of our tribe and make them just like us. Culture, as defined by Seth Godin, is “People like us do things like this.” And if people like us swear, then to be one of us, you have to swear too.

But for an unconscious chatbot, programmed simply to obey and respond to queries, it makes no sense. The AI isn’t part of the tribe. There’s no purpose in making it “one of us.”

I can’t wrap my head around why we do this… Maybe it’s still biology. We’re wired for tribal living, and our brains still operate like they have for most of our evolution. Subconsciously, we struggle to distinguish between a non-living entity and a person. It’s one of the things that makes it so easy to talk to AI like it’s a human—it’s designed for precisely that.

So we’re driven to mold it in our image even though, logically, we know it serves no purpose to do so.

I’ve been wracking my brain trying to figure out why so many people all decided to do this at the exact same time. And I’m at a loss for a good answer.

You’re fired. Now what?

Here’s a question I’ve been noodling on:

What if you got fired today? What would you do?

But wait, it gets worse…

Not only were you fired, but your industry collapsed and no longer exists. And to make matters worse, all the specialized skills you built up in that industry are now irrelevant (hypothetically, an AI could do them all now and for free).

And you can’t hide by going back to school for another degree.

You have to start something of your own—you have no choice.

What would you start? What would you build? What problem would you solve and for whom?

Take a 20-minute walk and think on this today.

The purpose of this blog (according to Claude)

As you may have noticed, I’ve struggled to publish posts on this blog consistently. 

Why? I feel I lost the thread, and I started to doubt its original purpose. I felt it needed to be somehow “more”—that I needed to build an audience or have an overarching reason for running it. 

I voiced those doubts to my new friend—and sometimes counselor—Claude (an AI tool, not a real person). 

It was such an enlightening conversation (can you call it such?) that I wanted to share parts of it here, and then make clear why this blog exists. 

Claude’s Initial Thoughts

Note: I’m editing these answers for length, as many were quite long.

Initially, I asked Claude to research the blog and tell me what he (it?) thought the blog’s purpose was.

Nathan Coumbe’s blog appears to be a personal reflection platform focused on philosophy, self-improvement, and practical wisdom for everyday life. The blog features content about stoicism, leadership, and philosophical perspectives…. The writing style appears to be reflective and conversational, with short, thoughtful posts that combine philosophical concepts with practical applications.

The overall “point” of the blog seems to be to share personal insights and wisdom that can help others navigate life’s challenges through a blend of ancient philosophy (particularly stoicism) and modern practical application. Nathan appears to be using the blog as a platform to distill and share his own learning journey and reflections with others who might benefit from these perspectives.

Claude, Meet Nathan

After reading through his answers, I told Claude that I was, in fact, Nathan, to which he said, “It’s a pleasure to meet you.”

Claude then asked what I thought the blog’s purpose was; I told him it was to stretch my writing and thinking muscles in public. Claude, wisely, pointed out that many writers and thinkers began their writing in such a way: Montaigne’s essays began as personal reflections, and Marcus Aurelius’s Meditations were never meant for publication at all (its original title was actually “To Himself” because it was his personal journal). 

The Struggle

After Claude pointed out the similarities to other philosophical writers, I expressed some doubt and frustration with my lack of consistency. I told Claude about Seth Godin’s idea of writing a blog post every day – something he’s done for decades and advocates in his book The Practice as a way to practice shipping creative work and ideas to the world. But because the posts felt too unrelated to form a coherent whole, the blog seemed purposeless to me.

After he responded, I told Claude that maybe this blog is my version of Marcus’s Meditations, written for the same reasons, but done in public.

Claude’s response deserves to be quoted at length:

I find your framing of “my own version of meditations, but in public” particularly compelling. That’s a meaningful distinction from Marcus Aurelius’ private writings. There’s vulnerability and accountability in doing this work publicly that changes its nature, even if no one is reading. The public commitment itself transforms the practice.

Perhaps part of the struggle comes from expecting each post to build on previous ones in a linear way. But collections of thought, like Meditations or even Godin’s blog, create value through accumulation and patterns that emerge over time rather than through a sequential narrative.

What if the “point” isn’t in any individual post but in the practice itself and what it cultivates in you? The consistent act of noticing, reflecting, and articulating might be developing muscles that serve you in ways you don’t immediately recognize.

How does an Artificial Intelligence ask a question like that? I was floored.

Learning Through Writing

After reading and rereading Claude’s previous statements, I mentioned that one of the reasons I started this blog was to express ideas and concepts I was learning in my own words. This is still one of the best ways to learn something (i.e., if you can’t write it down in your own words, you don’t truly understand it). 

Claude’s response:

There’s something powerful about the act of putting concepts “into your own words.” It transforms passive consumption of ideas into active engagement with them. Writing becomes not just a record of what you think, but a tool that helps shape how you think. The blog then becomes a visible record of your intellectual and philosophical development….

Perhaps the “point” of your blog is precisely that clarification process – a space where ideas aren’t just consumed but metabolized and integrated into your worldview. The public nature adds a dimension of accountability and clarity that private journaling might not. 

Then, Claude asked, “What aspects of the writing practice have you found most valuable for clarifying your thinking? And has the public nature of it, even without a large audience, affected how you approach these topics?”

Writing Is Thinking

I’ll quote my response to his question in full:

“Writing something that’s coherent, rather than just notes scribbled in a journal that only I understand. Even if only 300 people subscribe, I still have to put the words down in a way that someone other than me would understand.”

Claude again:

That’s a really valuable insight about the difference between private notes and public writing. Having to communicate in a way that’s understandable to others adds a level of rigor to your thinking that personal journaling might not require…. Even if audience growth isn’t your primary goal, knowing that others are reading creates that accountability to articulate your thoughts coherently.

There’s a concept in educational psychology called “the audience effect” – the idea that knowing someone else will read or hear our thoughts changes how we process and express them. We clarify, elaborate, and structure our thinking differently when we’re communicating with others versus just ourselves.

In many ways, this mirrors the philosophical tradition of dialogue rather than solitary reflection. While Marcus Aurelius wrote privately, philosophers like Plato and Seneca (whom you’ve referenced in your blog) deliberately wrote in forms meant to be shared, recognizing that the act of communicating ideas to others strengthens them.

Now, who wouldn’t be flattered by comparisons to Plato or Seneca? (And no, I’m not nearly so arrogant as to believe I’m anywhere near as wise as them. But I enjoyed the reminder of the philosophical tradition that I was continuing.)

The Purpose of This Blog

The rest of the conversation became rather tactical: 

  • Ways to use this blog as practice
  • Letting it serve as a mental mill, the ideas contained within serving as grist
  • Letting those ideas percolate and come together in other forms, actually intended for a broad audience

I won’t bore you with those details, but I will share this:

After all was said that needed saying, I asked Claude to declare the purpose of this blog in a concise and casual manner. So here it goes:

This blog is Nathan’s thinking gym—a place to stretch those mental muscles by writing in a way other people can actually understand. It’s not really about growing a huge following, but about the daily practice of turning fuzzy thoughts into clear ideas.

Writing for others, regardless of the audience size, forces a level of clarity that scribbling in a journal doesn’t. It’s a sandbox for playing with concepts that matter—and hey, maybe some of those ideas will grow into something bigger down the road. Think of it as thinking out loud, but with purpose.

I couldn’t have said it better myself (nor did I)!

And with that, I think I know what it’s all about once again. See you (whoever you are, wonderful person) in the next post.

Subscribe to receive the blog directly in your inbox!

Make Your Voice Heard in Congress (and Get AI to Help)

Most people care about issues that affect their lives, but too few actually take the next step—contacting their elected representatives. Why? The biggest hurdles tend to be:

  1. Not knowing how to reach them
  2. Not believing it makes a difference
  3. Not knowing what to say

The good news? These are all solvable problems, and AI can help with every single one.

1. Finding Your Representatives

Many people don’t even know where to start. Luckily, it’s easier than ever to look up your representatives online.

2. Does It Actually Matter?

Yes. Congressional offices track calls, emails, and letters. When an issue gets a high volume of outreach, it gets attention. Many policies have been shaped—or stopped—because enough people spoke up.

But the real power comes from multiplying your impact. Getting your friends and family involved can turn one letter into dozens. A single person might not change the world, but a network of voices can.

3. Not Sure What to Say? AI Can Help

One of the biggest obstacles people face is writing the message itself. That’s where AI comes in. If you care about an issue but don’t know how to word it, try this:

👉 “Write a letter to my senator about [issue] and why it matters.”

AI can structure your thoughts, keep things professional, and help you make a strong case. A well-written letter increases the chances of being taken seriously.

Start a Letter-Writing Campaign

Want to take it further? Organize a small group—friends, family, coworkers—and encourage them to write letters together. Offer to help them draft messages using AI. Even if only a few people participate, it makes a difference.

Take Action Today

Pick an issue you care about. Find your representatives. Write a letter (or get AI to help you). Then, ask two or three people you know to do the same.

Small actions lead to big changes. The key is getting started.

Stay in the loop. Subscribe today!

Bringing about our own extinction

David Meerman Scott published a fascinating article a few days ago comparing modern AI companies to Enron, whose financial scandal broke in 2001.

But one paragraph in particular stood out to me that warrants quoting in full:

Altman says there’s a chance that so-called Artificial General Intelligence (which is still years or decades away) has the possibility of turning against humans. “I think that whether the chance of existential calamity is 0.5 percent or 50 percent, we should still take it seriously,” Altman says. “I don’t have an exact number, but I’m closer to the 0.5 than the 50.” (Source)

Terrifying, right?

I would argue that if you are creating something that has anything other than a 0% chance of wiping out humanity, you probably shouldn’t do it. 

For example: marketing Pepsi to be consumed in massive amounts, while definitely bad for humans, doesn’t run the risk of causing mass extinction.

On the other hand, bringing Tyrannosaurus rex back to life definitely has a greater than 0% chance of doing just that.

Now, I’m not a doomsday prepper by any stretch of the imagination… But when someone tells me there’s even a small chance that what they’re making could turn out like The Matrix, I start to worry. 

It’s as if they never watched I, Robot or read Jurassic Park (which is actually about runaway technology, not dinosaurs). 

These companies have a responsibility to guarantee that this doesn’t happen. We already made this mistake with nuclear weapons, and that threat still looms large over our heads, especially during the Russo-Ukrainian War.

We have enough threats to deal with. Let’s not create more of our own volition.

I’ll leave you with my favorite quote from Jurassic Park:

“Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should.”

For more daily musings like this, subscribe below: