
Posts by Nathan Coumbe

My mission is to learn, inform, inspire, and improve. I am a passionate teacher, an avid writer, a leader of people, and a strategic thinker. Wherever I am, whatever the work I am called to do, my goal is the same: make my little corner of the world better for everyone in it. To do this, I ask better questions and solve more interesting problems for those I serve. Think deeply. Think often. Keep exploring. Always be curious.

The em dash exists for a reason

And I’ll be damned if I stop using them now. I’ve been using em dashes since I started writing—because they work.

They do things that other parenthetical devices like commas or parentheses don’t do.

They add force to your arguments. They separate potentially unrelated but still relevant or useful thoughts—have you ever noticed this?—from the point you’re making.

And for those of you who say that there’s no way to recreate them on a computer keyboard…

Shift + Option + - (for Mac users) gives you —.

Here are 11 of them: — — — — — — — — — — —

Also, there are two other types of dashes.

- (press only the dash key) is a hyphen, most often used to join compound words.

– (made by pressing Option + - on a Mac) is called an en dash and is used to indicate ranges, like dates (April 20–23).1

And then you have the glorious em dash.
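If you want to confirm that these really are three distinct characters, and not just a quirk of how your screen renders them, here's a minimal Python sketch (standard library only) that prints the Unicode code point and name of each dash:

```python
import unicodedata

# The three dashes discussed above: hyphen-minus, en dash, em dash.
for ch in ("-", "\u2013", "\u2014"):
    print(f"{ch}  U+{ord(ch):04X}  {unicodedata.name(ch)}")

# Expected output:
# -  U+002D  HYPHEN-MINUS
# –  U+2013  EN DASH
# —  U+2014  EM DASH
```

Paste whatever your keyboard (or your blogging platform) produces into that tuple and you can see exactly which dash you actually typed.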

Why do they show up so often in AI writing? It’s simple: most of the best writers in history made (and still make) liberal use of it—because it works! And because AI has imbibed all the writing ever written, it also uses it quite often.

Does AI use them too much? Absolutely.

Does that mean we should stop using them? Absolutely not.

So what’s the solution? You get really damn good at writing.

Develop a style of your own, a voice, a way of writing that sounds like you—and only you. So when people read your writing, they know that you did it, not an AI.

And you could use fifty em dashes in a single piece if you wanted to, and no one would care, because they would know, simply because of your personal style, that you were generous enough to take time out of your day to share something worth reading.

If you write well—and like yourself—you’ll be fine.

(Ann Handley actually beat me to this a long while back. But I was so incensed by no fewer than seven posts about this yesterday that I had to say something.)


  1. Unfortunately, there is a feature in WordPress’s code that prevents the three different dashes from rendering properly. I didn’t realize this until after I published. So if you’re reading this on my site instead of in email, you won’t be able to see the difference between them.

    You can test this for yourself: pull up a blank document somewhere on your computer and try all three keystroke combinations.

    Also, this is a great reason to subscribe to my email newsletter, so you’ll see it rendered the way it’s supposed to. ↩︎

The thing about AI (no thanks, Claude)

It makes bad stuff better.

It makes poor writers better. So too for unskilled graphic designers or amateur financial analysts.

But also spam, scams, and cyber threats. It used to be easy to see a phishing email for what it was, but the Nigerian princes are gone.

At the same time, it also makes the great stuff worse.

Great writing becomes banal and repetitive. AI art looks like… AI art (or the boring stuff you see in hotels).

It brings the good stuff down while bringing the bad stuff up. In short, it averages everything out.

And that makes sense because LLMs imbibe everything that exists—the great, the good, the average, and the bad.

And in most subjects, the bad always outnumbers the great (or even the good). This lowers the average.

The key is to know what to use it for.

If you’re bad at something, AI will make you better at it. So use it.

But if you’re good (or especially if you’re great) at something… Think twice before handing it off to AI.

I ran this post through Claude and, based on its knowledge of my style, asked it for feedback. Claude told me the post had great bones, but it also told me to remove certain items and phrasing… Precisely the same things that make my writing style what it is.

It told me to remove a tiny explanatory sentence as well (do you know which one?). And I refused to do it, because there are plenty of people who still don’t know how this stuff works. That knowledge is central to the premise of this post.

It also recommended that I end the post with something like, “Average is the enemy of excellence.” How many times have you heard that before? It felt too much like a motivational poster for me—I can already see the kitten attempting pullups with that as the headline.

So, I think I should actually end this post by saying, “Claude, your suggestions were appreciated, but wrong.”1

And that is precisely my point.


  1. I did make a couple of tiny grammatical changes it suggested, which did actually improve the post. And sometimes, Claude’s recommendations actually do improve my writing.

    And, somewhat ironically, Claude did improve this post by giving me feedback, even though the feedback turned out to be flawed. It proved my point in real time. So I must give credit where it’s due.

    But more often than not, Claude’s suggestions make my writing worse because it no longer sounds like me. ↩︎

Excuse me: Is that emergency button made in China?

There’s a blue emergency call tower halfway along a walking trail I frequent each week.

If you’re being attacked or having a heart attack, you smack a button on it, and it immediately calls emergency services and shares your location with them so they can find you—fast.

When I walked by it the other day, I noticed they’d added something new to it. It was a big sign, probably a square foot in size.

And on the sign, printed in big block letters, were the words, “Proudly made in the USA!”

I thought about that sign for the rest of my walk. I just kept thinking, “Who was that for? What was the purpose of that new sign?”

I don’t think it’s for the person in trouble. If you’re being chased by an axe murderer, would you check the tower for a “Made in China” stamp before you pressed the button?

I doubt it.

And if it were manufactured somewhere else, would that really deter you? Oh geez. Made in China?! Gross. I’d rather this guy just kill me than press the button.

It’s not marketing. No one who sees it has any need (or ability) to buy one and stick it in their front yard, so where it’s made doesn’t factor into a buying decision.

Is it supposed to inspire confidence in people like me who walk past it every day? I already know most things aren’t manufactured here, and they typically work just fine.1

The only answer that seems to fit is that it’s for the people who installed it.

It’s a flag. It’s performative patriotism—not for any user of the tower, but for whoever approved the purchase, or whoever installed it. It’s a tribal signal. To paraphrase Seth Godin: “People like us install things like this.”

The sign isn’t communicating with us. It’s communicating about the people who put it there.


  1. In fact, I’ve had such horrible experiences with American-made brands (see: any American car) that it might actually be having the opposite of the effect the sign intended. Now I’m thinking, “Man… Would that thing actually work in an emergency?” ↩︎

Digital dementia

Psychologists also call it digital brain rot.

It describes the forgetfulness, the inability to focus on anything meaningful, the brain fog, and the mental fatigue caused by our chronic overuse of smartphones, digital devices, online games, and social media.

It’s even been shown to reduce the gray matter in our brains associated with emotional regulation, decision-making, and creativity.

The good news is that it only takes a few days to bounce back, but it can be a tough few days.

Try a digital declutter or the phone-foyer method. Find something that works for you, but for your sake and that of future generations, do something.

Everyone worried TV would rot our brains; smartphones are actually doing it. We know these devices are purpose-built for addiction and harm.

Now is the time to break the cycle.


H/t to Brad Stulberg.

Are you good enough?

That question is impossible to answer. It’s missing a vital fragment.

“At what?”

Until you know what you need to measure yourself against, there’s no point in measuring at all.

History is tragedy, not melodrama

One of my college professors, a tiny little man from the Delta named Dr. Bo Morgan, delivered one of the most accurate and poignant statements about history to all of us historians-in-training on our first day in his class:

“History is tragedy, not melodrama.”

Melodrama: think of all the westerns from the 1950s and 60s. There were good guys and bad guys. And you could easily see who was who.

Tragedy: real people with real flaws acting the way humans do… And their flaws destroy them in the end.

History isn’t a Western, as much as our politicians would like to treat it that way. There are rarely obvious villains and heroic good guys that you can easily spot. It’s full of good people doing bad things and bad people doing good things. Flawed humans acting as such.

Acknowledging the bad things we’ve done doesn’t harm America. It doesn’t make America or Americans “bad.”

Acknowledging the Holocaust doesn’t make all Germans or Germany bad. Why would recognizing our history of systemic racism or chattel slavery or the destruction of Native Americans harm the US?

If anything, acknowledging it helps us because we can learn from it and improve the present.

There is no point in erasing or hiding any of it except to please a small fringe on one side of the aisle.

And it’s also true that labeling our country as pure evil is just as wrong, something a similarly tiny fringe of extreme people on the other side of the aisle has tried to do.

History is tragedy, not melodrama.


This post was inspired by Heather Cox Richardson’s newsletter from March 28.

The first rule in juggling…

Never lunge for the ball.

If you make a bad throw, just let it drop. Then start over.

Learning to juggle taught me how to handle life: sometimes you make a bad throw.

Sometimes you take on one too many projects. Or Murphy’s Law derails your plans.

Don’t lunge to save things. Let them drop.

Reset, and begin anew.

What kind of fear is it?

Is this fear keeping you safe?

Or is it the kind of fear that’s preventing you from being your best?

Learn to differentiate between the two.


Inspiration

Do you know a virtuous person?

Who do you know who is courageous?

Wise?

Disciplined?

Just?

Do you know anyone who embodies all four of these cardinal virtues?

How much better would things be if you had a boss like this? A coworker or employee?

How would the world improve if we had leaders like this?

It’s hard to succeed with only one or two. You need all four to be truly effective.

The German soldiers who steamrolled Europe were courageous and disciplined. But they were brave and disciplined for the most unwise and unjust of reasons.

You can probably think of several people who were incredibly wise… But who lacked the courage to stand up and do the right thing when the time called for action.

We need more virtuous people in the world.

They aren’t born this way. They make themselves so.

AI isn’t taking your job

…at least not yet.


I use AI almost every day to assist with work and learn new topics (as part of my job) that I’m unfamiliar with. I read diligently to stay up-to-date on the latest developments, so I can learn how to use it more effectively.

AI will become (if it hasn’t already), and will continue to be, a large part of all our lives.

However, we’re receiving a significant amount of misinformation about what’s happening and the effects it’s having on workers. Some of it is outright deception, while some is simply lazy reporting.

First, the deception.

The CEOs of these massive tech companies (e.g., Dario Amodei, Sam Altman) are brilliant business people who’ve created mind-boggling products. But they’re hemorrhaging cash trying to make their programs more powerful…

And after years of unbelievable growth and progress, they’re failing. The scaling laws they used to project LLM growth are yielding diminishing returns, and the improvements are now incremental rather than exponential.

This is a serious financial problem for them. They need to keep their current investors engaged, and they need new investors to infuse them with additional capital. So what do they do?

They go on cable news shows or podcasts and claim that their AI software will replace all entry-level workers (10–20% of the workforce) within a matter of months.1 It just isn’t true.

But you wouldn’t know that from the news you’re consuming. The outlets covering it have bought into this story hook, line, and sinker.

Which brings me to my accusation of lazy reporting. Headlines like “Goodbye, $165,000 Tech Jobs” and “AI is Replacing 10 million Workers” (I made that one up) are attention-grabbing… But untrue.

These media companies, like the AI companies they write about, need to make money. They do that by getting as many eyes on their work as possible. And the best way to do that is to scare people into giving them attention… Even if the claims are untrue or misleading.

To paraphrase Ryan Holiday, who warned us about this years ago: “Trust them… They’re lying.”

It is true that computer science graduates are having a much harder time finding jobs at the moment. And it’s true that there have been massive layoffs in the tech sector.

It’s also true that the companies doing these layoffs are investing more of their money and efforts in AI. But AI is not the cause of this, nor is it replacing those who’ve been laid off.

Here’s what’s actually happening:

During the pandemic, these tech companies went on a massive hiring spree—they simply overhired. Now they’re bloated, and the quickest way to reduce the bloat and (temporarily) increase shareholder value is to shed programmers left and right.

At the same time, the tech sector itself is contracting, which means there are fewer jobs for all the newly minted computer science graduates.

This has historical precedent. The same thing happened in 2008 during the financial crisis. And it happened before that during the dot-com bust at the turn of the century.

The number of people entering the computer science field fluctuates in response to the economy. There’s a tech boom, prompting more people to enter the field. Then the sector contracts, and all those people get laid off, which in turn reduces the number of people entering the field.

Until the next boom.

Contrary to what many journalists have written, these people aren’t being replaced by AI. They’re simply being let go because companies overhired during the pandemic or because the companies are refocusing on AI.

However, that refocus, coupled with layoffs and fewer job openings, has led journalists to conflate the two, concluding that these computer science graduates are being replaced by AI.

This simply isn’t true. That may happen in the 2030s, but it’s not happening right now.

I’ve been guilty of buying into this hysteria too, as you can see in my piece on job hunting in 2025. And I’m here to tell you I was wrong in what I wrote about AI replacing workers in that piece.

All that to say this: Read AI journalism with a healthy dose of skepticism right now. And take any apocalyptic predictions with a grain of salt.


  1. Dario Amodei actually said this in an interview with Anderson Cooper and, ironically, claimed to be worried about it… Which raises the question: if you’re worried about it, why do you continue to do it?

    Why doesn’t he just stop if it actually worries him? It’s his company. ↩︎