Strategist Beware
The viral report "The 2028 Global Intelligence Crisis" looks like smart analysis. But this kind of faux-journalism can quickly derail your strategic thinking.
Nothing is more harmful to the pursuit of good strategy than false intel.
And today, we’re only seeing more of it. Not only does AI make it easy for anyone to weave a compelling narrative of the future that seems airtight, but sensational content is also the very kind of material that algorithm-driven content platforms reward. False intel can pose a huge danger to executives working on strategy (that’s you), because it can lead you down some dark alleys:
You respond to threats that don’t really exist.
You pursue opportunities that aren’t actually feasible.
You base decisions on conjecture, not fact.
You create a false mental model of the world, without even realizing it.
False intel is so pernicious because it’s hard to spot. It can sound a lot like legitimate journalism or actual critical thinking through the use of statistics, jargon, and sheer length.
So how do you tell the difference?
Today, I’m sharing three tools you can use: Vibe Reporting, Digital Ick, and Astonishment. And we’ll use them on a piece that caused quite the stir about AI last week to show you how they work. My goal is to equip you to call out nonsense on the spot, so you can build your strategy on fact, not fiction.
The Test for False Intel
When The 2028 Global Intelligence Crisis was published last week, it went viral, getting coverage in The Wall Street Journal in the process. It was even credited with triggering a selloff in the stock market. This “forecast” is presented as a report from 2028, looking back on how AI developments unfolded to cause massive unemployment, economic regression, and civil unrest. It’s 7,000 words of sheer terror. Terrifying, because despite its doom and gloom, it sounds just plausible enough to be convincing. For a moment, I even considered liquidating my entire portfolio and holding cash for a while. (I didn’t.)
If you haven’t read it yet, go take a look now.
So how do you tell if works like this are useful models of the future, or sensationalism designed to drive virality? You need a test.
And fortunately, I found a great one. It comes from Cal Newport, the best-selling author behind books like Deep Work and Digital Minimalism. Cal has three tests you can run to see if the content you’re reading is legit (these come from episode 391 of his podcast, so I’m paraphrasing):
Vibe Reporting. When the author creates a false sensation that something is true. They’ll put loosely related quotes together and omit important details, implying a strong relationship. For example, if a story about Amazon’s layoffs appears next to a quote from CEO Andy Jassy about the potential of AI, it sounds like AI is causing those layoffs. In reality, Amazon simply overhired during the pandemic and is recalibrating.
Digital Ick. This is when the author presents an unsettling edge case in the hope of making a more general impression. For example, an AI agent in MoltBook recently created a religion called the Church of Molt. That’s icky. If you didn’t know that this behavior was likely prompted by a human, you’d likely feel icky about AI agents in general.
Astonishment. Sometimes authors can short-circuit your ability to think critically by presenting such incredible ideas that you’re simply left in awe. That’s astonishment. Anytime you see a YouTube video of “AI Secrets Sam Altman Doesn’t Want You to Know” or stories about AI superintelligence taking over humanity, Astonishment is at work.
The common thread with these techniques is that there is just enough truth in them to make them feel plausible. Most readers won’t notice the omitted information or have the time or energy to recognize flaws in the logic.
Let’s apply these tests to The 2028 Global Intelligence Crisis to see how they work.
Vibe Reporting: Isolated Changes Result in Economic Contagion
The piece consistently highlights specific negative outcomes that may occur as AI adoption accelerates, while completely omitting any positives that may result. Take a look at these snippets:
SaaS fallout causes white-collar collapse: We spoke with a procurement manager at a Fortune 500. [He told a SaaS vendor that they’d] been in conversations with OpenAI about having their “forward-deployed engineers” use AI tools to replace the vendor entirely. They renewed at a 30% discount… Software was only the opening act. The same logic that justified ServiceNow cutting headcount applied to every company with a white-collar cost structure.
The implication is that cost-cutting in SaaS leads to the widespread collapse of the white-collar labor market. There’s simply no evidence for that. Also missing is any mention of the new value such enterprises could create with those savings. A technology like AI isn’t merely an efficiency driver, yet the article treats it as one.
Here’s another one:
The federal government isn’t capable of reacting: The system wasn’t designed for a crisis like this. The federal government’s revenue base is essentially a tax on human time. People work, firms pay them, the government takes a cut. Individual income and payroll taxes are the spine of receipts in normal years. Through Q1 of this year, federal receipts were running 12% below CBO baseline projections.
This leaves out the fact that income tax is just one lever the government has. There’s no mention of changes to corporate income tax. And although it later concedes that there could be a tax on AI itself, the authors speculate that such a tax would be stalled by government gridlock. That’s rarely how governments act in emergencies, though.
Such Vibe Reporting falsely claims that negative outcomes for one sector (e.g., SaaS) will cascade into a series of negative second-order consequences for everyone else in the economy.
That’s like arguing that the demise of the disk drive industry in the 90s would later doom the PC market in the 2000s. When causality is merely suggested, that’s Vibe Reporting.
Digital Ick: Gut Over Brain
The 2028 Global Intelligence Crisis loves to take neutral (or even beneficial) things and repackage them in an uncomfortable way. There were too many examples to include, but here are two of the worst offenders:
Agents never stop (humans do): The part that should have unsettled investors more than it did was that these agents didn’t wait to be asked. They ran in the background according to the user’s preferences. Commerce stopped being a series of discrete human decisions and became a continuous optimization process, running 24/7 on behalf of every connected consumer.
This sounds like AI is going to wrest control of our lives, making purchasing decisions for us and taking over our agency. Well, that’s already the case with things like auto-pay, subscription services, or the index fund in your 401k. Those aren’t negatives; they’re conveniences. The article suggests something sinister that’s not really there.
Human relationships are expendable: Financial advice. Tax prep. Routine legal work. Any category where the service provider’s value proposition was ultimately “I will navigate complexity that you find tedious” was disrupted, as the agents found nothing tedious… We had overestimated the value of “human relationships.”
This one is especially icky because it pokes at one of the most fundamental tenets of society: humans need each other. Instead, it suggests that human relationships were really just inconveniences all along, and AI is ready to exploit that. AI shifts from a mere threat to employment to a threat to humanity itself. Later in the report, when we read that AI has destroyed the economy, we’re not even surprised.
Digital Ick is a powerful rhetorical device because it forces you to react on gut instinct, not think with your mind. But in this piece, it’s amplified. By presenting this report as a 2028 future state that has already happened, its claims feel much more real than those of a mere forecast.
Astonishment: No Justification Needed
I was surprised at how often this technique was used. Consider the quotes below. Does your mind slow down to think critically, or does it switch to panic mode, wondering what might happen if these came to bear?
This is the first time in history the most productive asset in the economy has produced fewer, not more, jobs. Nobody’s framework fits, because none were designed for a world where the scarce input became abundant.
By March 2027, the median individual in the United States was consuming 400,000 tokens per day - 10x since the end of 2026.
AI capabilities improved, companies needed fewer workers, white collar layoffs increased, displaced workers spent less, margin pressure pushed firms to invest more in AI, AI capabilities improved…It was a negative feedback loop with no natural brake.
It should have been clear all along that a single GPU cluster in North Dakota generating the output previously attributed to 10,000 white-collar workers in midtown Manhattan is more economic pandemic than economic panacea.
The exponential steamrolled our conceptions of what was possible, even though every year Wharton professors tried to fit the data to a new sigmoid.
Two years. That’s all it took to get from ‘contained’ and ‘sector-specific’ to an economy that no longer resembles the one any of us grew up in.
Astonishment claims do one thing really well: they relieve the rest of the article from having to justify itself. By painting a vision of a future that’s so far beyond what exists today, the authors can simply claim that we lack the mental models and frameworks to understand them. It’s circular thinking that doesn’t actually provide proof of anything.
The Truth Matters
If you’ve read The 2028 Global Intelligence Crisis, go back and re-read it through this new lens. Is it just as convincing? The challenge with pieces like this is that they contain kernels of truth. It’s not pure speculation. But beneath many established facts lies a poor chain of reasoning. That’s much harder to spot. The tests we used from Cal Newport aren’t comprehensive, but they’re a good sniff test for whether what you’re reading should be taken at face value.
Remember, strategic thinkers must seek the truth.
They must know the difference between logic and fantasy, between judgment and wishful thinking. And right now, the truth especially matters. New AI-first competitors are entering the market, your own business is still getting its arms around using AI, and no one is really sure whether the economy is healthy or on the brink of something else.
The truth is your friend. Convenient answers or “sure things” are not. Next time a viral piece hits your inbox, run these tests, take a deep breath, then work on your strategy.


