I’ve been saying it out loud more lately: by 2030, we won’t “work” the way we work now.
Not because humans get lazy. Because intelligence is getting cheap.
When I stop and audit my own day, most of my value looks like this:
- show up to meetings
- translate strategy into business flows
- answer technical questions
- write / review docs (RFPs, proposals, and the like)
- support sales development
- move projects forward when things get messy
- help with hands-on technical work
And here’s the uncomfortable part: a lot of that is already “AI-shaped work.” I’m the middle layer. I ask the model, I curate, I decide, I present. It makes me faster—sometimes 10x faster.
But if the output is “good enough” and the organization trusts it… why does it need me in the loop?
That’s not a personal crisis. It’s a labor-market design problem.
And it’s exactly why a piece like Citrini Research’s The 2028 Global Intelligence Crisis hit so hard: it frames a future where being “right about AI” can still be bearish for the economy. (Citrini Research)
Thanks to Brent Kaplan PhD for sharing the article below with me. These were thoughts I'd already been wrestling with, but seeing someone else articulate them so clearly has a way of sharpening your own thinking.
Citrini’s core idea: abundant intelligence can break the consumer economy
Citrini and Alap Shah don’t present the memo as a prophecy. They present it as a scenario: what if AI keeps improving, adoption keeps rising, and the incentives stay rational at the company level—but collectively it creates a downward spiral? (Citrini Research)
Three ideas in the piece that stick with me:
1) “Ghost GDP”
The scenario describes productivity surging and “headline” output looking fine, while wages and spending power collapse. The memo calls this “Ghost GDP”—output in the national accounts that doesn’t circulate through households the way it used to. (Citrini Research)
That’s a scary thought because the US economy is basically a consumer engine. If machines do more work but don’t buy anything, you can get growth on paper and pain in real life.
2) The intelligence displacement spiral
The most unsettling part isn’t “AI takes jobs” — it’s the loop:
AI improves → companies cut payroll → margins expand → savings get reinvested into AI → AI improves → more payroll cuts. (Citrini Research)
Individually, each step is rational. Collectively, it’s a machine that can reduce the amount of human income available to fund the consumer economy.
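The loop above can be made concrete with a toy model. This is purely illustrative: the parameters (cut rate, reinvestment share, how savings translate into capability) are my own assumptions, not numbers from the memo.

```python
# Toy model of the "intelligence displacement spiral":
# AI improves -> payroll cuts -> savings reinvested -> AI improves further.
# All numbers are illustrative assumptions, not estimates from the memo.

def simulate_spiral(payroll=100.0, ai_capability=1.0, years=5,
                    cut_rate=0.05, reinvest_share=0.5):
    """Each year: AI capability drives payroll cuts; a share of the
    savings is reinvested, which raises capability further."""
    history = []
    for year in range(1, years + 1):
        cuts = payroll * cut_rate * ai_capability       # cuts scale with capability
        payroll -= cuts
        ai_capability += (cuts * reinvest_share) / 10   # reinvested savings improve AI
        history.append((year, round(payroll, 1), round(ai_capability, 2)))
    return history

for year, payroll, cap in simulate_spiral():
    print(year, payroll, cap)
```

The point of the sketch is the shape, not the numbers: payroll falls faster every year precisely because the previous year's cuts funded the next round of capability.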
3) Friction goes to zero (and intermediation gets wrecked)
Citrini also argues that “agentic” systems don’t just make people faster—they can remove whole layers of friction-based business models:
- subscriptions that rely on inertia
- travel booking platforms
- insurance renewals
- routine legal/tax/financial navigation
- even commission-heavy categories like real estate
The scenario’s point: a lot of what we call “moats” are really just human limitations with a friendly face. If agents price-shop, negotiate, cancel, and route transactions automatically, entire categories get repriced. (Citrini Research)
The part I agree with (even if the timeline is wrong)
Even if Citrini’s 2027–2028 compression is too aggressive, the direction is hard to ignore:
- “middle work” gets automated first (coordination, analysis, drafting, summarizing, prototyping)
- pricing power gets pressured as building and switching costs drop
- companies reinvest savings into more automation, because they have to
And that’s why the memo went viral and started rattling markets this week. (Reuters)
Humans will fight to stay in the loop (and we’ll invent new jobs)
This is the part that feels painfully true: even if AI can do the tasks better than humans, humans still need to work and earn a paycheck. So we'll create new forms of "employment" and new institutions to keep money flowing.
Here are a few likely “new job categories” that emerge when intelligence is abundant:
1) Accountability layers (someone has to own the outcome)
AI can recommend. But organizations still need humans who can be fired, sued, promoted, and trusted.
- “AI decision owners”
- model-risk signoff roles
- incident response + audit trails for agent actions
2) Workflow designers (the new ops)
Not “prompt engineering.” More like: designing systems that turn intent into repeatable execution.
- agent orchestration
- business process redesign
- ROI measurement and control
This is where a simple framework becomes a weapon. For example, the Business AI Canvas forces a team to define the problem, map the current process, decide if automation is justified, and explicitly design human-in-the-loop checkpoints.
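To make that concrete, here's a minimal sketch of what a canvas like this might look like as a checklist object. The field names and the `is_ready` rule are my own assumptions about what such a canvas captures, not the actual Business AI Canvas format.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a Business AI Canvas as a checklist object.
# Field names and the readiness rule are assumptions, not the real canvas.

@dataclass
class BusinessAICanvas:
    problem: str                      # what are we actually solving?
    current_process: str              # how is it done today?
    automation_justified: bool        # is automation worth it at all?
    human_checkpoints: list = field(default_factory=list)  # human-in-the-loop gates

    def is_ready(self) -> bool:
        """'Ready' only if automation is justified AND at least one
        explicit human checkpoint is designed in."""
        return self.automation_justified and len(self.human_checkpoints) > 0

canvas = BusinessAICanvas(
    problem="RFP first-draft turnaround is too slow",
    current_process="SE writes each draft from scratch over 2-3 days",
    automation_justified=True,
    human_checkpoints=["SE reviews technical claims", "Legal signs off on terms"],
)
print(canvas.is_ready())  # True
```

The useful property: a team can't mark the work "ready to automate" without having written down who stays in the loop.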
3) Trust & compliance brokers
When agents transact, negotiate, and move data, trust becomes a product:
- privacy/security assurance
- policy compliance
- bias/quality monitoring
- “what can this agent do, and what is it forbidden to do?”
4) Taste, brand, and human meaning
Even if AI can generate infinite content, humans will pay for:
- taste (what’s worth attention)
- identity (what represents me, and who do I trust?)
The value shifts from producing information → curating meaning.
5) Local, physical, and care work (plus its coordination)
A lot of the economy is still physical: care, logistics, housing, hands-on services. But those fields will be reorganized by AI scheduling, routing, documentation, and optimization—so new roles appear around coordinating real-world delivery.
My “2030 bet”: your job becomes a portfolio of proofs
If you want to stay valuable in the middle of AI, the move is to stop defining your job as “tasks I do” and start defining it as “outcomes I can repeatedly produce.”
A simple way to keep yourself honest:
- What outcomes do you own?
- What decisions do people trust you with?
- What failures can you prevent?
- What messy, political, cross-functional thing do you resolve that nobody else can?
Then automate everything else—strategically—not blindly.
That’s why I like decision tools like the AI Readiness Matrix (“automate now / plan for future / optional / skip”). It keeps you from automating noise and helps you focus on leverage.
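Those four quadrants can be sketched as a trivial decision function. The two scoring axes here (business value and current feasibility, each on a 1-10 scale with 6 as the threshold) are my assumptions about how such a matrix is typically scored, not the matrix's actual rubric.

```python
# Minimal sketch of the AI Readiness Matrix quadrants named above.
# The axes and thresholds are illustrative assumptions.

def readiness(value: int, feasibility: int) -> str:
    """Map a task's business value (1-10) and current AI feasibility
    (1-10) to one of the four quadrants."""
    high_value = value >= 6
    feasible_now = feasibility >= 6
    if high_value and feasible_now:
        return "automate now"
    if high_value and not feasible_now:
        return "plan for future"
    if not high_value and feasible_now:
        return "optional"
    return "skip"

print(readiness(value=9, feasibility=8))  # automate now
print(readiness(value=8, feasibility=3))  # plan for future
```

The discipline it enforces is the point: high-feasibility but low-value tasks are exactly the "noise" you'd otherwise automate first.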