Scarcity Has Legs
It sprints to the new bottleneck and pays the first person waiting there.
Bottom line up front: Value migrates to the binding constraint, the single thing that caps throughput, even when everyone around it is busy optimizing everything else. Find that constraint before others do, and you capture disproportionate value. Miss it, and you participate in someone else’s abundance.
The $3 Trillion Game of Musical Chairs
Forecasts now put the AI data center buildout on the order of $3 trillion through 2030. That’s not a typo: three trillion dollars chasing physical infrastructure for AI. But much of it is chasing square footage while the binding constraint is power and interconnection, the stuff that determines whether the square footage can actually run.
To understand how we got here, rewind five years. In 2020, the AI bottleneck was capability. Could models actually do useful things? Billions poured into model training. OpenAI, Anthropic, Google, everyone racing to build smarter systems. Then GPT-3 landed, and capability felt “good enough” for many commercial tasks. The question shifted from can it do this? to can we run it reliably and cheaply at scale?
So scarcity migrated. By 2023, the bottleneck was compute. You could design a brilliant model, but good luck getting the GPUs to train it. Nvidia’s stock went parabolic, not because they got smarter, but because they controlled the widest bridge across the river everyone needed to cross. Jensen Huang wasn’t selling chips. He was selling throughput.
Now watch it shift again. GPUs need to live somewhere, and “somewhere” means data centers consuming 100+ MW and grid connections that take years. Interconnection queues now average over four years, and in some regions run closer to five. Large power transformers have shifted from months to years: 36-month lead times are commonly quoted, with maxima approaching 60 months. The bottleneck moved from bits to atoms.
One of the hottest AI-adjacent investments now isn’t a foundation model. It’s securing power and grid access.
This pattern, abundance creating new scarcity elsewhere, isn’t unique to AI. It’s the fundamental law of how value moves through any system. And it has a name.
The Iron Law (Which Isn’t New)
What I’m describing is essentially Eliyahu Goldratt’s Theory of Constraints. I’m not inventing new physics here. I’m applying a proven lens to a moment where it’s unusually powerful.
The core claim: a bottleneck is not just another inefficiency. It’s the choke point that caps system throughput. Everything else is slack.
If you improve anything other than the bottleneck, you usually get local efficiency, not system throughput. You make one department faster while the whole pipeline moves at the same speed. If you improve the bottleneck even slightly, everything downstream accelerates.
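Goldratt’s point can be made concrete with a toy pipeline model, where system throughput is simply the minimum rate across stages. This is a sketch; the stage names and rates are made up for illustration:

```python
# Toy model of the Theory of Constraints: system throughput is capped
# by the slowest stage, so only improving that stage moves the number.
# Stage names and rates (units/hour) are hypothetical.

def throughput(stage_rates):
    """Units/hour the whole pipeline can sustain: the min across stages."""
    return min(stage_rates.values())

pipeline = {"design": 40, "build": 25, "review": 10, "deploy": 60}

base = throughput(pipeline)                 # capped by "review"

faster_build = {**pipeline, "build": 50}    # double a non-bottleneck stage
faster_review = {**pipeline, "review": 12}  # +20% on the bottleneck

print(base)                          # 10
print(throughput(faster_build))      # 10 -- local efficiency, no system gain
print(throughput(faster_review))     # 12 -- small fix, whole system speeds up
```

Doubling a non-bottleneck stage changes nothing; a 20% improvement at the constraint lifts the whole system by 20%.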
This sounds obvious. It isn’t. Most people, most companies, most investors spend their energy optimizing non-constraints because those are visible, comfortable, or prestigious. The real limiter is usually the thing nobody wants to look at. Every era has its version of this story, and looking at the pattern across centuries makes the mechanism hard to deny.
The companies that defined each of those eras didn’t win by being the smartest in the room. They won by solving the constraint everyone else was ignoring or hadn’t noticed yet. Which raises the obvious next question: where are today’s constraints hiding?
Where AI Scarcity Lives Right Now
AI has made cognitive output abundant. One prominent estimate puts the addressable AI-enabled labor shift and productivity potential at roughly $4.5T in the US alone. But capturing it is gated by implementation bottlenecks that have nothing to do with model intelligence. The interesting question isn’t whether AI creates abundance. It does. The question is where the new choke points formed.
Infrastructure is the obvious one. The hard constraints are grid access, equipment lead times, and permitting. If you can move atoms faster than your competitors (sites, approvals, power contracts) you win. The value isn’t just in Nvidia anymore. It’s in whoever navigates these physical constraints faster: better sites, faster permitting, efficient construction, smart energy sourcing. Trade-skilled workers are being paid a premium: recent reporting from Fortune shows workers on data center projects earning up to roughly 30% more, with some reaching six figures. Atoms are having their revenge on bits.
That revenge extends beyond hardware. When anyone can generate sophisticated content, the cost of generation collapses but the cost of trust doesn’t. If anything, trust gets harder. Synthetic and authentic become indistinguishable. There are really two trust problems here: authenticity of content and accountability of actors. Trust is the infrastructure of coordination, and when it degrades, transaction costs rise across the entire economy. Deals take longer. Verification layers multiply. What does “trust infrastructure” actually look like in practice? Provenance tracking, digital identity verification, AI output watermarking, third-party audits, and reputation systems with real accountability baked in. Think of these as trust banks for the 21st century. Whoever builds them captures a toll on every AI-augmented transaction.
But here’s where it gets interesting. Even with infrastructure humming and trust solved, there’s a bottleneck that’s harder to see because it lives inside organizations. Integration is where model demos go to die. AI has general capability but zero specific context. It can write code but doesn’t know your codebase. It can draft strategy but doesn’t know your competitive dynamics. The gap between “AI can do this” and “AI does this usefully here” is enormous, and the knowledge needed to bridge it is often tacit: permissions, data schemas, exception handling, ownership disputes, and the awkward fact that the critical workflow was never documented because it lives in Priya’s head. This knowledge isn’t promptable. It’s accumulated. And it operates on a completely different timescale than the other constraints. Infrastructure is a marathon. Trust is a generational project. Integration is a sprint you have to run fresh for every single organization.
Each of these constraints creates its own class of winners. But you don’t need a fund to use the lens. It works on any system you’re responsible for, starting with your own.
The Monday Morning Audit
All of this is interesting, but it’s only useful if you can apply it to whatever system you’re responsible for. Here’s the diagnostic. Six questions. Takes fifteen minutes. Changes how you allocate your next quarter.
The operating heuristic before you start: follow the wait time and the rework. That’s where your system is bleeding throughput.
- What’s the stated goal? What throughput are you actually trying to maximize?
- Where does work pile up? Visible queues point to probable bottlenecks.
- What are people waiting on? Dependencies reveal constraints.
- What would change everything if it improved 20%? That’s your leverage test.
- What are you avoiding looking at? That’s your discomfort test. Probably the real answer.
- What was the bottleneck 6 months ago? If it’s the same, you haven’t solved it. If it’s different, you have, and you need to find the new one.
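The queue-and-wait questions above reduce to a simple ranking: measure how long work sits at each stage, then sort. A minimal sketch of that heuristic, with hypothetical stage names and audit numbers:

```python
# A minimal sketch of the "follow the wait time" heuristic: rank stages
# by the average time work items spend waiting in them. The stage names
# and hours below are made-up audit data for illustration.
from statistics import mean

# hours each work item waited at each stage
waits = {
    "spec_approval": [120, 96, 150, 110],
    "engineering":   [8, 12, 6, 10],
    "qa":            [24, 30, 20, 26],
    "deploy":        [2, 1, 3, 2],
}

ranked = sorted(waits, key=lambda stage: mean(waits[stage]), reverse=True)
for stage in ranked:
    print(f"{stage:14s} avg wait: {mean(waits[stage]):6.1f} h")
# The stage at the top of the ranking is the probable bottleneck.
```

The point isn’t the code; it’s that wait time is measurable, and the biggest number usually isn’t where you’re spending your improvement budget.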
This isn’t hypothetical. Here’s what it looks like in practice.
A product team ships features consistently but growth flatlines. The instinct is to hire more engineers. But the work isn’t piling up in engineering; it’s piling up in the two-week wait for product specs and in stakeholder alignment meetings that end without decisions. The bottleneck is decision latency, not build capacity. Adding engineers makes the waiting pile bigger, not faster. One founder I know cut their spec approval cycle from 14 days to 3 by giving the PM unilateral authority on features below a revenue threshold. Shipping velocity doubled in a month. No new hires.
An individual has the skills, the resume, the credentials. Keeps getting passed over for roles that go to people who seem less qualified. The bottleneck isn’t capability. It’s the fear of putting work out before it’s perfect, the avoidance of ambiguity, the refusal to make a call without complete information. The constraint is emotional, not technical. No amount of resume optimization fixes a shipping problem.
The uncomfortable truth beneath both examples: the bottleneck is often something you’re bad at, something you don’t want to do, something you can’t easily fix, or something that requires changing yourself rather than your circumstances.
The Question That Won’t Leave You Alone
Remember the migration: models to GPUs to data centers to power grids. Each time the bottleneck moved, most people kept optimizing against the old constraint. Working harder instead of differently. Adding capacity where there was already plenty. Celebrating local efficiency gains while system throughput stayed flat.
Your system has a bottleneck too. It migrated recently. Probably more than once.
The question isn’t whether abundance is coming. It is. The question is whether you’re honest enough to look at what’s actually constraining you right now, especially when the answer is something you’d rather not see. If you haven’t named your current bottleneck, you’re almost certainly optimizing last quarter’s.
And last quarter’s constraint? That’s just someone else’s abundance now.