The existential power bottleneck
The International Energy Agency projects that global data center electricity consumption will more than double to 945 TWh by 2030. Gartner predicts that by 2027, 40% of existing AI data centers will be operationally constrained by power availability. The bottleneck for artificial intelligence is no longer compute. It's watts.
The scale of the problem
To understand how severe the energy crisis is, consider the numbers in context:
945 TWh is more electricity than Japan — the world's third-largest economy — consumes annually. It's roughly equivalent to the entire electrical output of France and Germany combined. And it would represent a doubling from the estimated 460 TWh consumed by data centers globally in 2024.
In the United States alone, McKinsey projects data center electricity demand will grow from 147 TWh in 2023 to 606 TWh by 2030 — accounting for nearly 12% of total U.S. power demand. For comparison, the entire U.S. residential sector consumes approximately 1,400 TWh annually. Data centers are on track to consume almost half as much electricity as every home in America.
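These comparisons are easy to sanity-check. A minimal sketch, using only the figures quoted above:

```python
# Back-of-the-envelope check of the scale claims, using only the
# IEA and McKinsey figures quoted in the text.

GLOBAL_2024_TWH = 460      # estimated global data center consumption, 2024
GLOBAL_2030_TWH = 945      # IEA projection for 2030
US_2023_TWH = 147          # McKinsey, U.S. data centers, 2023
US_2030_TWH = 606          # McKinsey projection for 2030
US_RESIDENTIAL_TWH = 1400  # approximate annual U.S. residential consumption

print(f"Global growth factor: {GLOBAL_2030_TWH / GLOBAL_2024_TWH:.2f}x")  # ~2.05x
print(f"U.S. growth factor:   {US_2030_TWH / US_2023_TWH:.2f}x")          # ~4.12x
print(f"Share of residential: {US_2030_TWH / US_RESIDENTIAL_TWH:.0%}")    # ~43%
# If 606 TWh is ~12% of total U.S. demand, total demand is ~5,050 TWh:
print(f"Implied total U.S. demand: {US_2030_TWH / 0.12:,.0f} TWh")
```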
This growth is almost entirely driven by AI. According to the IEA, a single ChatGPT query consumes approximately 10× more electricity than a traditional Google search. As AI moves from text generation to video, scientific simulation, and autonomous control systems, the compute intensity per query will increase by additional orders of magnitude.
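To make the per-query gap concrete: the figures most often cited alongside the IEA comparison are roughly 0.3 Wh per traditional search and 2.9 Wh per ChatGPT query. A small sketch, treating those numbers as illustrative assumptions rather than measurements:

```python
# Illustrative per-query energy comparison. The ~0.3 Wh and ~2.9 Wh
# figures are commonly cited alongside the IEA's ~10x claim; they are
# assumptions here, not measured values.

SEARCH_WH = 0.3    # assumed energy per traditional search query
CHATGPT_WH = 2.9   # assumed energy per ChatGPT query

print(f"Per-query ratio: ~{CHATGPT_WH / SEARCH_WH:.0f}x")  # ~10x

# At one billion queries per day, the gap compounds quickly:
queries_per_year = 1e9 * 365
delta_twh = (CHATGPT_WH - SEARCH_WH) * queries_per_year / 1e12  # Wh -> TWh
print(f"Extra energy at 1B queries/day: {delta_twh:.2f} TWh/year")  # ~0.95
```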
Where the watts go
Inside a modern AI data center, power consumption breaks down along predictable lines:
- GPU/accelerator compute: 45–55% of total facility power
- Cooling infrastructure: 25–35% (liquid cooling loops, chillers, heat exchangers)
- Networking and I/O: 8–12% (switch ASICs, SerDes, optical transceivers)
- Memory and storage: 5–8%
- Power conversion losses: 5–8% (AC/DC, voltage regulation)
The critical insight is that cooling is the second-largest power consumer — and it exists entirely because of waste heat generated by electron flow through resistive silicon. Every watt consumed by a GPU generates approximately 0.7 watts of cooling overhead. This is not an efficiency problem that can be optimized away. It's thermodynamics: when electrons flow through a resistive material, they dissipate power equal to I²R (Joule heating). The only way to eliminate the cooling overhead is to eliminate the resistive heating.
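In equation form, using the standard Joule heating law and the ~0.7 cooling overhead factor quoted above:

```latex
P_{\text{heat}} = I^{2}R
\qquad
P_{\text{facility}} = P_{\text{compute}}\,(1 + k), \quad k \approx 0.7
```

For a 700 W H100, this implies roughly 490 W of cooling load, or about 1.2 kW of total facility draw per GPU.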
The GPU power spiral
The trajectory of GPU power consumption tells the story clearly:
- NVIDIA A100 (2020): 400W TDP per GPU
- NVIDIA H100 (2023): 700W TDP per GPU
- NVIDIA B200 (2024): 1,000W TDP per GPU
- NVIDIA GB200 NVL72 (2025): 120 kW per rack (72 GPUs)
A single GB200 NVL72 rack consumes 120 kilowatts — enough to power 40 American homes. A training cluster of 100,000 GPUs (the scale Meta is building for Llama 4) would consume approximately 170 megawatts — the output of a small natural gas power plant, dedicated exclusively to AI training.
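The cluster figure follows directly from the rack spec. A quick check, assuming GB200 NVL72 density for the full cluster:

```python
# Reproduce the cluster-scale arithmetic from the rack-level spec above.

GPUS_PER_RACK = 72       # GB200 NVL72
RACK_KW = 120.0          # per-rack draw quoted above
CLUSTER_GPUS = 100_000   # Meta-scale training cluster

racks = CLUSTER_GPUS / GPUS_PER_RACK   # ~1,389 racks
cluster_mw = racks * RACK_KW / 1000    # kW -> MW
print(f"Racks needed: {racks:,.0f}")
print(f"Cluster draw: {cluster_mw:,.0f} MW")  # ~167 MW, i.e. roughly 170 MW
```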
This is why Microsoft has signed a deal to reactivate Three Mile Island's nuclear reactor. Why Amazon is purchasing nuclear-powered data center campuses. Why Google is investing in small modular reactors. The hyperscalers have concluded that the existing electrical grid cannot support AI at the scales they need.
"We are building power plants to power AI. That's not a computing strategy. That's a confession that the underlying technology is fundamentally inefficient."
The Gartner warning
Gartner's 2024 forecast delivered one of the most striking predictions in the history of enterprise technology analysis: by 2027, 40% of existing AI data centers will face operational constraints due to insufficient power availability.
This doesn't mean they'll be slightly less efficient. It means they will be physically unable to deploy additional AI capacity because there isn't enough electricity available at the site. New GPU racks will sit empty — not because of supply chain constraints, but because the building can't feed them watts.
The implications cascade across the entire AI industry:
- AI training timelines extend as compute capacity is rationed
- Cloud providers implement power-based pricing tiers, increasing costs for inference
- Geographic constraints tighten — AI infrastructure concentrates near power sources
- ESG and regulatory pressure intensifies as data centers consume increasing grid share
The photonic exit strategy
Photonic processors offer a fundamentally different thermodynamic equation:
Photons do not generate I²R heat. Light passing through a silicon nitride waveguide does not encounter electrical resistance. There is no current flow. There is no resistive heating. The waveguide itself consumes zero power — computation happens through passive interference patterns.
The practical impact on data center power:
- 50–80% reduction in active compute power for operations suited to photonic processing (matrix multiplication, Fourier transforms, convolution)
- 40%+ reduction in cooling infrastructure — because the chips generate dramatically less waste heat
- Near-elimination of I/O power — data stays as photons through the entire processing pipeline, eliminating optical-electrical-optical conversion losses
- Higher rack density — without thermal constraints, more processors fit in the same physical footprint
If photonic processors were deployed across the projected 945 TWh of data center consumption, even a conservative 40% power reduction would save 378 TWh annually — more electricity than the United Kingdom consumes in a year.
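Combining the subsystem shares from "Where the watts go" with the reduction claims listed above shows how the blended savings can exceed the conservative 40% case. A rough sketch; the midpoint shares, the 65% compute figure, and the 90% stand-in for "near-elimination" of I/O power are all assumptions:

```python
# Blended facility-level savings: apply the claimed per-subsystem
# reductions to the subsystem shares from "Where the watts go".
# All inputs are the document's own figures, treated here as assumptions.

shares = {          # midpoint subsystem shares of facility power
    "compute":    0.50,
    "cooling":    0.30,
    "networking": 0.10,
    "other":      0.10,  # memory/storage plus conversion losses
}
reductions = {      # claimed reductions for photonic-suited workloads
    "compute":    0.65,  # midpoint of the 50-80% range above
    "cooling":    0.40,  # "40%+ reduction"
    "networking": 0.90,  # stand-in for "near-elimination" of I/O power
    "other":      0.00,  # no claim made, so assume none
}

blended = sum(shares[k] * reductions[k] for k in shares)
print(f"Blended facility-level reduction: {blended:.0%}")         # ~54%
print(f"Conservative 40% case: {945 * 0.40:.0f} TWh saved/year")  # 378 TWh
```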
QLT's position: compute without combustion
QLT's room-temperature photonic processor is specifically engineered for the workloads driving the power crisis: AI training, inference, and the dense linear algebra operations that consume the majority of GPU cycles.
By performing matrix multiplication at the speed of light — passively, through waveguide interference — QLT processors eliminate the fundamental source of the energy crisis: electrons flowing through resistive channels, generating heat that must be removed by additional energy expenditure.
The 945 TWh projection is not a forecast of what computing will cost. It's a forecast of what electronic computing will cost. Photonic computing rewrites the equation entirely.
945 TWh is the cost of computing with electrons. QLT is building what comes after the electron.
Sources: International Energy Agency, "Electricity 2024: Analysis and Forecast" (January 2024); Gartner, "Predicts 2025: AI Infrastructure" (Q4 2024); McKinsey & Company, "How data centers and the energy sector can sate AI's growing appetite" (September 2024); NVIDIA GB200 NVL72 Technical Specifications; U.S. Energy Information Administration Annual Energy Outlook 2024.