Breaking through the copper wall
The optical interconnect market is projected to exceed $67 billion in the early 2030s. This isn't being driven by preference or incremental improvement. It's being driven by a hard physical wall: at modern signaling rates, copper simply cannot carry data far enough, fast enough, without drowning in its own heat.
The physics of the copper wall
Every electrical signal sent through a copper wire encounters three enemies that get worse as data rates increase:
1. Frequency-dependent attenuation. Higher-frequency signals lose more energy as they travel through copper. At 28 GHz (the Nyquist frequency for 56 Gbps NRZ or 112 Gbps PAM4 signaling), a standard copper trace loses approximately 1 dB per inch. At 56 GHz (the Nyquist frequency for 224 Gbps PAM4), the loss roughly doubles. The signal arriving at the far end of even a short cable is dramatically weaker than the one that was sent.
2. The skin effect. At high frequencies, electrical current concentrates in a thin layer at the surface of the conductor rather than flowing through its full cross-section. This effectively reduces the usable area of the wire, increasing resistance and heat generation. At the Nyquist frequencies used for 112 Gbps signaling, the skin depth in copper is roughly 0.3–0.4 micrometers, meaning the overwhelming majority of the conductor's cross-section carries essentially no current.
3. Crosstalk and electromagnetic interference. Dense copper cable bundles create electromagnetic coupling between adjacent wires. In a modern switch chassis with 512 ports of 400GbE, thousands of copper traces run in parallel within centimeters of each other. The mutual interference fundamentally limits how many channels you can pack into a given space.
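The skin-depth figures above follow from the standard formula δ = √(ρ / (π·f·μ)). A minimal sketch, assuming textbook values for annealed copper (resistivity ≈ 1.68×10⁻⁸ Ω·m, μ ≈ μ₀), which are not taken from this document:

```python
import math

def skin_depth_m(freq_hz, resistivity=1.68e-8, mu=4 * math.pi * 1e-7):
    """Skin depth in meters: delta = sqrt(rho / (pi * f * mu))."""
    return math.sqrt(resistivity / (math.pi * freq_hz * mu))

# Nyquist frequencies for 112G PAM4 (28 GHz) and 224G PAM4 (56 GHz)
for f_ghz in (28, 56):
    d_um = skin_depth_m(f_ghz * 1e9) * 1e6
    print(f"{f_ghz} GHz: skin depth ~ {d_um:.2f} micrometers")
```

Against a typical trace cross-section measured in tens of micrometers, a conduction layer a few tenths of a micrometer thick leaves almost the entire conductor idle.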
The shrinking reach of copper
The practical consequences are stark and measurable:
- At 25 Gbps per lane (100GbE): passive copper cables reliably reach 5 meters
- At 56 Gbps per lane (200/400GbE): passive copper reach drops to 3 meters
- At 112 Gbps per lane (800GbE): passive copper reach drops to under 2 meters
- At 224 Gbps per lane (1.6TbE, emerging): passive copper reach drops to under 1 meter
At 224 Gbps — the signaling rate required for next-generation 1.6 terabit Ethernet — a passive copper cable cannot reliably connect two devices that are more than a meter apart. This means the GPU at one end of a server motherboard may not be able to electrically reach the switch ASIC at the other end without active signal reconditioning.
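To see why these short reaches are a struggle, convert the approximately 1 dB/inch trace loss quoted earlier into received signal amplitude. A back-of-envelope sketch that holds the per-inch loss constant with length (a simplifying assumption; real channels also include connector and via losses):

```python
def db_loss(length_m, db_per_inch=1.0):
    """Total insertion loss in dB for a trace of the given length."""
    return (length_m / 0.0254) * db_per_inch

def amplitude_fraction(loss_db):
    """Fraction of the transmitted amplitude that survives the loss."""
    return 10 ** (-loss_db / 20)

for length in (0.25, 0.5, 1.0):
    loss = db_loss(length)
    print(f"{length} m trace: {loss:.1f} dB loss, "
          f"amplitude x{amplitude_fraction(loss):.4f}")
```

At one meter, roughly 39 dB of loss leaves about 1% of the transmitted amplitude, which is why the receiver's equalizers have so little signal left to work with.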
"We are witnessing the end of copper as a viable high-speed interconnect medium. The physics doesn't negotiate."
The compensating arms race
The industry has not surrendered quietly. Billions of dollars have been spent on technologies to extend copper's reach:
Active Electrical Cables (AECs) embed digital signal processors inside the cable itself, re-amplifying and re-equalizing the signal in transit. These work, but each DSP chip consumes 5–15 watts of power and adds latency. In a rack with 128 cables, that's 640–1,920 watts consumed just by the cables.
PAM4 modulation encodes two bits per symbol instead of one, effectively doubling bandwidth per lane. But a PAM4 eye is one-third the height of an NRZ eye at the same voltage swing, a roughly 9.5 dB SNR penalty, requiring more sophisticated (and power-hungry) receivers.
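The PAM4 penalty can be quantified directly: packing four levels into the same swing as NRZ's two shrinks each eye by a factor of three, and 20·log10(3) ≈ 9.5 dB. A minimal sketch of that standard textbook calculation:

```python
import math

nrz_levels, pam4_levels = 2, 4
# Eye height shrinks by (levels - 1): PAM4 fits 3 stacked eyes
# into the same voltage swing that NRZ uses for a single eye.
eye_ratio = (pam4_levels - 1) / (nrz_levels - 1)
snr_penalty_db = 20 * math.log10(eye_ratio)
print(f"PAM4 eye height: 1/{eye_ratio:.0f} of NRZ "
      f"-> {snr_penalty_db:.1f} dB SNR penalty")
```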
Equalization circuits (CTLE, DFE, and FFE) in the SerDes (serializer/deserializer) blocks of switch ASICs now consume more transistor area and more power than the actual switching logic. In Broadcom's Tomahawk 5 switch ASIC, the SerDes blocks are estimated to consume over 40% of the chip's total power budget.
The industry is spending an increasing fraction of its power and silicon budget fighting copper's physics — and losing.
The photonic answer
Light doesn't have these problems. Specifically:
- Negligible attenuation over data center distances. Standard single-mode fiber loses roughly 0.2–0.4 dB per kilometer, largely independent of the data rate: a 400 Gbps optical signal arrives at the other end of a 100-meter fiber with virtually the same fidelity as a 10 Gbps signal.
- No skin effect. Photons don't carry charge and don't interact with conductor surfaces.
- No crosstalk. Light in a waveguide does not electromagnetically couple to adjacent waveguides (assuming proper mode confinement).
- No resistive heating. The fiber itself consumes zero power. The only power consumption is at the transmitter (laser) and receiver (photodetector).
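The contrast with copper is stark in numbers. A quick comparison, assuming a typical single-mode fiber loss of 0.35 dB/km (an assumed figure, not from this document) against the approximately 1 dB/inch board-trace loss quoted earlier:

```python
FIBER_LOSS_DB_PER_KM = 0.35    # assumed typical single-mode fiber loss
COPPER_LOSS_DB_PER_INCH = 1.0  # board-trace loss at 28 GHz, from the text

fiber_100m = FIBER_LOSS_DB_PER_KM * 0.1          # 100 m = 0.1 km
copper_1m = COPPER_LOSS_DB_PER_INCH / 0.0254     # 1 m in inches
print(f"100 m of fiber:      {fiber_100m:.3f} dB")
print(f"1 m of copper trace: {copper_1m:.1f} dB")
print(f"Copper loses ~{copper_1m / fiber_100m:.0f}x more "
      f"over 1% of the distance")
```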
This is why every major hyperscaler is accelerating the transition to optical interconnects. Google's Jupiter fabric already uses optical connections for rack-to-rack communication. Microsoft's Azure network is moving to co-packaged optics. Meta's next-generation AI training clusters specify all-optical backplanes.
Co-Packaged Optics: bringing light to the chip
The most significant architectural shift underway is Co-Packaged Optics (CPO) — integrating optical engines directly onto the same package as the switch ASIC or GPU, rather than using external pluggable transceivers.
CPO eliminates the longest remaining copper traces — the ones between the ASIC and the front-panel optical module. These traces are typically 15–25 cm long and consume substantial SerDes power. By bringing the optical engine to within millimeters of the ASIC die, CPO can reduce I/O power consumption by 30–50%.
But CPO is an intermediate step. It still uses electronic switching internally. The photons enter the package, get converted to electrons, processed electronically, converted back to photons, and sent out. Every conversion step costs power and adds latency.
Where QLT fits: eliminating the conversion entirely
QLT's architecture takes CPO to its logical conclusion: what if the switching itself is photonic?
In a QLT photonic processor, data arrives as light, is processed as light, and leaves as light. There is no optical-to-electrical conversion. There is no electronic switching fabric. The computations are performed using interference patterns in silicon nitride waveguides, controlled by femtosecond all-optical switches.
This eliminates:
- 100% of SerDes power consumption (typically 40% of ASIC power)
- 100% of optical-electrical-optical conversion losses
- All copper-related signal integrity constraints
- The thermal envelope that limits rack density
The $67 billion optical interconnect market is being created by the failure of copper. QLT is positioned to capture the layer above that: the processing fabric that makes the photonic data path end-to-end, with no electrons in the critical path at all.
The optical interconnect market exists because copper failed. QLT exists because the electron failed.
Sources: LightCounting Optical Interconnect Market Forecast (2024); Ethernet Alliance Technology Roadmap; IEEE 802.3df 1.6 Terabit Ethernet Standard Working Group; Broadcom Tomahawk 5 Architecture Brief; Google Jupiter Network Architecture (SIGCOMM 2022); Microsoft Azure CPO Deployment Whitepaper (OFC 2024).