TL;DR
- Ayar Labs raised $500M Series E at a $3.8B valuation, led by NVIDIA and AMD
- The company generates $91.6M in revenue with just ~210 employees (~$436K per employee)
- Their silicon photonics technology delivers 4-20x more throughput per watt than copper interconnects
- Backed by the “unholy trinity” of chip competitors: NVIDIA, AMD, and Intel (plus MediaTek, Qatar Investment Authority)
- Solving the memory wall problem—where GPUs sit idle 50% of the time waiting for data
The $500M Bet That Light Beats Copper
On March 3, 2026, Ayar Labs announced the largest funding round in silicon photonics history: $500 million at a $3.8 billion valuation.
The lead investors? NVIDIA and AMD—two companies that normally compete for every GPU sale.
When your biggest chip rivals co-lead your funding round, you’re either solving a problem so fundamental that everyone needs the solution, or you’ve built technology so defensible that they’d rather invest than compete.
For Ayar Labs, it’s both.
The Hidden Bottleneck Killing AI Performance
Here’s a number that should concern anyone building AI infrastructure: GPUs sit idle 50% of the time during large-batch inference, waiting for data to arrive.
Not waiting for compute. Waiting for data movement.
This is the memory wall problem, and it’s becoming the dominant constraint on AI scaling.
Why Data Movement is the New Bottleneck
Consider what happens when you run a large language model:
1. The model is too big for one chip. Meta’s LLaMA 3-70B requires ~70GB just to load the model weights at INT8 precision. An NVIDIA H100 has 80GB of memory—but that doesn’t account for the KV-cache, user prompts, or any operational headroom.
2. You need multiple GPUs working in sync. The model gets “sharded” across multiple chips using tensor or pipeline parallelism.
3. Those GPUs must constantly exchange data. During every forward pass, terabytes of partial computation results flow between chips.
4. Copper can’t keep up. Traditional electrical interconnects become the chokepoint—generating heat, losing signal fidelity, and burning power.
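The sharding arithmetic in step 1 is easy to check with a back-of-envelope calculation. This sketch uses the article's figures (70B parameters at INT8, an 80GB H100); the KV-cache shape is an illustrative assumption modeled on a LLaMA-3-70B-style architecture, not a published spec:

```python
# Back-of-envelope memory estimate for serving a 70B-parameter model.
# Illustrative only: bytes-per-parameter and KV-cache shape are assumptions.

def model_memory_gb(params_billions: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GB."""
    return params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 = GB

def kv_cache_gb(layers: int, kv_heads: int, head_dim: int,
                seq_len: int, batch: int, bytes_per_val: int = 2) -> float:
    """KV-cache size: 2 (K and V) * layers * heads * head_dim * tokens * bytes."""
    return 2 * layers * kv_heads * head_dim * seq_len * batch * bytes_per_val / 1e9

weights = model_memory_gb(70, 1.0)  # INT8: 1 byte per parameter -> 70 GB
# Assumed LLaMA-3-70B-style shape: 80 layers, 8 KV heads (GQA), head_dim 128
cache = kv_cache_gb(layers=80, kv_heads=8, head_dim=128, seq_len=8192, batch=8)

print(f"weights: {weights:.0f} GB, KV cache: {cache:.1f} GB, "
      f"total: {weights + cache:.1f} GB")
```

Even at modest batch size and context length, the total clears an H100's 80GB, which is exactly why the model must be sharded across multiple chips.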
The math is brutal: while compute has scaled with Moore’s Law (and beyond, thanks to specialized accelerators), memory bandwidth has improved 10-100x more slowly. And demand keeps climbing: Sam Altman has said OpenAI generates 100 billion words per day, and Google processes 480 trillion tokens per month.
Every one of those tokens requires data to move between chips—and copper is hitting a wall.
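To make those aggregate figures concrete, a quick unit conversion puts them on a per-second basis (the source figures are from the article; the 30-day month is an assumption):

```python
# Converting the quoted aggregate figures into per-second rates.
openai_words_per_day = 100e9       # 100 billion words/day (per the article)
google_tokens_per_month = 480e12   # 480 trillion tokens/month (per the article)

openai_words_per_sec = openai_words_per_day / 86_400          # seconds per day
google_tokens_per_sec = google_tokens_per_month / (30 * 86_400)  # assume 30 days

print(f"OpenAI: ~{openai_words_per_sec / 1e6:.1f}M words/sec")
print(f"Google: ~{google_tokens_per_sec / 1e6:.0f}M tokens/sec")
```

Both work out to millions of tokens every second, each one requiring chip-to-chip data movement.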
The Ayar Labs Solution: 8 Tbps on a Beam of Light
Ayar Labs replaces electrical signals with photons. Their flagship product, TeraPHY, is an optical chiplet that:
- Moves data at 8 Tbps
- Operates at 10 nanosecond latency
- Delivers 4-20x more throughput per watt than copper
The physics are favorable: optical signals travel with far less attenuation than electrical ones, don’t generate electromagnetic interference, and waste less energy as heat. What’s hard is engineering that into a chip that works with existing semiconductor manufacturing processes.
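A quick sketch shows what those headline numbers mean for a single transfer. The 8 Tbps and 10 ns figures come from the article; the 1 GB payload is an arbitrary example:

```python
# Rough transfer-time model for one chip-to-chip link.
LINK_BANDWIDTH_BPS = 8e12  # 8 Tbps (per the article)
LINK_LATENCY_S = 10e-9     # 10 ns first-bit latency (per the article)

def transfer_time_us(payload_bytes: float) -> float:
    """First-bit latency plus serialization time, in microseconds."""
    return (LINK_LATENCY_S + payload_bytes * 8 / LINK_BANDWIDTH_BPS) * 1e6

# Moving a 1 GB activation shard between two accelerators:
t = transfer_time_us(1e9)
print(f"1 GB over the link: {t:.0f} us")
```

At these speeds the serialization time (payload size divided by bandwidth) dominates; the 10 ns latency is negligible even for small messages.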
Why This Matters for AI Infrastructure
Hyperscalers are spending hundreds of billions on AI data centers:
| Company | Announced Investment | Scale |
|---|---|---|
| OpenAI (Stargate) | $500 billion | 5GW data center |
| Meta (Hyperion) | Undisclosed | 5GW data center |
| Meta (Prometheus) | Undisclosed | 1GW data center |
All of these projects face the same constraint: if you can’t move data between chips efficiently, adding more compute doesn’t help. It’s like widening a highway that empties into a single-lane bridge.
Ayar Labs is building that bridge.
The Numbers
| Metric | Value |
|---|---|
| Series E Raised | $500M |
| Valuation | $3.8B |
| Total Funding | ~$875M |
| Revenue (2025) | $91.6M |
| Employees | ~210 |
| Revenue per Employee | ~$436K |
| Throughput Improvement | 4-20x vs copper |
| Bandwidth | 8 Tbps |
| Latency | 10 nanoseconds |
The Investor Roster
The Series E includes:
- NVIDIA (co-lead)
- AMD (co-lead)
- Neuberger Berman
- MediaTek
- Qatar Investment Authority
Previous investors include Intel Capital, Lockheed Martin, GlobalFoundries, Applied Materials, and HPE Ventures.
The significance here can’t be overstated: NVIDIA, AMD, and Intel are all investors. These companies compete fiercely on everything from gaming GPUs to AI accelerators. The fact that all three have backed Ayar Labs suggests they see silicon photonics as a layer below competitive differentiation—infrastructure everyone will need.
The Founding Story: From DARPA Lab to $3.8B Valuation
Ayar Labs emerged from a DARPA research project called POEM (Photonically Optimized Embedded Microprocessors), a collaboration between MIT, UC Berkeley, and CU Boulder.
The Key Breakthrough
In 2015, the team published a paper in Nature showcasing a microprocessor with:
- 70 million transistors
- 850 photonic I/O components
- 300 gigabits per second per square millimeter of bandwidth density
That was 10-50x greater bandwidth than existing electrical microprocessors.
The Founders
Mark Wade (CEO) built the first-ever CPU-to-memory photonic interconnect during his PhD at CU Boulder. He joined MIT to continue photonics research, where he led the breakthrough that embedded optical technology into standard silicon chip designs.
Chen Sun (Chief Scientist) was a PhD candidate in EECS at MIT, working in the photonics lab. He’d previously interned at Rambus and NVIDIA—experience that would prove valuable when building chips that needed to integrate with existing infrastructure.
Alex Wright-Gladstein (former CEO) was an MBA student at MIT who connected with Wade and Sun while TAing a class called Energy Ventures. She recognized the commercial potential and helped the team win $275K at MIT’s clean energy competition.
Vladimir Stojanovic (Chief Architect) was an associate professor at MIT who led the DARPA project. He’d previously co-founded NanoSemi, which was acquired for $96.8 million in 2020.
The Advisory Firepower
Pat Gelsinger, former Intel CEO, sits on Ayar Labs’ board. He’d launched silicon photonics research at Intel two decades earlier:
“I proudly declared that the death of copper was upon us, and everything would switch to optics, but I was just two decades too early. The GPU is sitting there sucking on a straw for more compute. Everyone wants their clusters to be bigger, but the physics of copper is limiting how big they can get.”
The Competition
Ayar Labs isn’t alone in the silicon photonics race:
| Company | Latest Funding | Focus |
|---|---|---|
| Ayar Labs | $500M Series E | Co-packaged optics for AI chips |
| Celestial AI | $250M (2025) | Photonic fabric for compute |
| Lightmatter | $400M Series D | Photonic interconnects |
NVIDIA has also made $2 billion in direct investments in Coherent and Lumentum—established optical networking companies that could complement or compete with Ayar’s approach.
Ayar’s Differentiation
- Manufacturing compatibility: Ayar’s chips are built using standard CMOS processes, making them easier to integrate with existing semiconductor supply chains.
- Strategic investor alignment: Having NVIDIA, AMD, and Intel as investors provides both validation and potential customer relationships.
- Focus on co-packaged optics: Rather than building optical links between racks, Ayar focuses on bringing optics directly to the chip package—where latency matters most.
What’s Next
The $500M will fund:
- Scaling manufacturing and test capacity
- Opening a new office in Hsinchu, Taiwan (home to TSMC and a major semiconductor ecosystem)
- Accelerating deployment of co-packaged optics in production AI systems
The timeline is aggressive but necessary. As AI models grow and inference demand explodes, the memory wall problem will only intensify. Companies that solve data movement at scale will capture enormous value in the AI infrastructure stack.
Why This Matters
The AI hardware narrative has focused on compute: who has the most powerful GPU, the most FLOPS, the biggest training clusters.
But compute without data is useless. Ayar Labs is betting that the next wave of AI infrastructure investment will focus on connectivity—and they’ve positioned themselves at the center of that shift.
When NVIDIA, AMD, and Intel all agree on something, it’s worth paying attention.
FAQ
What is silicon photonics?
Silicon photonics is the technology of building optical components (lasers, modulators, detectors) on silicon chips using standard semiconductor manufacturing processes. It allows data to be transmitted using light (photons) instead of electrical signals (electrons).
Why is the memory wall a problem for AI?
Large AI models must be split across multiple GPUs. Those GPUs need to constantly exchange data during training and inference. When data movement becomes the bottleneck (rather than compute), adding more GPUs doesn’t proportionally improve performance.
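The diminishing returns can be illustrated with an Amdahl's-law-style toy model. The fixed communication fraction is an illustrative assumption, loosely tied to the "50% idle" figure cited earlier:

```python
# Toy model of why extra GPUs stop helping once communication dominates.
# Assumes a fixed fraction of each step is serial communication time.

def effective_speedup(n_gpus: int, comm_fraction: float) -> float:
    """Amdahl-style speedup: only the compute fraction parallelizes."""
    compute_fraction = 1 - comm_fraction
    return 1 / (compute_fraction / n_gpus + comm_fraction)

# With 50% of time spent on data movement, speedup is capped near 2x:
for n in (2, 8, 64):
    print(f"{n:>2} GPUs -> {effective_speedup(n, 0.5):.2f}x speedup")
```

Under this model, no amount of added compute pushes speedup past 1/comm_fraction, which is why reducing the communication fraction itself (faster interconnects) matters more than adding chips.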
How does Ayar Labs’ technology work?
Ayar Labs makes optical chiplets that attach to existing processors. These chiplets convert electrical signals to light, transmit data optically, then convert back to electrical signals—all at nanosecond speeds and with dramatically lower power consumption than copper.
Who are Ayar Labs’ customers?
Ayar Labs works with hyperscale cloud providers, AI infrastructure companies, and semiconductor manufacturers building next-generation compute systems.
What is co-packaged optics?
Co-packaged optics (CPO) places optical components directly in the chip package, rather than on separate pluggable modules. This reduces latency and power consumption while increasing bandwidth density.