
Enfabrica Raises $115M to Advance AI Networks with ‘Scalability Barrier-Breaking’ SuperNIC Chip

Scaling artificial intelligence isn't just about packing more GPUs into a system; it's about making them work smarter, faster, and in perfect sync. Yet even the most advanced AI networks hit a ceiling: today's cluster designs typically top out at around 100,000 GPUs, a threshold that has become a bottleneck for innovation. For organizations racing to train larger language models, run real-time inference, or deploy retrieval-augmented generation (RAG), these constraints have forced compromises on speed, scalability, and cost-effectiveness.

Enfabrica Corporation has taken a bold step to break through this barrier. At Supercomputing 2024 (SC24), the company unveiled its Accelerated Compute Fabric (ACF) SuperNIC chip, a revolutionary solution capable of scaling AI clusters to 500,000 GPUs while delivering 3.2 Terabits per second (Tbps) of bandwidth. Alongside this milestone, Enfabrica announced a $115 million Series C funding round, led by Spark Capital and backed by industry heavyweights like Arm, Cisco Investments, and Samsung Catalyst Fund. The funding signals strong confidence in Enfabrica’s ability to reshape AI infrastructure and scale compute capabilities for next-generation workloads.

At the heart of Enfabrica's innovation lies its ACF SuperNIC chip, designed to address inefficiencies that plague modern AI data centers. In the words of co-founder and CEO Rochan Sankar, "Current AI infrastructure leaves GPUs underutilized, waiting for data to flow through bottlenecked pipelines. Our chip eliminates these bottlenecks, making GPUs and other accelerators work at their true potential."

A New Era for AI Networking

The ACF SuperNIC chip is more than an incremental improvement—it redefines how GPUs, CPUs, and accelerators communicate within a data center. Traditional networking technologies often leave GPUs idling, waiting for data to flow. Enfabrica’s SuperNIC addresses this inefficiency by enabling GPUs to connect with multiple network components simultaneously, quadrupling bandwidth and introducing unmatched multipath resiliency.

The technology also introduces Resilient Message Multipathing (RMM), which eliminates job stalls caused by network failures, boosting training efficiency and improving uptime. With its high-radix design, 800-Gigabit Ethernet connectivity, and support for more than 500,000 GPUs in a two-tier network architecture, the ACF SuperNIC stands out as a critical enabler for large-scale AI workloads, including training, inference, and retrieval-augmented generation (RAG).
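
To make the multipathing idea concrete, here is a minimal, purely illustrative sketch in Python. It is not Enfabrica's implementation; the path names, chunking, and failure rates are made up. The point is simply that when one path fails, in-flight traffic is resent over a surviving path rather than stalling the whole job.

```python
# Toy illustration of resilient message multipathing (illustrative only, not
# Enfabrica's RMM): stripe a transfer across several network paths and, when a
# path fails, resend its chunks over a surviving path instead of stalling.
import random

PATHS = ["plane-0", "plane-1", "plane-2", "plane-3"]  # hypothetical fabric planes

def send_chunk(path: str, chunk: int) -> bool:
    """Pretend to push one chunk over a path; randomly fail to mimic a link fault."""
    return random.random() > 0.05  # ~5% simulated failure rate per attempt

def send_message(chunks):
    """Stripe chunks across healthy paths; on failure, retry on a surviving path."""
    healthy = list(PATHS)
    for chunk in chunks:
        while True:
            path = healthy[chunk % len(healthy)]   # spread load across paths
            if send_chunk(path, chunk):
                break                              # chunk delivered, move on
            # The chosen path failed: retire it and resend the same chunk on
            # another path, so the training step proceeds instead of stalling.
            healthy.remove(path)
            if not healthy:
                raise RuntimeError("all network paths are down")

send_message(range(16))
```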

“Today is a watershed moment for Enfabrica,” said Sankar. “We set out to design AI networking silicon from the ground up, focusing on the needs of system architects and software engineers managing compute clusters at scale. This fundraise and the upcoming availability of our ACF SuperNIC silicon in Q1 2025 mark major steps forward in that mission.”

The Numbers Behind the Innovation

The ACF SuperNIC delivers:

  • 800, 400, and 100 Gigabit Ethernet interfaces: Supporting high-throughput connections across diverse GPU servers.
  • 160 PCIe lanes on a single chip: Allowing seamless scaling of AI clusters in a more efficient network design (a quick sanity check of these figures follows the list).
  • Zero-copy data transfers with Collective Memory Zoning: Minimizing latency and maximizing host memory efficiency for better utilization of GPU fleets.
  • Software-Defined RDMA Networking: Offering customizability and future-proofing for large-scale AI network topologies.
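
To put these figures in context, here is a quick back-of-the-envelope check in Python. It assumes details the article does not spell out: that the 3.2 Tbps of network bandwidth is realized as four 800 GbE ports, and that the 160 PCIe lanes run at Gen5 signaling of roughly 32 Gb/s per lane.

```python
# Back-of-the-envelope check of the headline figures. Assumptions (not stated
# in the article): the 3.2 Tbps comes from four 800 GbE ports, and the 160
# PCIe lanes run at Gen5 rates of roughly 32 Gb/s per lane.
ports_800g = 4
network_tbps = ports_800g * 800 / 1000             # 4 x 800 Gb/s = 3.2 Tb/s

pcie_lanes = 160
gbps_per_lane = 32                                  # assumed PCIe Gen5 raw rate
host_tbps = pcie_lanes * gbps_per_lane / 1000       # ~5.1 Tb/s of host bandwidth

print(f"Network side: {network_tbps:.1f} Tb/s")     # matches the 3.2 Tbps claim
print(f"Host side:   ~{host_tbps:.1f} Tb/s")        # headroom to keep ports fed
```

Under those assumptions, host-side PCIe bandwidth comfortably exceeds the network side, consistent with the chip's stated goal of keeping GPUs fed with data rather than waiting on I/O.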

This comprehensive design promises to cut compute costs by up to 50% for large language model (LLM) inference and 75% for deep learning recommendation model (DLRM) inference while delivering consistent performance.

Sankar emphasized, “The ACF SuperNIC is purpose-built for the next wave of AI workloads, including real-time inference and generative AI applications. Our Resilient Message Multipathing technology ensures that even in the face of hardware failures, AI workflows proceed uninterrupted. It’s about reliability at scale.”

The $115 million funding round attracted an impressive mix of new and existing investors. Notable participants include Arm, Cisco Investments, Samsung Catalyst Fund, Maverick Silicon, and VentureTech Alliance, alongside returning backers like Atreides Management, Sutter Hill Ventures, and Valor Equity Partners. Enfabrica plans to use the funding to ramp up production of its ACF SuperNIC chips, expand its R&D team, and accelerate the development of next-generation products.

“The participation of such a diverse group of investors speaks to the commercial value and transformative potential of our technology,” said Sankar.

Scaling the Future of AI

Founded in 2020 by industry veterans from Broadcom, Google, Cisco, and AWS, Enfabrica has made it its mission to address one of AI’s biggest challenges: enabling efficient communication across massive compute clusters. Its technology introduces a hub-and-spoke network model that reduces idle GPU time and ensures efficient data flow.

By making the training of AI models faster and more efficient, a process that often takes weeks or even months, Enfabrica positions its technology as indispensable in the age of generative AI. With applications ranging from large-scale training to real-time inference, the company is paving the way for data centers capable of handling the demands of today’s AI-driven world.

As Sankar succinctly put it, “We’re not just creating a product—we’re laying the groundwork for the next decade of AI advancements. The ACF SuperNIC is about freeing enterprises from today’s limitations and empowering them to achieve the impossible.”

Anshika Mathews
Anshika is an Associate Research Analyst working for the AIM Leaders Council. She holds a keen interest in technology and related policy-making and its impact on society. She can be reached at anshika.mathews@aimresearch.co