TensorWave has raised $100 million in Series A funding to speed up the deployment of the world’s largest liquid-cooled AMD GPU cluster, comprising 8,192 MI325X GPUs.
The round was co-led by Magnetar and AMD Ventures, and was also supported by Prosperity7, Maverick Silicon, and Nexus Venture Partners. This investment represents a major milestone in AI cloud infrastructure, cementing TensorWave’s commitment to AMD’s next-gen computing hardware.
Commitment To AMD Hardware
The company's decision to concentrate solely on AMD is deliberate, informed by the belief that specialisation yields better performance.
CEO Darrick Horton is convinced that AMD's Instinct accelerators are ready to tackle massive-scale AI training. His rent-a-GPU operation was an early backer of Nvidia's rival, as El Reg reported, and Horton acknowledged in an earlier interview that "there were definitely some bumps in the road with the first-generation product." He added: "It's widely publicly recorded that AMD training performance was not great in 2024."
But AMD's Instinct MI325X GPU, with 256GB of HBM3e memory and optimised for deep-learning workloads, is a major step forward in AI computing. TensorWave sets itself apart from other cloud providers by giving developers unfiltered, direct access to this hardware rather than hiding it behind proprietary abstraction layers. By optimising its stack around AMD's architecture, TensorWave improves efficiency across AI training, fine-tuning, and inference workloads.
Horton reaffirmed his commitment to AMD: "Our belief is simple: specialization wins. We've been AMD-native from day one. That depth of focus has let us unlock performance gains across training, fine-tuning, and inference by optimizing every layer of the stack around MI325X."
An Open AI Environment
Scaling AI infrastructure involves more than sheer computation. As larger, more memory-intensive AI models become common, conventional air cooling struggles to keep systems stable. TensorWave is addressing this challenge with the world's largest direct liquid-cooled AMD GPU cluster, which delivers optimal performance without thermal throttling.
This cutting-edge cooling enables maximum GPU density per rack, sustains high throughput for long-duration AI training runs, improves energy efficiency, and extends hardware lifespan. Together, these engineering improvements mean the infrastructure is designed for scale and reliability.
TensorWave is also defining the future of open AI environments.
Developers are growing increasingly frustrated with locked-down platforms, volatile pricing models, and limited access to foundational AI hardware. TensorWave meets these challenges by doubling down on AMD's open ROCm platform, allowing researchers and businesses to develop AI models free from vendor lock-in. Its mission is well defined: deliver high-performance computing to developers who require flexibility, transparency, and unrestricted control over their workloads. The Series A funding will enable TensorWave to accelerate the rollout of its MI325X cluster, scale its liquid-cooled architecture, and expand operations to serve hyperscalers and enterprise AI teams.
As AI workloads demand more memory, greater consistency, and higher throughput, TensorWave is emerging as a niche cloud provider purpose-built to serve them. The firm is spearheading a shift in AI computing, away from generalised cloud infrastructure and toward purpose-built solutions optimised for deep learning. The sector is entering an era in which AI is not just revolutionising research and automation but also redefining how cloud infrastructure is built and deployed.
By aligning with AMD's high-performance computing innovations, the firm is ensuring that developers and businesses can harness the potential of AI without being held back by technological limitations. Whether organisations are training edge models, fine-tuning language models, or scaling inference workloads, TensorWave delivers the speed, consistency, and direct access needed for AI innovation breakthroughs. This new round of funding signals robust investor confidence in TensorWave's vision and underscores the growing demand for specialised AI infrastructure as machine learning continues to expand.