TensorWave Secures $100M to Power AMD-Only AI Cloud

TensorWave has raised $100 million in Series A funding to speed up the deployment of the world’s largest liquid-cooled AMD GPU cluster, comprising 8,192 MI325X GPUs.
The round was co-led by Magnetar and AMD Ventures, and was also supported by Prosperity7, Maverick Silicon, and Nexus Venture Partners. This investment represents a major milestone in AI cloud infrastructure, cementing TensorWave’s commitment to AMD’s next-gen computing hardware.


Commitment To AMD Hardware


The company’s decision to focus exclusively on AMD hardware was made with purpose, grounded in the belief that specialisation begets better performance.
CEO Darrick Horton is convinced that AMD’s Instinct accelerators are poised to tackle massive-scale AI training. Horton’s rent-a-GPU offering was an early backer of Nvidia’s rival, and as he acknowledged in an earlier interview with El Reg, “there were definitely some bumps in the road with the first-generation product. It’s widely publicly recorded that AMD training performance was not great in 2024.”

But AMD’s Instinct MI325X GPU, with 256GB of HBM3e memory and optimised for deep learning workloads, is a giant step forward in AI computing. TensorWave separates itself from other cloud providers by giving developers unfiltered, direct access to this advanced hardware instead of hiding it behind proprietary abstraction layers. By optimising its stack on top of AMD’s architecture, TensorWave increases efficiency across AI training, fine-tuning, and inference workloads.
Horton reaffirmed his conviction in AMD: “Our belief is simple: specialization wins. We’ve been AMD-native from day one. That depth of focus has let us unlock performance gains across training, fine-tuning, and inference by optimizing every layer of the stack around MI325X.”


An Open AI Environment


Scaling AI infrastructure involves more than sheer computation. As larger, more memory-hungry AI models become common, conventional air cooling struggles to maintain system stability. TensorWave is overcoming this challenge with the world’s largest direct liquid-cooled AMD GPU cluster, which delivers optimal performance without thermal throttling.
This cutting-edge cooling enables maximum GPU density per rack, maintains high throughput for long-duration AI training tasks, improves energy efficiency, and extends hardware lifespan. Together, these engineering improvements mean the infrastructure is designed for scale and reliability.
TensorWave is also defining the future of open AI environments.

Developers are growing increasingly frustrated with locked-down platforms, volatile pricing models, and limited access to foundational AI hardware. TensorWave meets these challenges by doubling down on AMD’s open ROCm platform, allowing researchers and businesses to develop AI models free from vendor lock-in. Its mission is well defined: deliver high-performance computing to developers who require flexibility, transparency, and unrestricted control over their workloads. The Series A funding will enable TensorWave to speed up the rollout of its MI325X cluster, scale up its liquid-cooled architecture, and expand operations to serve hyperscalers and enterprise AI teams.

As AI workloads demand more memory, higher consistency, and improved throughput, TensorWave is emerging as a niche cloud provider purpose-built to serve them. The firm is spearheading a paradigm shift in AI computing, moving from generalised cloud infrastructure to purpose-built solutions optimised for deep learning. The sector is entering an era in which AI is not just revolutionising research and automation but also redefining how cloud infrastructure is built and deployed.

By aligning with AMD’s high-performance computing innovations, the firm is ensuring that developers and businesses can harness the potential of AI without being hemmed in by technological limitations. Whether organisations are training edge models, fine-tuning language models, or scaling inference workloads, TensorWave delivers the speed, consistency, and direct access needed for AI breakthroughs. This new round of funding signals robust investor confidence in TensorWave’s vision and underscores the growing demand for specialised AI infrastructure as machine learning continues to expand.

Upasana Banerjee
Upasana is a Content Strategist with AIM Research. Prior to her role at AIM, she worked as a journalist and social media editor, and holds a strong interest for global politics and international relations. Reach out to her at: upasana.banerjee@analyticsindiamag.com