Positron Raises $23.5M After New CEO Steps In

People are realizing they need more competition in AI compute.

The AI hardware space is seeing increased competition as enterprises seek alternatives to Nvidia’s high-cost GPUs. Positron, a Reno-based AI chip startup, has raised $23.5 million in seed funding to scale the production of its energy-efficient inference chips, designed to process AI workloads at a fraction of the power and cost of Nvidia’s dominant hardware. The funding round includes notable investors such as Valor Equity Partners—known for backing Elon Musk’s ventures—along with Atreides Management, Flume Ventures, and Resilience Reserve.

While Nvidia continues to dominate AI computing with an estimated 80% market share, rising costs and increasing concerns over vendor lock-in are pushing enterprises to explore alternatives. Positron is betting that its U.S.-manufactured chips, which deliver significantly better power efficiency while maintaining high performance, can challenge the status quo.

Nvidia’s Grip on AI Compute

AI workloads are typically divided into two stages: training and inference. Training involves building AI models using vast amounts of data, while inference refers to deploying those models in real-world applications, such as chatbots, recommendation systems, and enterprise automation.

While Nvidia’s GPUs are considered the best-in-class for training large models, inference has different technical and economic demands. Running AI applications at scale requires hardware that balances performance, cost, and power efficiency, especially as businesses look to manage the soaring costs of AI adoption.

This is where Positron is focusing its efforts. The company claims its chips offer:

  • 3.5x better performance per dollar compared to Nvidia’s H100 GPU
  • 3.5x greater power efficiency
  • 70% faster inference at 66% lower power consumption
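
Taken at face value, these multipliers can be combined into a rough back-of-the-envelope comparison. The sketch below uses hypothetical baseline numbers (not figures published by Positron or Nvidia) purely to show how the "70% faster at 66% lower power" claim translates into performance per watt:

```python
# Back-of-the-envelope comparison of the claimed multipliers against a
# hypothetical GPU baseline. All baseline numbers are illustrative only.
baseline_tokens_per_sec = 1000.0   # hypothetical H100 inference throughput
baseline_power_watts = 700.0       # hypothetical GPU power draw

# Positron's claim: 70% faster inference at 66% lower power consumption.
positron_tokens_per_sec = baseline_tokens_per_sec * 1.70
positron_power_watts = baseline_power_watts * (1 - 0.66)

# Performance per watt, relative to the baseline.
baseline_perf_per_watt = baseline_tokens_per_sec / baseline_power_watts
positron_perf_per_watt = positron_tokens_per_sec / positron_power_watts

ratio = positron_perf_per_watt / baseline_perf_per_watt
print(f"relative perf/W under these assumptions: {ratio:.1f}x")
```

Notably, the 70%/66% pair works out to roughly 5x performance per watt under these assumptions, a larger figure than the headline 3.5x, which suggests the claims come from different benchmarks or configurations.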

The startup is already shipping products to U.S.-based data centers and neocloud providers, allowing customers to test and deploy its chips in production environments.

These efficiency gains are critical as AI adoption expands. While companies like OpenAI, Google, and Meta are spending tens of billions on AI infrastructure—Meta alone has budgeted up to $65 billion for 2025—data centers are struggling with rising energy costs and hardware constraints. Positron’s power-efficient chips offer a compelling alternative.

“With this funding, we’re scaling at a pace that AI hardware has never seen before,” said Mitesh Agrawal, Positron’s newly appointed CEO. “Our solution is growing rapidly because it outperforms conventional GPUs in both cost and energy efficiency while eliminating reliance on foreign supply chains.”

Lambda’s Former COO Is Now Positron CEO

Positron’s new CEO, Mitesh Agrawal, isn’t new to the AI compute space. Previously the Chief Operating Officer and Head of Cloud at Lambda, he played a key role in growing the company’s revenue from $500K to $500M annually while securing over $1 billion in funding.

Agrawal joins Positron’s co-founders, Thomas Sohmers and Edward Kmett, who bring their own impressive backgrounds. Sohmers, a Thiel Fellow and semiconductor industry veteran, serves as Chief Technology Officer, while Kmett, a renowned mathematician and functional programming expert, is the company’s Chief Scientist.

This leadership shakeup comes as Positron accelerates its expansion efforts, already shipping products to data centers and neocloud providers across the U.S.—an impressive feat for a startup still in its first year.

The AI boom has triggered an unprecedented demand for computational power, but data centers are struggling to keep up. Traditional GPU-based setups consume massive amounts of electricity, with some high-end configurations drawing over 10,000 watts per server—a major constraint for legacy infrastructure.

Positron’s architecture is built to address this bottleneck. Instead of relying solely on GPU horsepower, its memory-optimized design achieves >93% bandwidth utilization—compared to just 10-30% in GPUs—allowing it to process AI models more efficiently.
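
Bandwidth utilization matters because large-model inference is typically memory-bound: generating each token requires streaming roughly the entire model's weights from memory. A minimal sketch, using hypothetical hardware numbers (not vendor specifications), shows why utilization translates almost directly into throughput:

```python
# Why bandwidth utilization matters for memory-bound inference.
# Decoding one token requires streaming (roughly) the whole model's weights
# from memory, so: tokens/sec ~= peak_bandwidth * utilization / model_bytes.
# All hardware numbers below are illustrative, not vendor specifications.

model_bytes = 14e9            # e.g. a 7B-parameter model at 2 bytes/weight
peak_bandwidth = 3.35e12      # hypothetical HBM peak, in bytes/sec

for label, utilization in [("GPU at 20% utilization", 0.20),
                           ("Positron at 93% utilization", 0.93)]:
    tokens_per_sec = peak_bandwidth * utilization / model_bytes
    print(f"{label}: ~{tokens_per_sec:.0f} tokens/sec per stream")
```

Under these assumptions, the same physical memory delivers several times the decoding throughput simply by being kept busier, which is the essence of Positron's pitch.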

This has tangible financial benefits. With a 50% reduction in data center CapEx, Positron’s customers can deploy AI workloads without the typical costs associated with upgrading power-hungry hardware.

Scott McNealy, Operating Partner at Flume Ventures, sees this as a game-changer.

“Investing in domestic AI hardware is a strategic imperative for securing America’s global AI posture,” he said. “Positron is proving that world-class AI compute doesn’t have to come from overseas.”

Why Positron’s U.S.-Made Chips Matter

Unlike most AI chip startups, which rely on manufacturing partners in Taiwan or China, Positron has built an entirely U.S.-based supply chain. Its chips are fabricated and assembled in Chandler, Arizona, by Intel-owned Altera, a company specializing in field-programmable gate arrays (FPGAs).

This domestic supply chain provides two key advantages:

  1. Reduced geopolitical risk – As the U.S. and China remain locked in semiconductor tensions, businesses are increasingly looking for domestic AI hardware sources to avoid potential disruptions.
  2. Faster time-to-market – By working with a local fab, Positron can rapidly iterate on its chip designs without the long lead times associated with offshore production.

“The industry is finally waking up to the dangers of Nvidia controlling 90% of the market,” said Sohmers. “People are realizing they need more competition in AI compute.”

What’s Next? 

While Positron’s first-generation chips leverage FPGAs, its next big move will be a transition to application-specific integrated circuits (ASICs): custom-built chips optimized for AI inference.

FPGAs offer flexibility by allowing companies to reprogram the hardware for different workloads, but ASICs provide higher efficiency and lower costs at scale. Positron’s strategy is to perfect its chip architecture on FPGAs before committing to ASIC production, ensuring a smooth transition to mass-market adoption.

“We’re not stuck with FPGAs,” Sohmers explained. “Our ASIC is being designed based on real customer needs, rather than the guesswork that often plagues AI chip startups.”

Unlike many AI chip companies that struggle to commercialize their technology, Positron has already demonstrated real customer traction, shipping its first-generation products to enterprise clients before this latest funding round.

Can Positron Take on Nvidia?

Competing with Nvidia isn’t just about chips; it’s about the entire AI ecosystem.

Nvidia’s dominance comes from its CUDA software stack, which has become the default programming platform for AI developers. Companies that have tried to challenge Nvidia in the past—such as Graphcore and Cerebras—have struggled not because of hardware shortcomings, but because of software adoption barriers.

Positron is taking a different approach by ensuring plug-and-play compatibility with Hugging Face and OpenAI APIs, making it easier for enterprises to integrate Positron chips into existing AI workflows.
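
In practice, "OpenAI API compatibility" usually means the server accepts the same request paths and JSON schema as OpenAI's endpoints, so client code only needs a different base URL. A minimal sketch of such a request (the Positron endpoint URL and model name below are hypothetical, invented for illustration; the article does not document Positron's actual interface):

```python
import json

# What "OpenAI API compatible" typically means: the server accepts the same
# request schema at the same paths, so only the base URL changes.
# The endpoint and model name below are hypothetical, for illustration only.
BASE_URL = "https://inference.example-positron-host.com/v1"

payload = {
    "model": "llama-3-8b-instruct",   # whatever model the provider hosts
    "messages": [{"role": "user", "content": "Hello"}],
    "max_tokens": 64,
}

# An OpenAI-style client would POST this JSON to f"{BASE_URL}/chat/completions".
request_body = json.dumps(payload)
print(f"POST {BASE_URL}/chat/completions")
print(request_body)
```

Because the schema is unchanged, existing tooling built against the OpenAI API can be pointed at a compatible backend without rewriting application code, which is the integration story Positron is betting on.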

Additionally, Positron is targeting a specific market gap—inference—where cost and power efficiency matter most. As AI workloads shift from research labs to large-scale enterprise applications, demand for inference chips could surpass training chips, giving Positron a growing market to tap into.

While Nvidia remains the dominant force in AI hardware, its high prices and power consumption are pushing enterprises to seek practical alternatives. Positron is addressing this demand by focusing on AI inference—a segment that is expected to outgrow AI training in enterprise deployments.

The company is already shipping products, has secured backing from deep-pocketed investors, and is actively scaling its operations. Its leadership team has experience in growing AI infrastructure businesses, and its U.S.-based production strategy ensures supply chain stability.

Yesterday, Reuters reported that OpenAI is nearing the launch of its first custom-designed AI chip and plans to send the design to TSMC for validation in the coming months, before mass production begins in 2026.

Led by Richard Ho, a former Google chip designer, OpenAI’s 40-person in-house team collaborated with Broadcom to build a custom AI processor. Unlike Positron, OpenAI’s chip is designed for both training and inference, though it will initially be deployed in limited quantities for AI inferencing tasks. The chip will be manufactured on TSMC’s 3nm node and is expected to include high-bandwidth memory, making it a direct competitor to Nvidia’s GPUs.


Anshika Mathews
Anshika is the Senior Content Strategist for AIM Research. She holds a keen interest in technology and related policy-making and its impact on society. She can be reached at anshika.mathews@aimresearch.co