
Breaking Through The Barriers Of AI Computing Costs, For Deployment At Scale With Naveen Verma

At the end of the day, in order to be successful as a business, you have to solve a problem, and it has to be a problem that really matters and that people really care about.

In the realm of advancing technology, the demand for Artificial Intelligence (AI) solutions is surging. Yet, a major hurdle hindering widespread adoption is the high cost of AI computing. Overcoming these barriers is essential for scaling AI deployment effectively.

To give us more insights on this, for this week’s CDO insights we have with us Naveen Verma, Ph.D., who has been a professor of electrical and computer engineering at Princeton University since 2009, where he has done pioneering research in a range of emerging technologies and systems. His breakthrough discoveries in next-generation computing have been widely recognized in industry and academia, and have led to step-change increases in compute performance and efficiency. He spent six years leading the deep research behind EnCharge AI’s core technology before co-founding the company.

The interview will delve into the genesis of EnCharge AI’s innovative product and its development under the guidance of Naveen, exploring the intersection of academic research and entrepreneurial pursuits. Discussions will touch upon the collaborative efforts behind the product’s creation, balancing the demands of academia with entrepreneurship, strategies for securing funding, competitive positioning against industry giants, potential collaborations for business expansion, and long-term projections for EnCharge AI’s growth and technological advancement.

AIM: What sparked the idea for your product, and how did your research background influence its development? Additionally, could you elaborate on the collaborative process, particularly how your students contributed to its creation?

Naveen Verma: It’s an exciting story for me personally. I’ve been a professor at Princeton University since 2009, and my research here has focused on next-generation approaches to computing. We noticed very early on that computing is extremely important, and that the things you can do with computing are becoming more powerful. But what we’ve seen is that the fundamental resources needed to do those things are increasing very rapidly.

As AI started to come into the picture, it started to increase even more rapidly. In fact, not only did the compute requirements start to increase more rapidly, but the compute capabilities started to increase more rapidly. It’s a very exciting time to be a researcher in this space, but in particular, it’s an exciting time to be somebody who’s doing fundamental research in this space.

If you look at the trajectory that AI has taken and how it has driven computing, the explosion in compute requirements that we’ve seen, both in terms of the number of operations and in terms of the amount of data, has just been incredible. So even in the best days of computing hardware, when Moore’s law was in full flight and transistors were scaling, the pace at which compute requirements have grown has far exceeded anything we could have done there.

And so it became a really clear picture: If you’re in the fundamental research space, we need to do things fundamentally different. And that is where the fun starts. That’s where you say, “Listen, the way we’ve done things in the past isn’t going to work. Let’s start to really think broadly and disruptively about what can be done.”

My own group has been very interested in next-generation paradigms for computing, but what’s important to remember is that computing systems are complex. They are full stacks. They start from the physics of devices that do small manipulations, go up to the circuits and architectures that allow those devices to work together toward larger functions and capabilities, and then there’s the complexity of the software stack. I would say one of the unique aspects of my research group at Princeton is that we’re an extremely full-stack group.

So we build chips and send them to foundries to get fabricated. We even fabricate transistors and chips in my lab. But the point is we build very complex chips that can really be representative of the challenges that we face in industry and in state-of-the-art systems. And then we also build that full architecture and software stack. So we build software to talk to those chips. We build software that figures out how to take applications and map them to the chips, and then we build it all the way up to the programming interfaces where people are developing those applications.

That’s a highly collaborative process. But having all of that within one research group allows us to really think on a fundamental level on how to take computing in these new directions.

AIM: How do you manage being a full-time professor at Princeton University while also running a business based on your research? Can you share your strategies for balancing academia and entrepreneurship, including securing funding and scaling your company alongside your teaching and research responsibilities?

Naveen Verma: The most complete answer is that you need a lot of support, not only from the teams you work with on the university side and the startup side, but also from the institutions. You need a university that understands the value of innovation and of driving research out of the lab, and you need a startup, investors, and stakeholders who understand the importance of fundamental research and the impact it can have on innovation. So there needs to be alignment across all of these things that I rely on heavily and work closely with.

I think the alignment comes from the following: if you look at some of the biggest challenges we have in the world today, and computing, especially driven by AI, is a clear example of such a challenge, these are big problems. They’re not going to be solved in a sustainable way without really transformative solutions. And the reality is we need those solutions in society, but in many cases they are going to be sourced out of fundamental research. That’s how we find fundamentally new approaches that can put us on new trajectories to meet requirements that would otherwise be unsustainable.

If you think about that, universities have a very important role to play in solving major societal problems. EnCharge, the research we did at the university, and our commercialization are a clear example of this. This is a fundamentally new way of doing compute. To really understand it, to solve all of the issues around it, and to put it into a platform that people can use, there’s a lot of fundamental research that has to be done.

We didn’t just go and raise money when we had the first idea for this new and transformative approach to computing, because there were a lot of fundamental challenges that needed to be solved all the way up the stack. If you do raise that money too early, you often don’t give these fundamental innovations the chance to solve real problems, because once you focus directly on commercialization, you lose the opportunity to research things and understand them deeply. Having done that in the university, solved the problems, made mistakes, and landed on the right solutions de-risks this completely new technology. So now, when you have the opportunity to commercialize, you can be extremely product-focused, not worry about the technology, and start to think about the customers, the solutions, the ecosystem partners. So there’s a real continuity in delivering impact that starts with fundamental research and fundamental technological transformations and runs all the way through to innovation, commercialization, building products, and putting them into real customer solutions. There’s a real consistency and synergy across these.

AIM: Could you elaborate on how you’re competing with larger giants in the field and why someone might choose to use a chip built by EnCharge AI in their computers over something built by Nvidia or AMD, given their extensive libraries and offerings?

Naveen Verma: If you look at the landscape, there are a lot of AI startups, and there’s also a lot of activity in building new AI hardware at the big companies. But there are two categories of effort here. First, there are people building digital accelerators that are purpose-built for AI compute. They’re taking the same fundamental technology we’ve had for years, digital circuits and architectures, and specializing it as much as they can for AI. That’s essentially what Nvidia and Qualcomm are doing. But using the same fundamental technology we’ve always used limits how far that can take you in meeting power and compute performance requirements; you will only do as well as that technology can do. There are also startups trying this, but it’s still the same fundamental technology, so everyone is engineering within the same constraints of physics.

Now, there’s a new category built on new paradigms of computing, and that’s where my research has focused over the last 15 to 20 years. The interesting thing here is that you’re saying, ‘Okay, let’s break the barriers of that physics. Let’s use new physics to build new approaches to computing that can give us much higher efficiency and much higher capability.’ There are also startups looking at a whole range of these, but they are fundamentally different approaches, and there are many technological barriers they need to understand and solve for. The distinction with EnCharge is that we’ve come with a truly differentiated fundamental technology, but we’ve also spent the time to understand it, de-risk it, and develop it, so that we’re in a position to be very product- and customer-focused. There’s also the question of what the fundamental technology is, what new physics it accesses, and what barriers it breaks, but at a high level, that’s our positioning in this landscape.

AIM: Are you collaborating with someone to expand this business? As an academic, how did it work for you to secure funding? What did you include in your presentation that convinced the VCs of its value? And now, how do you plan to utilize that funding to scale?

Naveen Verma: At the end of the day, in order to be successful as a business, you have to solve a problem, and it has to be a problem that really matters and that people really care about—preferably a lot of people—so that on a business level you have a sustainable model for building revenue. It also has to be a big enough problem that you have impact at the level you want from your fundamental research. The journey here is really about starting from a fundamental technology, understanding what problem it solves, and then working with partners who are seeing the problems we described in AI computing to figure out which is the right problem among these to solve and how we should solve it. Our solution is this fundamental solution, but it needs to be positioned so that it actually solves the problem somebody is facing, and the more specific we can get about that problem, by working with a specific partner and a specific ecosystem, the better we will solve it. It’s about articulating what that problem is and making sure it’s a big enough problem that people care about.

If you look at the space of computing today, there are a lot of problems, and I would say the paramount one is that compute power consumption is unsustainable, especially as we start to scale out AI. Today, AI is done largely in the data center, which in many ways is restrictive: it’s limited by cost structures, by the ways data and software processes are managed in the data center, by latency, and by privacy concerns in deploying data to data centers. So there’s a clear trend, both in terms of accessibility to AI and in terms of user experience and the value users get from AI, where AI is being disaggregated into all sorts of devices: our laptops, mobile devices, all these kinds of things. Now you start to think about what those systems look like and who the partners and players delivering them to the market are, and that’s a complex constellation of software developers, operating system developers, processor developers, and the OEMs who build the devices. So it’s really critical to articulate what problem we solve and to work together with them to make sure we build a product that fits into a solution that significantly addresses the limitations these computing devices face.

AIM: Are you considering hiring professionals for the business side of things to enhance your marketing strategies and create brand recall value, given the innovative nature of your product and the need to scale it successfully with the funds you’ve raised?

Naveen Verma: I am one element of the EnCharge operation, and my role is very limited when you think about all the things EnCharge needs to do. We have a very robust team doing the product development; that’s a team of engineers. But as you pointed out, we also have a critical team doing product strategy and product marketing. You need to start with a fundamental problem, then figure out whose problem it solves, and hopefully that’s an important enough base of people.

Articulating all of that through product strategy and positioning is critical. We now have a team of over 50 people in the company, and a large number of them are very focused on working with customers and partners to make sure they understand the role this technology plays, so that we can demonstrate its impact. From there, we move to a phase where we want to develop broad awareness, so that there’s brand recall value as well. But I’m a small piece of this operation.

AIM: How do you plan to differentiate your product in the crowded AI accelerator market? If your approach is similar to competitors, how will you effectively communicate your unique selling points to stand out?

Naveen Verma: The place all of this starts from is that we have a fundamentally differentiated technology called analog in-memory computing. If you look at AI models today, they’ve gone in two directions: they’ve exploded in the number of operations we need to do, and they’ve exploded in the amount of data those operations run on. We need to solve both of those problems simultaneously. In-memory computing is one of the very rare architectures that can: it keeps the data in memory to solve the data movement problem, and it uses analog techniques, as opposed to standard digital techniques, to do the compute in ways that are very efficient and high-density, so we can fit the compute inside the memory. This is very differentiated. A lot of people are working on all sorts of digital technologies, but those are the technologies that have always been around, and we’re limited by the physics of what they can do.
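To see why keeping weights stationary in memory matters, a rough back-of-envelope model helps. The sketch below is illustrative only: the per-byte DRAM and per-MAC energy figures are hypothetical order-of-magnitude placeholders, not measurements of EnCharge’s or any specific chip.

```python
# Illustrative estimate: for a large matrix-vector workload, data movement
# (streaming weights from off-chip memory) can dwarf the arithmetic itself.
# Both energy constants below are assumed round numbers for illustration.

DRAM_PJ_PER_BYTE = 100.0   # assumed energy to fetch one byte from off-chip DRAM
MAC_PJ = 1.0               # assumed energy for one 8-bit multiply-accumulate

def layer_energy_pj(rows: int, cols: int) -> tuple[float, float]:
    """Energy to stream an INT8 weight matrix from DRAM vs. to compute with it."""
    move = rows * cols * DRAM_PJ_PER_BYTE   # each weight byte fetched once
    compute = rows * cols * MAC_PJ          # one MAC per weight
    return move, compute

move, compute = layer_energy_pj(4096, 4096)
print(f"data movement: {move/1e6:.1f} uJ, compute: {compute/1e6:.1f} uJ")
```

Under these assumptions, movement costs about 100x the arithmetic, which is the gap an in-memory architecture targets by never moving the weights at all.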

With analog in-memory computing, we’ve known for decades that analog can be 100x more efficient and 100x more dense. The problem is that analog can be noisy. The big problem that EnCharge and our fundamental technology solved is giving us an approach to do very precise, scalable analog compute, so we can start to gain these order-of-magnitude advantages. So we’re doing something fundamentally different, and what’s important is that somebody needs to care about the advantage this gives you, and then we need to put it into an entire product where they can actually benefit from that advantage. We spend a lot of time having those conversations with the partners who are really driving the forefront of AI: the ones who need that high efficiency and high performance, who are truly trying to deliver that user experience at the forefront. We work with them to understand what the needs are today, what the future needs are, and how to build a platform that is programmable so that AI and AI developers can continue to innovate. The pace of innovation has been the critical lifeline for AI in the last 10 years, so we need to keep up with that pace so this hardware platform can go where the developers and the markets need it to go. That’s really key for us: having those conversations, making sure folks understand that, and understanding from them how to deliver the right product.
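The noise problem described above can be made concrete with a toy simulation. This is a minimal sketch, not EnCharge’s actual charge-domain design: it models an analog matrix-vector multiply as the ideal result plus additive Gaussian noise, followed by a simplified ADC quantization step, and shows how output error grows with analog noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def analog_matvec(W, x, noise_std, adc_bits=8):
    """Toy analog MAC model: ideal matvec + per-output Gaussian noise,
    then uniform quantization standing in for an ADC readout."""
    ideal = W @ x
    noisy = ideal + rng.normal(0.0, noise_std, size=ideal.shape)
    # Quantize to adc_bits over the observed range (simplified ADC model).
    scale = (np.abs(noisy).max() + 1e-12) / (2 ** (adc_bits - 1))
    return np.round(noisy / scale) * scale

W = rng.standard_normal((64, 256)) / np.sqrt(256)
x = rng.standard_normal(256)
exact = W @ x

for noise in (0.01, 0.1, 1.0):
    err = np.linalg.norm(analog_matvec(W, x, noise) - exact) / np.linalg.norm(exact)
    print(f"noise_std={noise:>4}: relative error = {err:.3f}")
```

Low noise leaves the result close to the exact digital answer, while high noise swamps it, which is why a precise, low-noise analog scheme is the prerequisite for capturing the efficiency and density advantages.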

AIM: What are your projections for the future of EnCharge AI in the next one, five, and ten years? How do you envision your company evolving both in terms of scaling the business and advancing the technology?

Naveen Verma: Broadly speaking, we feel very privileged to have a fundamental technology that can address a very broad range of AI applications, but also a team that can deliver that technology. We intend over time to do that as much and as widely as possible. I think that’s what’s required to make AI fully, maximally available in all of the cases and to all of the people who need it, but this gets done in steps.

What we want to do is look at the biggest opportunities where our technology can most immediately have the maximum advantage, start with those, and then go from having built that out as a product to augmenting it with additional capabilities, features, and partnerships, so that we can address the next biggest opportunity and progress in that way. The first big opportunity we see for EnCharge is what we’re very focused on in the next 12 months. Our product roadmap is really about delivering solutions where AI can scale out of the data center into client devices, mobile devices, and edge platforms, so that, first of all, by the sheer number of these devices, AI achieves scalability.

Secondly, because of the proximity of these devices to humans and the various tasks we do with them, the user experiences and capabilities, the ways in which we can benefit from and leverage AI, expand and open up. So these are the immediate focuses for us. If you start to think about these mobile and client devices, there’s a very well-established ecosystem; we’ve been relying on these devices for many years, and there’s a cohort of players delivering them.

The idea now is to take EnCharge’s offerings and advantages and introduce them into these devices so that their impact can grow through AI. The aim for us is to work closely with those ecosystem partners to deliver AI very broadly to this large market category, which faces major power-efficiency, performance, and cost challenges that we think our fundamentally better solution can address.

Anshika Mathews
Anshika is an Associate Research Analyst working for the AIM Leaders Council. She holds a keen interest in technology and related policy-making and its impact on society. She can be reached at