Today a critical challenge has emerged: how to develop AI systems that are not only powerful and efficient but also responsible and beneficial to society. As AI becomes increasingly ubiquitous, touching every aspect of our lives, the need for responsible AI development has never been more urgent. The problem lies in balancing technological advancement with ethical considerations, ensuring AI systems understand and choose to act ethically rather than simply being restricted from harmful actions. This challenge is further complicated by the diverse interests of stakeholders, from corporate entities focused on monetization to researchers striving for open and transparent AI models. In this thought-provoking interview, Risto Miikkulainen, a seasoned AI researcher and innovator, delves into these complex issues and offers his unique perspective on the path forward for responsible AI with Kashyap Raibagi, Associate Director – Growth at AIM Research.
Miikkulainen, a Professor of Computer Science at the University of Texas at Austin and VP of AI Research at Cognizant Advanced AI Labs, has been at the forefront of AI development for over three decades. Here, he shares his journey from being inspired by the iconic HAL 9000 in “2001: A Space Odyssey” to his current work on cutting-edge AI technologies.
With a blend of optimism and pragmatism, Miikkulainen tackles the complex issue of responsible AI development. He argues for a shift in our approach: instead of merely imposing restrictions on AI systems, we should focus on creating AI that understands ethics and chooses to act responsibly. This perspective opens up exciting possibilities for AI to become a powerful force for promoting equality and societal benefits.
Throughout the conversation, Miikkulainen delves into several key aspects of responsible AI development:
Key Takeaways:
- Responsible AI has evolved from focusing on bias prevention to actively promoting equality and societal benefits.
- Open AI models and collaborative research communities are crucial for advancing responsible AI development.
- There’s a delicate balance between business interests and the need for open, ethical AI systems.
- AI for good projects should focus on practical, ground-level applications that empower individuals rather than just high-level policy recommendations.
Kashyap:
Thank you so much for being here, Risto Miikkulainen.
You’ve been with Cognizant for a while now, and your research has been widely discussed. I recently saw that you’ve been on Lex Fridman’s podcast, so it’s quite an honor to host you. Before we start, could you tell our audience about your journey in AI and what fascinates you about it?
Risto:
Like many people of my generation who work in AI, I saw the movie “2001: A Space Odyssey”, and that just changed my life. I wanted to build machines like HAL, not machines that are schizophrenic or anything, but that was really the inspiration: that we could build a machine that would be intelligent, with a level of intelligence similar to humans, so you could have it as a companion, an assistant, and an equal, basically. And it took a long time to get into the field and learn the ropes. We are not quite there yet, but we are getting closer, much closer than I ever thought.
So I got my PhD in 1990 at UCLA, doing AI natural language processing with neural networks at the time, and I’ve been a professor at the University of Texas at Austin since then. I’ve worked on neural networks my whole career, but also on evolutionary computation, optimization, and discovery. In 2015, Babak Hodjat got in touch with me and was interested in using some of the technologies we developed at UT Austin in the real world. They had a startup called Sentient. I joined them, and we built several applications and made some progress in research. Cognizant then acquired us in 2018, we continued here, and now we are growing this lab by leaps and bounds. AI has actually exploded, and it turns out we are right in the middle of it, with exciting opportunities beyond our wildest dreams.
Kashyap:
You mentioned wanting AI to be a companion and equal. This reminds me of a recent conversation about the difference between a co-pilot and a coding assistant or chatbot. Some are looking at AI as a companion rather than just a servant. Considering the topic of responsible AI, how has your understanding of it evolved over the years, or has it stayed the same?
Risto:
AI is really difficult to build. Initially nobody even had a clue that it might someday cause trouble, because it was so hard to get it to do anything. So having to worry about how people use AI and what effect it has is relatively new. I was just recently on a panel with five people, and I was the only AI expert there; the others included somebody from the government, somebody from healthcare, and an artist. So many people now have to use AI, and it touches their lives. It has become a new problem entirely. We build this technology, and it is actually going to be out there in the world and change it. So what do we do with it?
It’s still a little unclear what responsible AI even means, because at first it was just about whether your system is biased, treating people differently for superficial reasons, and that’s a very narrow definition of the challenge. But I think we have an opportunity here, since AI is going to be everywhere in society, to make society better through AI. We can make AI systems aware of these issues: fairness, bias, various other aspects, sustainability, and so on. It can be an engine for promoting equality, because it can be aware of what the challenges are and act accordingly, even better than people. Humans have their own agendas. They have limited knowledge of what’s going on, and they make decisions based on that information. But AI can actually assimilate more information and, in many cases, be more objective than humans can.
So there’s an opportunity here, and that’s what I want responsible AI to be: not just something that imposes restrictions on what we do with AI and how we deploy it, but something that goes beyond that. We’ll have to build metrics for measuring effects, responsibilities, and harms, and then use AI to promote those good effects rather than just preventing harm.
Kashyap:
You have a more optimistic view on AI. You mentioned training AI to understand bias, but it’s humans training the AI who need to want responsible and fair AI. How do we ensure responsibility is ingrained in the systems we’re building and deploying, especially in critical areas?
Risto:
Of course, there are always bad actors, and people will use AI in not-so-nice ways. It’s like any other technology in that sense. But there’s a really crucial point I want to make: currently there’s a lot of effort in alignment, fine-tuning AI systems, LLMs for instance, so that they would never say anything bad, never tell anybody how to build a bomb or kill somebody or whatever the bad action might be, so that they are prevented from being able to execute something like that. And I don’t think that’s going to be good in the long term.
I would much rather trust an AI that can do that, understands that it can do that, and chooses not to do it. That is a big difference: being capable and on top of it, recognizing the dangers, deciding that certain actions are good and others are not, but at least being aware of all of those possible options. With that kind of AI, we then have the challenge of actually instilling our values in it, what we actually believe is the right thing to do, but the AI can then carry that out because it understands the different values and their implications.
Kashyap:
But AI isn’t built as one cohesive system. It exists in silos, with different people building AI for various purposes. How do you envision achieving this universal understanding of good and bad across all AI systems?
Risto:
Currently, when we are talking about LLMs, you cannot really build a capable LLM unless you give it a lot of training data, basically all the written material that exists. So it should actually have this general world knowledge upon which you can then specialize it onto something. It can be a math expert, coding expert, a philosophical reasoning expert, or maybe even some bad actor, but its foundation is still all of human experience that’s expressed in the written media.
Kashyap:
So you’re suggesting that as we move towards AGI, it will likely be built on foundation models currently developed by a few companies with corporate interests. How do we ensure responsible AI considers everyone’s interests in this scenario?
Risto:
These sophisticated AI systems most likely have to be built on foundation models, and that’s what I meant by training them with all the human written media. In that sense, they would share the same kind of common knowledge, and on top of that you can make them specialize in something. A big question here, and a big opportunity, is in making open systems where, for instance, the code, the data, and all the information about the model would be available. There are a couple of attempts that are open to a certain degree; LLaMA, for instance, is quite a bit more open than some of the others. And in academia there is an effort to pool resources so that it would be possible to train a model using resources similar to what industry has been using, and that model would then be open to any scientist.
And I think that is a really crucial effort. If we build these models and they are closed, and nobody can look at them or understand what’s in there and what’s happening, it’s not going to carry us further. We can’t build on it. If we have an open model that scientists everywhere can analyze and understand, they can look at its bias and fairness and other aspects of potential harm, and also at its understanding: does it actually understand the difference between right and wrong, good and bad? Those kinds of technologies can be developed if the models are open. Some models can still be closed, and if they’re really good at coding, for instance, people will use them. But for research and for ethical issues, I think we do need open AI models.
Kashyap:
It’s an interesting point that you bring up. The government is always trying to play a role in setting rules for AI, and at the same time there are researchers in the private sector who are constantly evolving, adapting to the newer changes, and building LLMs catered to business and other use cases. What’s the role of researchers in ensuring safe use of AI, especially with existing large language models built by the private sector?
Risto:
There’s another way in which this field has developed very differently from the past: there is one community of AI researchers and scientists, whether they are in industry or in academia. They go to the same conferences. They publish papers on arXiv, and now, pretty much, if you put a paper on arXiv, within three months somebody has taken it, built on it, and put another paper on arXiv. We should keep that kind of community going, one that openly discusses and exchanges ideas. I think this is part of the reason why this change has happened so quickly; it has been a tremendous period of open exchange of ideas.
Now, business interests can get in the way of that, but I think they can coexist. We can have this kind of open community and then have companies develop specialized tools that other people can use. But the challenge is obviously there that it could go the other way as well. And that’s why it’s so important that some of these large models are actually open, so we can do research on cutting-edge models instead of something that was cutting edge five years ago; otherwise we can’t draw the right conclusions, and that is still a challenge. We don’t have that many large language models that are completely open to researchers. And even if they’re open, they still have to be run on hardware, and that’s very costly, so that kind of infrastructure still needs to be built, but it is happening now.
So there are consortia being established to give people access to the level of compute they need to analyze these models and bring their science forward. I’m optimistic about it, but it is still a delicate balance, and we’ll have to see how it plays out. With innovation in Silicon Valley in general, that has always been the case: when people talk to each other, they become interested in a particular area. Here it has been taken to the extreme, with the same conferences and the same publication venues, so everybody knows what everybody else is doing. That has made it possible to build on innovation very fast.
Kashyap:
How receptive are big corporations to building responsible AI, especially when it might work against the monetization of their language models? Without naming companies, we have seen examples of famous AI scientists working on these issues getting fired or having their laptops taken away; these are some of the things that have happened in Silicon Valley.
Risto:
Even if you’re optimistic, you can’t be naive. There are conflicting interests, and those conflicts sometimes flare up. There’s a tremendous grassroots push towards responsible AI: the people who are scientists and developers really want to make the world better. That has always been the case, and then sometimes the business interests from the top don’t align, and, not quite riots, but at least very strong groups form and voice their concerns. Sometimes people have to go and are let go, and other times the company changes its policy and what it’s working on. I think this will probably continue. There are different interests, but as long as there’s enough growth and we find new opportunities for AI, and business opportunities as well, it is always better to invest in new technologies, development, and innovation and then reap the benefits, rather than put in restrictions and constraints to shelter and protect your innovation against competition instead of collaborating and building something even better.
As long as that kind of attitude is kept in check, we can actually make more progress, and more money, by innovating than by trying to protect and stifle innovation. There are people and interests and many stakeholders, and I think it’s going to be a constant process of checks and balances in how things progress. It has to make an impact eventually, and these companies will have to survive and make money. But right now it looks like there are opportunities for that. Just look at how fast it has happened, in only a couple of years. It usually takes years and years to build a technology, advance it, and have it adopted, even something like cell phones. How long did that take? An obvious technology, everybody should have it, but it took decades. This is different. The speed is different, and that also affects how these decisions are made. Sometimes we stumble, sometimes we make mistakes, but fortunately there are enough opportunities, areas, companies, and academic researchers now that somebody will make progress, and others will have to adapt.
Kashyap:
One approach that has often worked is incentivization. For example, when it comes to climate change, there are a lot of companies whose practices are harmful to the environment, so incentives were put in place to get them to cut their carbon emissions or invest in carbon-neutral projects. Could incentivization, similar to what’s been done for climate change, work for promoting responsible AI?
Risto:
I guess it’s possible: if you have companies that are behaving badly or against the common interest, they could be required to contribute to some AI-for-good projects, for instance ones tied to the United Nations Sustainable Development Goals. As I mentioned before, there’s a lot of grassroots enthusiasm for AI for good, and for climate change in particular. Even without much prompting, there are always people, researchers, forming these alliances, pushing this agenda, and trying to build better tools for AI for good. It’s really great that it’s happening from the bottom up, and because people are already motivated by the belief that this is the right thing to do, it’s most likely going to have an impact. So at least right now, I don’t think we really need government interventions or incentivization specific to that, because it’s already happening.
We do need some regulation, and regulations are coming out; actually, that’s something the companies themselves have been pushing, so that there would be some kind of checks that can be done, metrics to make sure that AI is deployed with some knowledge of what effects and harms it can have. That’s a little bit different, though. That’s more about keeping up with the technology, rather than working on something because it’s good for humankind. It’s a different kind of motivation.
Kashyap:
Of all the research you have done in the area, where should research be focused to ensure AI scales responsibly?
Risto:
The biggest challenges are really about people: we have to get government decision makers to understand the science and listen to it when it actually makes a difference. Those are much harder challenges, but there are technical challenges as well, and obviously one of them is trustworthiness. LLMs hallucinate, and we have to build a better understanding of how much we can trust these models. They need to estimate their own confidence, for instance, and give alternatives. That’s technology that needs to be developed, and lots of people are working on it. Maybe we’ll crack it, but it’s essential: in any deployment, you have to know how well you can trust the system.
And there are also now approaches that put multiple AI systems together, where maybe one of them suggests a decision and another one critiques it and tries to identify the risks and biases. In that kind of multi-agent interaction, we can come up with a better answer, one that has already been vetted by AI, and found to be responsible and perhaps trustworthy, before it’s actually implemented.
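To make the idea concrete, here is a minimal sketch of that propose-and-critique pattern. Everything in it is a hypothetical placeholder rather than any specific product’s API: the two stand-in “models” (propose_answer, critique_answer) simply return canned text, whereas in practice each call would go to a separately prompted LLM.

```python
# Minimal sketch of a propose-and-critique loop between two AI agents.
# The "models" below are hypothetical stand-ins; in a real system each
# function would call a separate (or separately prompted) LLM.

def propose_answer(question: str, feedback: str = "") -> str:
    """Stand-in for a proposer model: drafts a decision or answer."""
    suffix = f" (revised after feedback: {feedback})" if feedback else ""
    return f"Proposed decision for '{question}'{suffix}"

def critique_answer(question: str, answer: str) -> list[str]:
    """Stand-in for a critic model: returns identified risks or biases."""
    # A real critic would be prompted to look for harms, bias, and errors.
    if "revised" in answer:
        return []  # pretend the revision addressed the concerns
    return ["possible bias against group X", "unverified factual claim"]

def vetted_answer(question: str, max_rounds: int = 3) -> str:
    """Alternate proposer and critic until no concerns remain or rounds run out."""
    feedback = ""
    answer = ""
    for _ in range(max_rounds):
        answer = propose_answer(question, feedback)
        concerns = critique_answer(question, answer)
        if not concerns:
            return answer  # vetted: the critic raised no remaining issues
        feedback = "; ".join(concerns)
    return answer  # best effort after max_rounds

if __name__ == "__main__":
    print(vetted_answer("Should we approve this loan application?"))
```

The design point of the loop is simply that the answer which gets acted on is the one the critic no longer objects to, which is the “vetted by AI before it’s implemented” step Miikkulainen describes.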
Kashyap:
What’s the role of “AI for good,” and how can every AI be viewed through this lens?
Risto:
Most of the time it means projects that have, not necessarily a business objective, but something that helps humankind in general. Those are projects where, as I said, a company might build something that it sells to help businesses do better, and then another project might be about making data available so that we can get better medical diagnoses, or distribute food more equally or efficiently, or whatever it is. In my opinion, when we talk about AI-for-good projects, they don’t necessarily have an immediate monetary value, but they clearly have a societal value. And it’s great if we can get private companies to contribute to those as well, so they wouldn’t have to be, say, government run. Often, though, the data is only available in a very high-level, aggregate form.
When we build these systems, we actually have to reach the people who use them. That’s true of all AI applications: they should make somebody’s work easier and empower them, rather than trying to replace them or making recommendations that are impossibly high level. A politician wouldn’t necessarily follow such a recommendation, because the decision is so complicated to make. So we have to understand what the decision makers are struggling with and what power they have to change things, and then AI should operate at that level, which means we have to get data from that level. A lot of these AI-for-good projects, at least the initial ones, are perhaps too high level. It’s United Nations data at the level of states, and it doesn’t really get down to who is actually irrigating the crops. So there’s a lot of work to be done in understanding where an effect can actually be had, and getting the data so that we can build systems that help actual people.
Kashyap:
Thank you so much, Risto, for making the time. It was a fun conversation.