What does the AI revolution mean for our future?

The AI revolution offers immense potential, demanding proactive, responsible approaches to shape a future of opportunities and challenges.

The AI revolution marks a pivotal shift in our future, impacting technology, the economy, healthcare, and education. In technology, AI advances bring efficiency but also raise privacy concerns. Economically, automation is transforming the job landscape, necessitating workforce reskilling. Healthcare benefits from AI in diagnostics but faces ethical questions about data privacy. AI-driven personalized learning is transforming education, yet challenges surface around content shaping and equity. Governance and policy become critical to balancing innovation and regulation, addressing issues of bias and concentration of power.

We held a roundtable discussion on the topic with a set of experienced and distinguished industry leaders. The session was moderated by Kashyap Raibagi, Associate Director – Growth at AIM, and featured panelists Suresh Martha, Head of Data Driven Innovation & Analytics at EMD Serono, Inc.; Inderpreet Kambo, Principal (Partner), Commercial Technology & Analytics at IQVIA; Kiran Kanetkar, Vice President – Enterprise Technology, Data and Analytics at Pendulum Therapeutics; and Joe Kleinhenz.

The Evolution’s Impact on a Technologist’s Perspective

Prior to being in the healthcare space, I was working in the finance industry. Back in about 2010 or 2011, more time was spent on data analytics and less on AI, at least at the companies I worked for. But that has changed over time. Starting around 2014 or 2015, we began adopting AI in our regular analytics as well. That's when I started with the pharmaceutical company, and the first thing we designed with AI was trying to find look-alike customers.

Because we are in the healthcare space, it's very important to know our providers. So we mostly started with predicting who our current prescribers are, what attributes we can leverage, and then who our look-alike customers should be. That's when we started adopting AI into our normal business operations. We have done a ton of work in the AI space since then, but I wanted to give that example.

On generative AI, we have taken baby steps for now because of the data and privacy concerns that are there, and because hallucination is there. As a pharma company, we have to be very careful about any data use case, so our evolution on GenAI is very limited right now. We have our own GPT, which is obviously under a licensing agreement with OpenAI, but still within our own boundaries, and its application is limited to drafting emails and generating some content. It has not been adopted at the pace we wanted because of those concerns. If you've seen the news, a lot of companies are right now putting boundaries on GenAI, making sure it's internal and that the data is not used to train the model. Once that evolution comes into play, I expect there will be a lot more adoption even within our company.

Suresh Martha, Head of Data Driven Innovation & Analytics at EMD Serono, Inc.
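The look-alike modeling Martha describes can be sketched very simply: score prospective customers by how closely their attribute profile resembles the average profile of known prescribers. The following minimal sketch uses cosine similarity; all attribute names, identifiers, and values are hypothetical illustrations, not EMD Serono's actual method.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length attribute vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def look_alikes(prescribers, prospects, top_n=2):
    """Rank prospects by similarity to the mean prescriber profile."""
    dims = len(next(iter(prescribers.values())))
    centroid = [sum(v[i] for v in prescribers.values()) / len(prescribers)
                for i in range(dims)]
    scored = [(pid, cosine(vec, centroid)) for pid, vec in prospects.items()]
    return sorted(scored, key=lambda t: t[1], reverse=True)[:top_n]

# Hypothetical attribute vectors: [patient volume, specialty match, region score]
prescribers = {"hcp1": [9.0, 1.0, 0.8], "hcp2": [7.0, 1.0, 0.6]}
prospects = {"hcp3": [8.0, 1.0, 0.7],   # profile similar to prescribers
             "hcp4": [1.0, 0.0, 0.1]}   # dissimilar profile

print(look_alikes(prescribers, prospects, top_n=1))
```

In practice a supervised model (e.g. a classifier trained on prescriber attributes) would replace the centroid heuristic, but the idea is the same: learn what current prescribers look like, then rank everyone else by that profile.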

Unveiling AI’s Unprecedented Impact on Humanity’s Future

There is a Hong Kong-based startup that uses AI and advanced neural network models to expedite the whole process of drug discovery. Using a system called generative tensorial reinforcement learning, or GENTRL, they designed a fully functional drug molecule for treating fibrosis in a mere 21 days. The model found not one but six distinct inhibitors of DDR1, a kinase target implicated in fibrosis and other diseases, and one of those molecules also showed very promising results in further studies involving mice.

Now compare this with years of pre-clinical research that generates many molecules, of which only a handful pass on to the next phase after many years and millions of dollars of lab testing involving complex, largely manual biochemical and biological tests. This discovery reduced that process from many years to around 21 days. Around nine months later, after successful results in preclinical studies, the company initiated a first-in-human (FiH) study in healthy volunteers to establish a proper and safe dose for the molecule.

Such technologies and advancements bear the hope that not only will we be able to find treatments for diseases that have so far eluded mankind, but the way we perform clinical trials will also be fully transformed. Transformative technologies such as these will pave the way for AI-driven approaches to bring significant social and economic impact to our society.

Inderpreet Kambo, Principal (Partner), Commercial Technology & Analytics at IQVIA

Influence of AI Milestones on Industry Perceptions

When it comes to ChatGPT and large language models, it is definitely pretty entertaining for a lot of people: you can really interact with something and get random questions answered, and it works well with language. But when it comes to businesses, that is where I feel there are some challenges. When you have business decisions to be made, business data needs to go in, and obviously you don't want to expose your business data to an open model like ChatGPT; you're not going to put your data into the public domain. That's what most companies are trying to figure out. Creating your own instance of a large language model in your own infrastructure is pretty costly: OpenAI themselves spent around 100 million, with a lot of different engineers, to build their foundational model. Not every company can do that, so you are going to rely on some of these existing models. The challenge then is how to use these models privately, and that's where a lot of companies are trying to figure things out. So there is a lot of buzz around ChatGPT itself and how to use large language models for the different things they can do.

But practically, what are some of the use cases that different companies can implement? There are a few good examples. We have already had a lot of predictive models for the last few years using machine learning, but where it is going next is really how to apply some of these models to your private data, without exposing that data to the outside world, and then use them to automate business decisions. We will definitely need a human in the loop, at least for some time, because there is the hallucination problem that everybody is aware of. But those humans can become more productive with some of these technologies. A good example is something like GitHub Copilot, which can generate code: a developer today can be much more productive by generating code instead of hand-coding every line of it. So there are definitely some low-hanging fruits to be had, which I think more companies will try to grab onto immediately. That is where there is a lot of promise, and as each company becomes more mature at using these tools, we will uncover high-value use cases that in the long term will provide a lot of different benefits.

Kiran Kanetkar, Vice President – Enterprise Technology, Data and Analytics at Pendulum Therapeutics
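One simple guard-rail for the concern Kanetkar raises, relying on externally hosted models without exposing business data, is to redact obvious identifiers before a prompt ever leaves your infrastructure. The sketch below is a minimal illustration of that pattern; the regex patterns and placeholder labels are illustrative assumptions and nowhere near a complete PII solution.

```python
import re

# Illustrative patterns only; production systems use dedicated
# de-identification tooling, not three regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt):
    """Replace obvious identifiers with placeholders before the prompt
    is sent to any externally hosted model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Contact jane.doe@example.com or 555-123-4567 about claim 123-45-6789."))
```

A redaction step like this sits in front of the model API call, so even if the prompt is logged by the provider, the identifiers never leave your boundary.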

Crafting a Framework for Evaluation Across Industries

As a leader in the enterprise, ethical use of AI is always at the forefront of my mind, more so with GenAI. There’s been a lot of talk about restricting use and access to GenAI. I believe at a societal level, it’s too late, the cat’s out of the bag. You can’t put that genie back in the bottle. There’s nothing they’re going to be able to do because even if one country says, ‘No, can’t use generative tools in this country,’ other countries are going to let it happen. There’s way too much potential. I think the risks we have are that most of these LLMs were built on data from the internet. The companies hoovered up all the data they could out of the internet to train these large language models. There’s a ton of bias and hate sitting in that data, and that’s something that I always look at, even for our normal ML models. Do we have bias in the data? Getting controls around that is going to be critical.

I do think that we’re going to see a much larger regulatory push into the entire data science space. Recently, President Biden signed that executive order around AI. I think that’s just the tip of the iceberg. In regulated financial industries, the regulators are starting to ask, ‘Please tell us everywhere you use any type of model.’ The answer quickly becomes, how much time do you have? That’s a big question. I think the responsible use of artificial intelligence is critical. I also think, again going to a societal level, there’s a risk of creating a Have and Have Not Society. The people that have access to artificial intelligence tools and intelligent agents are going to be so much more productive than those who don’t. You’re just going to see a widening of that of the gap we have today.

Joe Kleinhenz

Considering the wave of next-gen technologies including AI, GenAI, and ChatGPT, securing patient data becomes paramount. During deployment of these technologies, all life science technology practitioners are deeply focused on data protection. Take the example of rare diseases: by definition, these affect very small populations (in the US, fewer than 200,000 patients). Rare disease patient analytics is one instance where the absence of guard-rails could allow this sparse patient population to be re-identified.

This re-identification may bring harmful consequences for the individual, for example discrimination against such patients, including but not limited to insurance denials. There is not an iota of doubt that there are serious potential risks associated with handling personal healthcare data, and hence these technologies must be bound to the ethical task of managing these risks and safeguarding the trust of those whose data they utilize.
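A common guard-rail against the re-identification risk described above is a k-anonymity check: every combination of quasi-identifiers (fields that are not direct identifiers but can single someone out in combination) must be shared by at least k records before the data is released. The sketch below is a minimal illustration; the field names, values, and threshold are hypothetical.

```python
from collections import Counter

def violates_k_anonymity(records, quasi_identifiers, k=5):
    """Return the quasi-identifier combinations shared by fewer than k
    records; any such group risks re-identifying individual patients."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return [combo for combo, count in groups.items() if count < k]

# Hypothetical rare-disease cohort with coarsened fields
records = [
    {"zip3": "021", "age_band": "40-49", "dx": "rare_disease_x"},
    {"zip3": "021", "age_band": "40-49", "dx": "rare_disease_x"},
    {"zip3": "990", "age_band": "70-79", "dx": "rare_disease_x"},  # unique combo
]

print(violates_k_anonymity(records, ["zip3", "age_band"], k=2))
```

Groups flagged by this check would be suppressed or further generalized (wider age bands, shorter ZIP prefixes) before the dataset is used for analytics.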

The Biden-Harris Administration's announcement of a working group to tackle the risks of rapidly advancing generative AI is very appropriate. Just within the last few years, hundreds of drug and biologic submissions to the FDA have included AI or ML in one or more developmental stages. The executive order has special significance in that it directs HHS to set up a task force to develop a strategic plan establishing policies, frameworks, and guiding principles for predictive and generative AI capabilities. There is a mandate to develop AI assurance policies to ensure the quality of AI-enabled technologies in healthcare overall. The policy, as we spoke about earlier, will help in monitoring AI performance against real-world data.

Inderpreet Kambo, Principal (Partner), Commercial Technology & Analytics at IQVIA

The Debate on Ban-Worthy AI Use Cases

Even if you have bans in place, there are always bad actors who will use the technology anyway. You see that in cybersecurity: there are already laws under which people get prosecuted for hacking, yet there are thousands of hackers who do it. So there will be some bad actors who will use this technology in various ways. That is where most companies and individuals will have to think about how to safeguard themselves against some of these issues that are going to crop up down the line.

Kiran Kanetkar, Vice President – Enterprise Technology, Data and Analytics at Pendulum Therapeutics

From a business perspective, I have seen that most companies now have an AI ethics team, which includes legal experts, just to go over each use case: to make sure there are no loopholes and that the company is protected. So I would not say AI is going to be banned, but use cases will be strictly evaluated to decide whether AI should be applied to that particular problem.

Suresh Martha, Head of Data Driven Innovation & Analytics at EMD Serono, Inc.

AI’s Impact: Navigating the Future Landscape Together

The emergence of AI-enabled workforce ecosystems has many far-reaching implications, bringing together the operational and experiential realities of the "next phase of normal" and changing how we do our day-to-day work. On that note, we are seeing leading organizations promote an AI-ready workforce through a multitude of initiatives. From talent acquisition and retention all the way to training, upskilling, and rotation programs, we are seeing conscious efforts made by organizations to stay ahead of the talent need.

These leading organizations are working tirelessly to put plans in place to assess current skill levels and identify gaps. They are consistently trying to identify areas where AI could augment or substitute tasks. Once these voids are identified, organizational leaders develop tailored training programs that embrace AI in employees' roles and responsibilities. However, only teams that foster a culture of lifelong learning will outshine those doing this as a tick-in-the-box exercise. And finally, it goes without saying that a strong organizational culture is the critical layer that distinguishes organizations that will successfully embrace this wave of technology from those that tried but failed.

Inderpreet Kambo, Principal (Partner), Commercial Technology & Analytics at IQVIA

I think the breadth of what this can do is going to continue to grow. We've talked about using GenAI on voice data: you've got the text aspect, but now you have a voice aspect too. The industry is moving into visual GenAI, where you can feed it photos, have it tell you what's in the photo, and ask questions about it; video is being worked on right now. It's almost like adding senses to it. I think over time it's not going to be as good or as bad as everyone thinks. We're going to figure it out.

I think from a societal standpoint, things are going to be bad for a while. It's going to be hard for the common individual to discern reality from AI-generated content, and there are a lot of bad use cases there that bad actors can exploit and are starting to get their arms around. That's going to drive a lot of the legislative and regulatory pieces we're going to see put in place. But to the earlier point, bad actors are always going to be bad actors. I think it's going to be rough going for a while until we adjust to this new normal.

If you put a longer-term lens on this and ask what it means over the longer horizon, we're going to have to rethink business in general. It might sound dramatic, but I think this is almost like when businesses went from not having electricity to electrification. It's a sea change, and the companies that can ride that change and adapt to it quickly are going to be the winners. The ones who just dabble and play in it are going to get left behind, because that productivity boost is so large.

Businesses need to retool the enterprise to fully leverage AI and make it a first-class citizen, the base fabric of the enterprise. 

Joe Kleinhenz

The potential here is very big. Companies are taking baby steps just to make sure all those concerns have guardrails in place to alleviate them. One use case I see picking up very fast is the content-creation side of things: any kind of content creation, whether for marketing campaigns, writing a letter, or any kind of documentation. Another use case we are evaluating, obviously for a future stage, is a patient or provider chatbot to answer some of the low-hanging-fruit questions so that the real agents, the humans, can focus on bigger problems.

The other thing we are evaluating for our future: we're a big Tableau shop when it comes to data and analytics, and Tableau is coming up with Tableau GPT, which is essentially built on a licensing agreement with OpenAI, with an agreement in place that OpenAI cannot use the data. So what we are thinking of doing in the future is, instead of building a lot of dashboards, letting individuals ask questions and get results, like in ChatGPT. That's where at least my company is heading from a future perspective. In general, from an outside perspective, it has huge potential to pick up very fast as some of these issues are resolved or the concerns are eased a little bit.

Suresh Martha, Head of Data Driven Innovation & Analytics at EMD Serono, Inc.
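The "ask questions instead of building dashboards" pattern Martha describes typically translates a natural-language question into a structured query over governed data. The toy sketch below illustrates only the shape of that pipeline: simple keyword matching stands in for the translation step that a model like Tableau GPT would perform, and the table, metrics, and matching rules are all hypothetical.

```python
# Hypothetical in-memory "data source"
sales = [
    {"region": "east", "revenue": 120.0},
    {"region": "east", "revenue": 80.0},
    {"region": "west", "revenue": 50.0},
]

def answer(question, rows):
    """Map a question to an aggregation over the rows.
    A real system would have an LLM emit SQL here; keyword
    matching is a stand-in for that translation step."""
    q = question.lower()
    values = [r["revenue"] for r in rows
              if r["region"] in q or "all" in q]
    if "average" in q:
        return sum(values) / len(values)
    return sum(values)  # default aggregation: total

print(answer("What is the total revenue in the east region?", sales))  # 200.0
```

The interesting engineering is in the translation step and its guardrails (schema grounding, query validation), not the aggregation itself; the dashboard becomes an on-demand answer rather than a pre-built artifact.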

There are a lot of challenging use cases we will have to tackle over time to make sure bad actors don't use this technology in a bad way. But there are definitely a lot of use cases where individuals as well as companies can benefit, and those are the ones we have to really focus on, to show that this technology can benefit humanity in many different ways while balancing against the bad actors. Things like drug discovery can have a lot of good outcomes; those are really the plus points. The plus points will obviously outweigh the negative points, but we have to make sure we have a solution for the bad actors and the negative ways the technology can be used.

Kiran Kanetkar, Vice President – Enterprise Technology, Data and Analytics at Pendulum Therapeutics

Opinions expressed here are solely the authors' personal views and do not represent the views or opinions of their employers.
