As we step into the year 2024, the realm of artificial intelligence (AI) continues to evolve at an unprecedented pace, reshaping industries, augmenting human capabilities, and pushing the boundaries of what was once deemed possible. The year ahead promises to be a pivotal moment for AI, marked by transformative trends that are set to redefine how we interact with technology and each other. From advancements in machine learning and natural language processing to the integration of AI into various aspects of our daily lives, the landscape is dynamic and multifaceted. In this rapidly evolving environment, staying abreast of the emerging trends in AI becomes not just a choice but a necessity for individuals and organizations alike.
We held a roundtable discussion on the topic with a group of experienced and distinguished industry leaders. The session was moderated by Kashyap Raibagi, Associate Director – Growth at AIM, with panelists Rajvir Madan, Chief Digital and Information Technology Officer at Arcutis Biotherapeutics; Carolyn Duby, Field CTO and Cyber Security GTM Lead at Cloudera; Anil Prasad, Vice President of Technology & Engineering at Cloudmed; Krishna Cheriath, Chief Data & Analytics Officer at Zoetis; Srini Tanikella, Vice President, Information Technology at SMART Global Holdings; Amitabh Mishra, Executive, Data & Analytics, US Pharma & Oncology at Novartis; and Vinod Malhotra, Senior Vice President of Engineering at BlackLine.
Trends Paving the Path to 2024 and Beyond
I look at this question first from a business lens. While technology is important, the key question for me is how you apply the technology to a set of business problems. So the first trend I think about extensively is what problem AI (whether Gen AI or narrow AI) is trying to solve. I talked to a group of peer CIOs across multiple industries the other day, and I asked them to tell me about all of the AI use cases they have. Then we discussed which of those use cases are generative AI, and only one person put their hand up. So what’s been missing, for me, is the bridge between the technology out there and the use cases – how we apply that technology within each of our domains and industries. That’s one of the big trends I see: I think more and more work will be done next year to figure out how we apply that technology to various business problems. As Einstein once said, if I had 60 minutes to solve a problem, I would spend 55 minutes thinking about the problem and 5 minutes solving it. Recently we have spent more time creating solutions; in the future I see us focusing more on defining the problems those solutions will help with.
The other big one I think about is building trust in many of our AI models, and that cuts across multiple dimensions – data quality issues, and thinking about how you open up a model and make it visible and explainable. There will likely be a big trend around building more trust in AI models, reducing their hallucination rates, and making them less prone to misinformation. Those are the two big trends I think about.
– Rajvir Madan, Chief Digital and Information Technology Officer at Arcutis Biotherapeutics
Key Catalysts Shaping AI Development for 2024 and Beyond
I have serious whiplash from the pace of adoption of generative AI, even in highly regulated industries like financial services. A digital system that speaks your language is very compelling, and leaders feel pressure to deliver value with generative AI or face disruption from smaller, more nimble startups. Organizations with mature data science practices and scalable platforms are already equipping their workforces with new capabilities based on generative AI. One of our APAC customers, OCBC, delivered three major new capabilities to their employees securely in their private cloud: a chatbot service, a system that improves customer experience by analyzing customer calls conducted in multiple languages, and a code copilot tailored to their coding standards.
That said, generative AI is a novel and developing area with unsolved problems. Examine your digital transformation initiatives and identify use cases that deliver significant value safely and securely. Risks include untrusted or inaccurate training data, sensitive data leaks, inaccurate responses leading to costly errors, and copyright-infringement legal action. Seek out diverse viewpoints regarding safety and engage the workforce in the design and deployment of AI-powered services.
– Carolyn Duby, Field CTO and Cyber Security GTM Lead at Cloudera
Navigating Challenges in Gen AI Implementation
I recently led the implementation of GPT-3.5 Turbo, and our approach centered primarily on context generation, summarization, and code-generation assistance. The larger landscape and roadmap around LLMs had to be rigidly defined after uncovering the potential of LLMs. We implemented swiftly, aligning with the more exploratory mindset prevalent in the industry. Starting with a basic framework and navigating unforeseen challenges has been the current phase of our journey. As part of our efforts, we introduced Copilot as a feature for code auto-generation, but a significant industry challenge we encountered centers on data. Quality, ethical, and unbiased data is paramount for the effective functioning of models like ChatGPT or Gen AI. Recognizing this, we’ve reshaped our data strategy and architecture, ensuring that our data is in its most usable state for AI applications.
Looking ahead, we’ve realized the intricacies involved in model creation and have decided against building a base model from scratch due to high computing and cost requirements. Instead, our focus is on identifying existing base models and fine-tuning them to meet our specific needs. This approach has unveiled a skills gap within our team, prompting us to train engineers in the art of model fine-tuning using tools like Python and other relevant technologies. We’ve also acknowledged the human element in this process, recognizing the need for accurate responses in fine-tuning. Consequently, we are expanding our team to include more individuals in the intervention process for Gen AI. In summary, our focus areas for the upcoming year include addressing data challenges, honing fine-tuning skills, and expanding our team’s expertise in these critical areas.
– Anil Prasad, Vice President of Technology & Engineering at Cloudmed
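The "context generation" step Prasad describes can be illustrated with a small, self-contained sketch. This is a hypothetical example, not Cloudmed's actual implementation: it greedily packs the paragraphs of a long document into chunks that stay under a token budget, so each chunk could then be sent to a model such as GPT-3.5 Turbo for summarization. The 4-characters-per-token heuristic and the 3,000-token default budget are illustrative assumptions, not parameters of any API.

```python
# Hypothetical sketch: chunking a document to fit an assumed LLM context
# window before summarization. Heuristics here are illustrative only.

def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def chunk_for_context(paragraphs: list[str], token_budget: int = 3000) -> list[str]:
    """Greedily pack paragraphs into chunks that stay under token_budget."""
    chunks, current, used = [], [], 0
    for para in paragraphs:
        cost = estimate_tokens(para)
        # Start a new chunk when adding this paragraph would exceed the budget.
        if current and used + cost > token_budget:
            chunks.append("\n\n".join(current))
            current, used = [], 0
        current.append(para)
        used += cost
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

In a real pipeline, each chunk would be sent to the model with a summarization prompt, and the per-chunk summaries combined in a second pass; a production system would also use the model's actual tokenizer rather than a character-count heuristic.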
Addressing Scalability and Use Case Expansion for 2024
On the spectrum from AI optimists to AI pessimists, I am an AI realist. In my current company, we had an active debate early on about whether this was a digital distraction or a promise. Should we care about this as a company? Is it crucial to our strategy as an animal health company? We concluded that it is, and then underwent a rigorous value-identification exercise. We didn’t want a thousand flowers to bloom, or “pilotitis”. So we said, let’s focus on what matters most for us as a company and for our customers, and we identified seven priority areas on which we would focus our Gen AI efforts and created an enterprise Gen AI program endorsed by our C-suite. Then we created an internal Gen AI venture fund that invests in promising proofs of concept aligned to the seven use cases, with clear value and technical evaluation criteria. If a proof of concept is successful, the internal venture fund will fund the pilot phase. If the pilot is successful, we will develop a full business case to scale and sustain it and bring it forward for approval and investment. Currently we have several proofs of concept underway. To make sure that we are running the proofs of concept at the right balance of value and risk, we created an AI governance council consisting of the CDAO, CISO, CIO, Head of Privacy, Head of Intellectual Property Protection, and HR. The council reviews every AI project to ensure that we are doing the right things to protect our customers and our company, maintaining our digital trust and security, and taking the appropriate steps for employee skills development and adoption.
My feeling about the Gen AI topic is that, first, this is a transformational moment similar to the introduction of the smartphone or the rise of e-commerce. I think the companies that will differentiate themselves are those that can identify which use cases will be unique Gen AI-driven competitive advantages and which will just be the cost of doing business in years to come, where they may derive efficiency and cost gains. Some use cases, like Gen AI-driven marketing content generation and translation, will certainly deliver cost reductions, but they will simply be the way marketing campaigns are done in the future.
Second, companies should take a lean-in-with-humility approach, because there is a lot we don’t know yet. We are still in act one of an emerging play. The Gen AI innovation landscape will evolve, with startups that will come and go, big-tech evolution happening fast, and policies and regulations that will emerge. So we must be prepared to experiment in a cloud of uncertainty. Companies should lean in, figure out the value proposition of Gen AI for them, and then adopt a mindset of experimentation, fast evolution, and pivots. This is easy to say and put on a few PowerPoint slides but very hard to do in mature organizations. When new innovations emerge in the market and Gen AI technologies shift and evolve, can you change what you are doing, pivot, and adapt? If regulations emerge, can you implement compliance fast?
Hence, what companies need to get right for Gen AI experimentation and implementation is: (1) a clear value focus and prioritization based on your competitive advantages; (2) a robust AI governance strategy with an MVB (Minimum Viable Bureaucracy) mindset; (3) fast experimentation with the agility to respond and adapt to the changing Gen AI landscape; and (4) a solid strategy for scale and adoption, with a priority on employee skill development.
– Krishna Cheriath, Chief Data & Analytics Officer at Zoetis
AI in Pharmaceuticals: Navigating Opportunities and Challenges
I have been in the pharmaceutical space for a number of years, and I would say that, compared to other verticals, pharmaceutical companies have good free cash flow most of the time and are often quite comfortable trying out new technologies, with one caveat: because you’re in the healthcare industry, there are a lot of regulations you have to keep in mind. In that context, AI has its challenges around misinformation, misuse, and lack of trust in the recommendations and discoveries it puts out. I’ll give you one specific example. I’ve seen AI playing a big part in drug discovery in this industry, and this has been happening for the last couple of years now. What we are addressing right now is, number one, what is the potential? And number two, which parts of AI can be taken and used – in other words, which parts of this technology are ready to be used? A couple of years ago, we started to look for a niche within drug discovery where we could use AI, and this was before Gen AI. The other areas we looked at were clinical trials and manufacturing. You’ll notice that drug discovery, clinical trials, and manufacturing within the pharmaceutical space are all capital-intensive and extremely time-consuming. So we’re identifying small niches within the industry where AI can quickly make the biggest impact.
– Amitabh Mishra, Executive, Data & Analytics, US Pharma & Oncology at Novartis
Unleashing Vast Use Cases and Democratizing Innovation
I’m maybe a little more on the bullish side. The way I look at generative AI, it is such a general-purpose technology. Unlike in the past, when specialized models had to be built for each problem domain – healthcare, finance, e-commerce, and so on – foundational large language models (LLMs) are general-purpose. So it opens up vast use cases, limited only by our imagination. Second, the bar for innovating in the space has come down. Previously, only big companies like Google, Amazon, and Facebook, big pharma, and big financial companies could afford to spend hundreds of millions to solve a domain problem. Now the field has been leveled. What previously required months of effort can be accomplished in a day. I call it the democratization of innovation in AI. The third aspect is the infrastructure needed to train and run LLMs. Today we require very heavy-duty infrastructure to train LLMs, and small LLMs with 3 to 7 billion parameters can barely run on laptops or mobile phones for inference. All major CPU/GPU vendors for laptops and mobile phones have announced roadmaps to speed up LLM execution by 10x or more in the next couple of years. Clearly, Moore’s law is in motion here. By the end of 2024, you will have LLMs running on mobile phones! Overall, I think the possibilities are endless. Yes, there are challenges: there’s a lot to be done on governance, transparency, and responsible AI. But these will not hold back the pace of innovation in building more powerful foundational LLMs and their applications.
– Vinod Malhotra, Senior Vice President of Engineering at BlackLine