In today’s rapidly evolving technological landscape, cultivating resilient Generative AI (Gen AI) teams is paramount for organizations aiming to stay at the forefront of innovation. Gen AI development involves harnessing artificial intelligence to create systems that don’t just perform tasks but comprehend, adapt, and generate novel outputs. To achieve this, organizations need teams equipped not only with technical prowess but also with adaptive skills, continuously evolving methodologies, and a proactive mindset toward optimization and success. That, in turn, requires a work culture built on adaptability, resilience, and continual learning, and it opens the door to exploring strategies for nurturing such teams for long-term success in Generative AI.
We held a roundtable discussion on the topic with a set of experienced and distinguished leaders in the industry. The session was moderated by Aishwarya Gupta, Global Solutions & AI & Automation Offerings Practice Head at Wipro Limited, along with panelists Fawad Memon, Director & Head of Digital Analytics and Insights (Marketing Science) at Virtual Gaming World; Roshan Thayyil, Data Science Leader; Vanitha D’Silva, AVP Data Science and Strategy at Sigmoid; and Krishnaswamy Divakaran, Analytics Director, Independent Consulting.
Embracing Generative AI: Leaders’ Varied Paths and Progress
In the bank over the last six months to a year, what’s been happening is that stringent data privacy regulations and governance make it impossible to directly engage with providers of Large Language Models (LLMs). We can’t use their models or customize them because we lack visibility into the data they’ve used for training. The bank’s governance doesn’t permit this. So, what alternative use cases can we explore based on the data available within our large bank?
It’s acknowledged that our bank has substantial governance due to its size. Logically, it should possess a significant amount of on-premise data for training purposes. However, the challenge we face is that, like many large enterprises, our data is siloed and spread across various locations, not centralized. The initial hurdle is data engineering: consolidating and unifying this dispersed data for training Large Language Models (LLMs) or traditional AI models. Data engineering, especially at this scale, often poses more challenges than the AI models themselves, which is a key difference between LLMs and traditional ML.
Another challenge is the infrastructure required. It’s not as simple as requesting a set amount of memory from regular hosting services to conduct a Proof of Concept (POC) for LLMs. These two significant challenges are being addressed through various initiatives at NatWest, such as hackathons and a focus on using GPT. This effort aims to establish a culture within the bank that embraces these technologies, fostering discussions and utilizing different methods for cultural development.
– Samarth Gupta, Vice President – Data and Analytics – NatWest
Lessons from Organizational Progress and the Evolution of AI Policies: Navigating the Path Forward
The sector I’m working in right now is online gaming, and we are a company that relies 100% on digital marketing as well. So everything I just mentioned about data privacy is highly applicable to us. The issue we face, and the reason we haven’t done much yet, can be looked at in two ways. First, under certain laws and in certain regions, you need to inform the end user that their data will be used for Generative AI, and for what purposes; that varies depending on whether you’re operating in Europe, the US, and so on. The second, bigger challenge is the ability to opt out. Starting next year, Google will stop supporting cookies, so the world of digital marketing will change completely. But at the same time, if I’m sitting on somebody’s PII data and that person says they want to opt out, and all my models are built on that data, then what happens? Do I go back, remove all their data, and retrain my models from scratch? That’s another thing we need to work out. So from a legal angle, there are a lot of open questions.

Yes, there’s a lot of talk about using it for simple purposes. My team uses it to check their code, and our creative teams, while brainstorming, sometimes use it to come up with starting ideas for an ad. Those are the small things and efficiencies that can be achieved. But using our own data to make predictions and so on is still tricky, and legal is still stopping us. I’m still debating what will happen. So at least that is our take from an online gaming perspective.
– Fawad Memon, Director & Head of Digital Analytics and Insights (Marketing Science) at Virtual Gaming World
Future Perspectives and Leveraging Advanced Models for Organizational ROI
So, I think it’s not a very straightforward answer. How these models fit the right use case is the first question. Does it work well for your use case? How is it going to be implemented? What is the entire ecosystem that is going to work on it? Do you have the right MDM (master data management) in place to ensure that the incoming data aligns? Because with the amount of data you have, there are a lot of challenges you might face, a lot of issues that might come up, and you might not always be able to predict them. So an extensive enough MDM and the right infrastructure maturity would all play a role before you use the model for the use case. When all of these factors come into play, they identify the right model to use, how to use it, and what customization needs to be done at the end.
– Vanitha D’Silva, AVP Data Science and Strategy at Sigmoid
Choosing the Right Model: Building In-House or Partnering with Tech Vendors?
I think right now it’s still very early. While many companies have invested billions of dollars into it, among companies actually using it I see a mix of enthusiasm and hesitation. There’s enthusiasm, but there’s also hesitation due to the privacy risk. You can’t control what you can’t control, so there’s still some amount of hesitation present. People are trying to identify low-risk projects to test it out internally, using it with internal customers where there’s less risk to the brand or the company. This cautious approach will persist for some time, aimed at making people more comfortable and confident in what generative AI can bring to the table. There’s no doubt about its capabilities, but there are significant internal risks we’ve observed. It might be a one-in-a-million instance, but that one instance could impact the overall brand and company, causing concerns. I believe this cautious approach will continue for at least 2-3 years before we see significant changes.
– Roshan Thayyil, Data Science Leader
Transformative Impact of Generative AI on Data Science Skills and Micro-Level Interventions
Once upon a time, if you went to pitch an opportunity for data science to organizational leadership, they would look at you as if you were on dope. Today, that part of the selling is no longer needed because everybody wants to jump onto the AI bandwagon.
The special emphasis has been on hiring or upskilling people with natural language processing (NLP) skills. Luckily, what generative AI has done is uplift NLP into a top skill set across the board within data science; if you don’t know NLP, you’ve suddenly lost out on the whole race. So that’s the new challenge.
Meanwhile, the use cases in many companies still remain the same. A quick win I’m observing in a couple of examples: say there’s an existing model to approve a loan, or an existing model that identifies some sort of supply chain issue. Today, human involvement in the supply chain is very high. Is there a way I could bring in a combination of a chatbot plus a language-based Q&A model that can replace one person talking to another? If there are 10 steps from an order to its fulfillment, can I reduce that to four? Those micro-level interventions are starting to get traction. That’s something I am very excited about. It will take time, but at least the solution is at a more granular level than at a broad enterprise level.
– Krishnaswamy Divakaran, Analytics Director, Independent Consulting
Anticipating the Future Landscape of Generative AI
There is a long way to go before we see AI reliably carrying out end-to-end transformations without a human in the loop. Data privacy issues, and challenges around transparency and responsible AI, need to be handled on priority. Sustainability also needs to be treated as a priority when weighing the computational needs and energy consumption of the models chosen to solve a given kind of problem. Generative AI is certainly one of the most popular sub-areas, though various themes must come together to deliver a complete solution. It is the most resourceful and most heavily advertised area today; generative AI has been able to generate interest at the right scale, and it is enjoying its attention today before it plateaus.
– Aishwarya Gupta, Global Solutions & AI & Automation Offerings Practice Head at Wipro Limited