Chetan Alsisaria is an accomplished business leader and technologist with 18+ years of experience. He has led technology-driven business transformations for Fortune 500 companies, Indian organizations, startups, and government agencies. Chetan excels in identifying growth opportunities, building high-performing teams, and delivering excellence in data analytics and enterprise performance management.
As the Co-Founder & CEO of Polestar, he shapes critical business processes and emphasizes sustainable growth for all stakeholders. Chetan previously worked with top consulting firms such as PwC, Deloitte, and Ernst & Young.
In this week’s CDO Insights, Chetan shares his perspective on “Maximizing the Value of Generative AI in Organizational Contexts”. The interview delves into Chetan’s views on Generative AI, exploring topics such as organizations’ transition from PoCs to practical applications, challenges in the Generative AI space, progress in addressing these challenges, and the importance of a problem-centric approach. It also touches on structural changes and elements of the Generative AI ecosystem while providing insights into where organizations should focus to maximize value and align with long-term goals.
AIM: Can you explain the current thought process and approach that organizations are taking with generative AI, especially in terms of moving from proof-of-concepts (PoCs) to deriving maximum value, considering the shift in focus from technology hype to delivering tangible solutions?
Chetan: These are still the initial days, and I would say that the high hype surrounding Generative AI has not subsided; in fact, it has only increased over time. Rightfully so, as this hype is not without merit. There is real value in Generative AI. However, even at this stage, what I predominantly observe is that organizations and small teams or individuals are using it to boost productivity and augment human capabilities.
For instance, marketing teams are using it to generate content, blogs, and social media posts, or to create images. Tech teams are generating small code snippets or performing code documentation using Generative AI.
At the enterprise level, organizations have begun contemplating its use. There was a period when everyone wanted to adopt it simply because others were doing so. However, now they are asking important questions: Are we pursuing the right objectives? Have we selected the appropriate use cases? Is this use case viable? How do we transition from a Proof of Concept (POC) to full-scale implementation or enterprise-level deployment? Organizations are beginning to explore these questions.
Nonetheless, even now, I would say that a significant portion of the work remains at the POC stage. Some segments or industries are slightly ahead of the curve, integrating Generative AI into certain business processes. For instance, chatbots handling HR policies have become relatively common. Also, the use of bots for interpreting and responding to questions about agreements and legal contracts is on the rise. However, as I mentioned, this is still in its early stages. I don’t see widespread enterprise-level deployment yet, but it’s likely to move in that direction very soon.
AIM: Are there challenges that make transitioning from POC to production more complex in the realm of generative AI compared to previous technologies? What are the day-to-day hurdles organizations face when implementing generative AI for practical use today?
Chetan: The first significant challenge, even before reaching the Proof of Concept (POC) stage, is that most organizations lack a well-defined strategy and a set of identified use cases. Often, it’s more of a “let’s do something” approach, and many times, these use cases don’t add enough value to justify the time, energy, and financial investment.
Secondly, when transitioning from individual inquiries to production-level applications, accuracy becomes a critical factor. Biases and hallucinations are real concerns because most foundation models are trained on extensive datasets. If there’s inherent bias in that data, it will manifest in the outcomes.
Another challenge is that these foundation models lack business context. They may generate outputs that appear highly accurate, blending errors with correct information in a way that makes inaccuracies hard to spot. At the POC level, this can be overlooked, but at the enterprise level, solving real business problems demands a higher degree of certainty, or at least the ability to detect inaccuracies without extensive validation efforts.
Data security remains a concern. There have been incidents, like the outage in March where users could access each other’s chat history and payment details, which raise alarms. While efforts are being made to enhance security, these concerns persist.
Additionally, there are worries about how generative AI uses data. Organizations wonder if their data will be used to train public models and whether they can control this. While there are settings to limit data usage, concerns linger.
Another challenge is talent availability. When moving from POC to production, specific skills, such as prompt engineering, become essential. These emerging skills are in short supply, leading to higher costs.
Intellectual property (IP) rights are a complex issue. In the US, content generated by AI programs typically cannot have IP rights. In the UK, there’s a different stance where customized generative AI content may receive IP rights. This ambiguity adds to organizational concerns about protecting the output they create.
Furthermore, concerns include organizations not having the right data infrastructure and business processes in place. Even if organizations want to leverage generative AI for insights, data quality issues can impact the output. These are some of the concerns prevalent in the field.
AIM: How close are we to solving the specific challenges and roadblocks in generative AI that you mentioned? Is there a rush to adopt the technology without first identifying specific problems it can solve? Can you shed light on the thought process and existing frameworks for addressing these issues in the generative AI space?
Chetan: That’s why it’s called a paradigm shift. Someone said this may be the biggest innovation since the steam engine, and I don’t think that’s wrong. When you have a huge paradigm shift, you often define a problem only after you see what the technology is capable of. With incremental change, you typically have a problem first and then bring in a solution or technology to solve it. That’s why Generative AI is so transformational: no one knows the limits of what it can solve or what it can do.
So, yes, that particular part is evolving: the capability is there, but most organizations have not yet defined the problem statements they are going to use it for, although they have started defining them now. As for how close we are to solving some of these problems, I would say we are moving very fast on data security and on setting up ethical standards; there is a huge amount of work going on in this space. And as I said, I have started seeing some production deployment in some of my client engagements and accounts, where they have started taking it to a production level.

Talent availability is something that this kind of hype will eventually solve. It definitely takes some time, but you already see a lot of training and a lot of courses around it. You see organizations setting up CoEs, Centers of Excellence, specifically for Gen AI: first, to build talent; second, to institutionalize the knowledge they are learning from maybe ten POCs. Perhaps all ten of them fail, or none gets taken to production, but the teams still learn from them. So they are forming those CoEs, capturing those learnings, and then applying them to further use cases.
AIM: COE is one way. But what are some of the other structural changes needed in a company from a perspective of setting up delivery teams as well as how you engage with a client?
Chetan: I would say, take any enterprise: one big change they need to make, and that is already happening, is that you see a lot of strategy people moving to technology and technology people moving to strategy. The boundaries between strategy, or business, and technology have never blurred to this extent. Have you ever seen a small manufacturing company based out of Manesar talking about Generative AI in its boardroom? Now it is on the agenda; it’s a boardroom discussion.
So, that is one of the structural changes I’m seeing: a lot of business folks moving to technology to see how technology can help them solve the business problems they know well, and technology folks moving to business to see how the business can adopt these technologies to do things much better. That is one change I’m seeing day by day.

The second is building the right ecosystem. This is still evolving, and with Gen AI the implications are far-reaching; it will touch everything. Whether you look at order to cash, procure to pay, or hire to retire, everywhere you can define how to make your stakeholders’ experience much better, or bring in significant productivity gains, using Generative AI. No one company can do all of that alone, so the kind of ecosystem you build is going to be very important. These are the two big structural changes I see organizations making, and needing to make, to be successful.
AIM: When you refer to the generative AI ecosystem, what components besides talent and upskilling initiatives should we consider to gain a deeper understanding of the changes taking place in this space?
Chetan: The ecosystem would include a set of advisors who advise you on ethics, security, and compliance. There are huge compliance implications. For example, say you are a product company using a particular dataset with Generative AI, and you are sending employee data, individual data. This has far-reaching implications. It is governed by many policies and compliance requirements, and you don’t want to be on the wrong side of them. This is difficult because the landscape is still evolving; it varies from country to country and, as I said, it keeps changing. So that is one area: compliance, data security, and ethics.
Second is building a set of providers who can give you not just talent but also accelerators or domain-trained models. For example, if you are a healthcare company, a foundation model alone can only take you so far because, as I said, it lacks the context. There will be healthcare players with foundation models trained on healthcare data specific to a particular area. You would not want to reinvent the wheel, so you want to partner with such providers as well. These are some of the examples.
Then you would have the right set of partners for your data strategy and making sure that your data is in place. You have got high-quality data, which can be fed to your foundation model and thereafter, you can actually get more contextualized results for your use cases.
AIM: To maximize the value from their current generative AI investments, where should organizations begin, and how can they align these efforts with their long-term vision for the technology’s potential evolution?
Chetan: This is essentially a summary, I would say, of some key points. It involves creating the right strategy for Gen AI and identifying use cases with low complexity, high impact, and low validation effort. Prioritizing these cases based on their impact, cost, data readiness, and resistance to adoption is essential. As we all know, there is significant resistance, often driven by fear among people. Based on these considerations, a roadmap can be developed. It’s important to start with a strategy that acknowledges the possibility of failure. Identify these use cases as a series of POCs (Proofs of Concept) and pilots, but only for the problems you genuinely want to solve, avoiding initiatives just for the sake of it.
Creating transparency in how you plan to involve all stakeholders is vital. Communicating how these changes will impact them, whether through data sharing or potential changes in their job roles, is essential. Bringing all these elements together is crucial. I am confident that while this approach will significantly enhance efficiency, it won’t replace humans; rather, it will augment their capabilities and make them more effective. However, achieving this requires bringing everyone together.
Therefore, establishing a change management program right from the outset is necessary. Awareness programs should be initiated early on. A study by MIT even suggests that organizations with a culture that embraces Generative AI will have a more productive and happier workforce. Although there may be initial fears, this cultural shift will eventually lead to more productive workplaces. However, in the short term, it’s crucial to ensure productivity, as there’s no long-term success without a strong start. If the initiative fails in its infancy, it may never reach a scaled-up version. To prevent this, it’s essential to rally people together, and an effective change management program is a key part of achieving that.