
Navigating the Generative AI Revolution: From Hype to Impact

“I think the biggest area where we can see a significant impact in the near future is our ability to elevate technology to a truly autonomous agent level.”- Rakesh Prasad

In today’s digital landscape, the allure of generative artificial intelligence (AI) permeates industries, promising transformative potential and unparalleled innovation. However, amid the fervent desire for adoption, businesses face a critical question: how to approach the generative AI revolution strategically, moving beyond hype to tangible impact.

In this dynamic terrain, we are joined by Amit Gautam, CEO of Innover, and Rakesh Prasad, Senior Vice President of Strategy & Solutions at Innover, who offer insights into crafting effective strategies, tailoring projects to diverse client needs, and navigating the nuances of organizational readiness.

As the boundaries between human creativity and machine-generated content blur, ethical and legal considerations loom large, demanding a delicate balance between innovation and responsibility. In this exploration, we delve into the multifaceted realm of generative AI adoption, unraveling its promises, challenges, and the roadmap ahead for businesses navigating this transformative frontier.

AIM: How should businesses approach their adoption of generative AI, given the widespread desire for it, while staying focused on identifying problem statements where generative AI can genuinely make a meaningful impact?

“What we typically recommend to our clients is to go pick up use cases where you have a well-defined problem statement and understand what good looks like or what the ROI is.”- Amit G

Amit G: Gen AI is still in its infancy, with significant potential. What we advise our clients is to begin with an awareness phase first, then move on to governance, an implementation approach, experimentation, and eventually realization. Let me take a moment to talk about each one of them.

First, the technology itself. The first thing you want to do is build awareness—understand exactly what the potential risks are. Once you have that awareness, you can put together the guardrails, or the governance, around it. As you might have seen in the news, there have been inadvertent misuses of the technology, so it’s important to have governance in place. Once you have it, you can define the broader strategy of where you are going.

If you don’t define a proper strategy, disillusionment can creep in because there is no ROI. Once you have the strategy, you can proceed with your implementation approach and then be ready for experimentation. What we typically recommend to our clients is to pick up use cases where you have a well-defined problem statement and understand what good looks like or what the ROI is. Once you have that, you can continue to pick up adjacent use cases to have a compounding impact on the overall realization of the ROI. But it all starts with awareness and governance, and then it goes down the chain.

AIM: Considering the insights from AIM research on LLM economics, what are the main factors and estimated costs of developing a text-based Generative AI application from scratch, and how do you tailor these projects to diverse client needs efficiently? 

“Most of the time, I would like to believe that corporations don’t need to build large models from the ground up. Small language models, especially those trained with synthetic data, can function just fine; so can inheriting a large language model and then optimizing it.”- Amit G

Amit G: Several factors come into play: the budget, the time, the expertise you have in-house, the level of specificity or customization you need, what your tech stack is, and, of course, the expanse of data, whether it is in-house or part of an ecosystem outside. For the most part, one realization is that building a large language model from the ground up can cost tens of millions of dollars, if not hundreds of millions, and keeping it running is then a tall order for most companies. Having said that, what we have seen is that picking up foundational models and building on top of them is probably a more pragmatic approach, if it fits your needs.

Fine-tuning is obviously one of the most preferred ways of inheriting a large language model and adapting it to your context. It’s far more economical; for example, training a GPT-3 model can be done in a matter of a few weeks at a cost that is probably in the low single-digit millions of dollars. Now, having said that, small language models have gained prominence recently, and I believe they will continue to do so. On one side, you have large language models, which are much more comprehensive, while on the other hand, small language models are highly context-specific. You can have a few million parameters for a small language model versus billions for a large language model, and you can use several techniques, like knowledge distillation, pruning, and quantization, effectively.
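To make the compression techniques Amit mentions more concrete, here is a minimal knowledge-distillation sketch, assuming PyTorch; the layer sizes, temperature, and loss weighting are illustrative assumptions rather than anything described in the interview. A small “student” network is trained to match the softened output distribution of a frozen, larger “teacher” while also fitting the ground-truth labels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative sizes only: a larger "teacher" and a much smaller "student".
teacher = nn.Sequential(nn.Linear(128, 1024), nn.ReLU(), nn.Linear(1024, 10))
student = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend soft-label (teacher) loss with hard-label (ground-truth) loss."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
x = torch.randn(32, 128)              # dummy batch of input features
labels = torch.randint(0, 10, (32,))  # dummy ground-truth class labels

with torch.no_grad():                 # the teacher stays frozen
    teacher_logits = teacher(x)

optimizer.zero_grad()
loss = distillation_loss(student(x), teacher_logits, labels)
loss.backward()
optimizer.step()
```

Pruning and quantization would then typically be applied on top of a compact model like the student above to shrink it further for deployment on constrained devices.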

The significant advantage of small models is that they can work very nicely in the specific contexts they are trained on. However, their accuracy and contextual understanding drop dramatically once you go outside those contexts. One thing we advise is to be very careful in defining the scope of your small language models; otherwise, there is a significant compromise.

The difference between a small language model and a fine-tuned large language model is that with the large model, you are re-training the weights of the model. However, with a small language model, you are limiting the number of parameters in the context of the dataset. One advantage of small language models is that you can easily deploy them on mobile devices, and they can still function just fine, given their size is much more optimized.

To summarize, it depends on your context. Most of the time, I would like to believe that corporations don’t need to build large models from the ground up. Small language models, especially those trained with synthetic data, can function just fine, or you can inherit a large language model and then optimize it – both of these approaches should address the majority of clients’ needs.

AIM: What are your observations on organizational readiness to deploy and scale large language models and Generative AI, considering both technological and non-technical factors like organizational structure? What key elements must be addressed for success in the next 5-10 years, and can you cite examples to illustrate varying maturity levels?

“On-demand information availability, hyper-personalization, intelligent ecosystem & independent decision-making are the big buckets where we see a lot of value emerging.”- Rakesh Prasad

Rakesh Prasad: This is indeed a very interesting question because I think all of us, as we speak with our customers and internally as well, are trying to identify those three or four areas where the promise of the technology truly meets the reality of what it is. And I think finding that balance is crucial, as Amit mentioned in an earlier discussion. If you don’t get it right, people will quickly succumb to digital disillusionment and lose interest. So, how do we maintain interest? From our perspective, there are three key areas, or buckets if you will, where we’ve seen a lot of promise—for lack of a better term. Yes, productionisation is ongoing for some, while others are still in the journey, but there are three main areas where we see a lot of value emerging.

One is on-demand information availability. If you look at organizations, there are a lot of data points and information residing in various documents, repositories, and places. Often, there’s a significant productivity loss as subject matter experts have to sift through numerous documents and sources to gather insights for actionable decisions within the organization. This is definitely one area where generative AI can make a difference by creating an intelligent agent or bot—regardless of the name—that enables organizations to have on-demand information availability, which today is scattered across many different places. A classic example is making software engineers more productive, helping them leverage technology to understand best practices for writing code, auto-generating code, and performing auto-QA. This is driving a lot of productivity internally.
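As a rough, hypothetical illustration of this “on-demand information availability” pattern, the sketch below pairs a toy keyword retriever over a few scattered documents with a stubbed generation step. The sample documents, the retrieve function, and the generate_answer placeholder are invented for illustration and are not a description of Innover’s solution; a production setup would typically use embedding-based search and a real LLM behind the generation step.

```python
from collections import Counter

# Hypothetical sample of "scattered" internal documents.
documents = {
    "hr_policy.txt": "Employees accrue 20 days of paid leave per calendar year.",
    "it_runbook.txt": "Reset a VPN token via the self-service portal under Security.",
    "sales_faq.txt": "Enterprise contracts renew annually unless cancelled in writing.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = Counter(query.lower().split())
    scored = []
    for name, text in documents.items():
        overlap = sum((Counter(text.lower().split()) & q_terms).values())
        scored.append((overlap, name, text))
    scored.sort(reverse=True)
    return [f"{name}: {text}" for _, name, text in scored[:k]]

def generate_answer(query: str, context: list[str]) -> str:
    """Placeholder for an LLM call; here it simply echoes the retrieved context."""
    return f"Q: {query}\nBased on: " + " | ".join(context)

question = "How many days of paid leave do employees get?"
print(generate_answer(question, retrieve(question)))
```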

The second big bucket is hyper-personalization and end-to-end, real-time transactions. This isn’t something that generative AI enables by itself; rather, it is the combination of generative AI with traditional machine learning models that delivers truly hyper-personalized experiences in real time. This area has seen a lot of discussion, especially in customer service, supply chain visibility, and commerce platforms, making it the second bucket where there’s a lot of promise in the technology.

The third area is around creating an intelligent ecosystem and facilitating intelligent, independent decision-making models. If we were to truly expand this technology to its fullest capabilities, we could start building autonomous agents or intelligent bots that enable end-to-end intelligent decision-making without human intervention—from someone asking something via email to understanding the context, finding the answer, taking action on it, and publishing it. There are various paths along which generative AI can evolve, ranging from completely independent voice-enabled applications that our customers are considering to autonomous agents for customer service. These are the key examples where we see generative AI expanding. However, the key point is that it will never be about generative AI creating the entire impact independently; it has to be integrated with other technologies at our disposal, making the entire ecosystem more automated and intelligent throughout the lifecycle. I hope this gives you a perspective on the different use cases we are exploring.

AIM: How advanced are intelligent bots, and how prepared are large organizations to adopt them for impactful, scalable integration? Given their transformative potential for workplace interactions and hiring, what is the current maturity level of these organizations in adopting such technologies, and what does that mean for how the entire workforce should prepare?

“Driving awareness almost always drives positive change in the workforce. Making conscious steps towards reskilling the workforce you have, as well as providing opportunities to utilize the technology for the firm’s benefit, can go a long way.”- Amit G

“Solutions like Copilot, powered by generative AI, are further accelerating this shift. The team will not spend as much time coding, but rather, will focus on understanding the problem to solve and then use these assistants to generate parts of the code much more easily.”- Rakesh Prasad 

Amit G: Every time disruptive technology emerges, bringing transformation, there have always been questions about whether it will benefit humanity or cause more harm. I’m an optimist. I believe that humanity’s ability to adapt, survive, and thrive is always supreme. So, I stay very optimistic. I believe generative AI, while becoming a very integral part of our society and the work we do, will still aid humans rather than disrupt them in a negative way. Now, having said that, I think it is important, going back to some of the previous discussions we have had, to consider how we stay relevant as this evolution unfolds. This requires, of course, going back to the awareness part that I was touching on.

Driving awareness almost always drives positive change in the workforce. Making conscious steps towards reskilling the workforce you have, as well as providing opportunities to utilize the technology for the firm’s benefit, can go a long way. It’s just like with every other evolution we have seen; staying relevant is critical. Investing in oneself is a proposition I support even within the firm. How do we keep investing in ourselves? I’m a huge proponent of investing in myself in terms of learning and staying relevant with what technology has to offer. I think it starts there, and then once you have the awareness and the training, the application of the technology to your advantage, in my opinion, comes naturally. 

Rakesh Prasad: Taking the conversation forward, if we examine the software engineering realm as an industry, we were already witnessing disruption from low-code and no-code platforms, which were signaling a shift. These platforms imply that the need for traditional programming skills should decrease, and professionals should become more aligned with solving problems. Solutions like Copilot, powered by generative AI, are further accelerating this shift. The team will not spend as much time coding, but rather, will focus on understanding the problem to solve and then use these assistants to generate parts of the code much more easily. Is it mature yet? No, but we are all witnessing leaps and bounds in how it can assist us.

Thus, it’s more about shifting the skill-set demand than completely eliminating it. Both as organizations and as individuals, we have to adapt to the new skills and continue to evolve. For example, if we look at other industries, whether it’s healthcare, marketing, or communications, with many of these generative AI-based solutions, operational aspects of tasks will be completed much faster. However, there will still be a need for someone to interpret the results better, to drive decision-making, and to provide a more engaged and personalized experience to end customers or consumers.

So, the skill set will continue to shift, but the aim is to ensure we can stay relevant to where the technology is heading. The interpretation of data and insights, and taking actions based on them, will still be areas where a lot of human plus generative AI interaction will exist for a long time. That’s where we see the importance of continuing to train our teams and customers to orient towards this direction rather than just focusing on operational tasks. This is the next step in adapting to and leveraging the evolving landscape of technology.

AIM: What’s the current progress in deploying smart agents with no humans in the loop, and how prepared are organizations for such automation? Considering the scale of AI’s impact on automation, how do you view its effects on job creation versus displacement, especially in light of policymakers’ concerns about the workforce?

“I’m a strong believer in the need for more ethics and regulations over it. The implementation of such measures would not only protect the interests of users and the public at large but also guide the development of AI technologies in a direction that is beneficial and avoids potential negative consequences.”- Amit G

“The essence of this evolution is not about replacing human roles but enhancing them, enabling individuals to focus on areas where they add the most value—complex decision-making, strategy, and personal interaction—while automated systems handle the more routine, data-intensive aspects.”- Rakesh Prasad 

Amit G: As technology and artificial intelligence continue to evolve towards what is termed artificial general intelligence (AGI), although I personally believe we are still far from achieving true AGI, I think there will always be a need for some level of regulatory oversight. It’s imperative that governance and regulations are established around it. Any technology, not only this one, can cause significant harm when it goes unchecked or unsupervised; there are several examples from the past, and generative AI is no exception. Some level of oversight will be needed.

Looking at recent examples, some of the content being created was deeply disturbing, and firms were obviously forced to take action; this underscores the point. I’m a strong believer in the need for more ethics and regulations over it. The implementation of such measures would not only protect the interests of users and the public at large but also guide the development of AI technologies in a direction that is beneficial and avoids potential negative consequences. This approach ensures that while we continue to harness the benefits of AI, we do so in a manner that is responsible and aligned with societal values and norms.

Rakesh Prasad: When considering any business process or interaction involving humans, it’s clear there are multiple layers to such interactions. It’s challenging to envision a scenario where even an advanced automated agent could manage all these layers comprehensively. For instance, while an automated agent might handle some aspects effectively, such as initial inquiries or processing steps, the final layers—like cross-selling, up-selling, or making final decisions—would still require human involvement. This underscores the ongoing evolution in the interaction between humans and machines.

Indeed, many redundant or mundane tasks could significantly benefit from the application of intelligent automation. Reflecting on the developments from just a few years ago, before the advent of generative AI, cognitive automation was a burgeoning concept, propelled by the promise of technologies like RPA (Robotic Process Automation). The term “cognitive automation” suggested a future where machines could perform tasks that require understanding and decision-making, a future that seemed distant if reliant solely on traditional programming logic like “if-else” statements. However, with the power of generative AI, what was once a concept is now becoming a reality.

Yet, even with these advancements, it’s clear that automated systems cannot fully replace the nuanced and complex interactions required in many business processes. Therefore, it seems we are moving toward a symbiotic relationship between machine intelligence and human insight. This partnership aims to deliver a more personalized and enriching customer experience across various journeys and touchpoints. The essence of this evolution is not about replacing human roles but enhancing them, enabling individuals to focus on areas where they add the most value—complex decision-making, strategy, and personal interaction—while automated systems handle the more routine, data-intensive aspects. This balance promises to redefine the landscape of business processes, making them more efficient, responsive, and tailored to individual needs.

AIM: How does Innover navigate the ethical and legal challenges posed by the blurring lines between human creativity and machine-generated content, especially when deploying LLMs and Generative AI for Fortune 500 companies?

“Innovating responsibly amidst the blurring lines between human creativity and machine-generated content requires establishing checks and balances to ensure a balanced relationship between AI development and human creativity.”- Rakesh Prasad 

Rakesh Prasad: Addressing the moral and ethical conundrums that accompany any disruptive technology is crucial, especially one that evolves as rapidly as AI. As partners to our customers and employees, it’s essential to ensure a balanced relationship between AI development and human creativity, necessitating the establishment of checks and balances. The discourse around governance in AI is already gaining momentum, reflecting a collective endeavor to harness this technology responsibly.

Among the significant concerns are the emergence of deepfakes and other alarming content that exploit AI’s capabilities, raising societal alarms. Concurrently, we observe organizations strengthening their data safeguards and implementing more rigorous validations around the use and deployment of generated content. Achieving and maintaining this balance is pivotal and will likely require iterative adjustments—a characteristic shared with the introduction of any new technology.

Another critical issue is the phenomenon of AI hallucination, where AI-generated content or errors could potentially undermine trust. Preserving trust is paramount, as its breach could adversely affect organizational integrity and customer relationships. In response, we’re witnessing an increased emphasis on responsible innovation, with companies proactively establishing guidelines to govern their use and deployment of AI technologies.

As the technology matures and begins to wield a more significant impact, regulatory and governmental intervention is anticipated. Currently, the technology remains in a phase of experimentation, with organizations spearheading governance, risk monitoring, and deployment strategies. However, as AI becomes more ingrained and impactful on a large scale, regulatory frameworks are expected to evolve accordingly, mirroring the historical progression of mainstream technologies.

This phase of AI evolution is largely self-driven, with external enforcement likely to intensify as the technology’s implications become more profound. This viewpoint encapsulates our perspective on navigating the complexities of AI development and deployment, emphasizing the need for a balanced approach that fosters innovation while safeguarding ethical and moral standards.

AIM: What key advancements do you anticipate for Generative AI in the next 5 and 10 years? How do you envision the capabilities and applications of Generative AI evolving, particularly under optimal conditions, and what are your overall predictions for its future impact?

“We are already witnessing the advent of multimodal models that enhance interactivity through various channels, such as text and voice, indicating a future where interactions with AI will become increasingly seamless. The evolution of small language models or hyper-context-sensitive models is also on the horizon, promising more specialized and effective applications of AI.”- Amit G 

Amit G: It is wise to stay away from speculating about the advancements of generative AI over the next two to five years, considering the well-known sentiment that humans tend to significantly overestimate technological progress in the short run and underestimate it in the long run. Despite this uncertainty, it’s evident that generative AI is in its infancy, and we can anticipate the rate of change to be exponentially higher.

We are already witnessing the advent of multimodal models that enhance interactivity through various channels, such as text and voice, indicating a future where interactions with AI will become increasingly seamless. The evolution of small language models or hyper-context-sensitive models is also on the horizon, promising more specialized and effective applications of AI.

Furthermore, the development of autonomous agents that can execute end-to-end processes or tasks with minimal human intervention is becoming more tangible. Looking a bit further, the concept of interactive AI, where bots can delegate tasks within an ecosystem of other bots, is not only fascinating but highly probable in the near future.

Moreover, the combination of generative AI with robotics opens up significant possibilities for the future, pointing towards an era of unprecedented technological synergy and innovation. Overall, there’s a strong sense of optimism about the technology’s potential and its contributions to human advancement. This optimism is not just about the technological marvels we will witness but also about the profound impact these advancements will have on solving complex problems, enhancing productivity, and opening new frontiers in science and creativity.

Rakesh Prasad: I think the biggest area where we can see a significant impact in the near future is our ability to elevate technology to a truly autonomous agent level. This advancement is key to realizing the technology’s vast impact, the substantial influence and value we discussed. Beyond this achievement, the possibilities are limited only by our imagination. The technology promises to be powerful and has the potential to evolve rapidly.

Anshika Mathews
Anshika is an Associate Research Analyst working for the AIM Leaders Council. She holds a keen interest in technology and related policy-making and its impact on society. She can be reached at anshika.mathews@aimresearch.co