
Exploring the Boundless Potential of Generative AI with Shub Bhowmick


Generative AI, a transformative branch of artificial intelligence, is key to unlocking an expansive realm of possibilities across various domains. This cutting-edge technology empowers machines not just to understand, but to create. Its capabilities extend beyond mere data interpretation to generating new content autonomously, whether in images, text, audio, or even videos. By harnessing the power of Generative AI, industries are experiencing a revolution, witnessing the birth of innovative applications that redefine how we perceive and interact with technology.

In this week’s CDO Insights, we have Shub Bhowmick with us, who serves as co-founder and CEO at Tredence, a specialized data science and AI solutions firm aimed at addressing the “last mile problem” in AI. Before establishing Tredence in 2013, Bhowmick held senior roles at Diamond Consultants (now part of PwC), Mu Sigma, Liberty Advisor Group, and Infosys. He holds an MBA from Northwestern University’s Kellogg School of Management and a Bachelor of Technology in Chemical Engineering from IIT-BHU. The interview revolves around Generative AI’s transformative impact on industries. He discusses its alignment with current trends and its role in reshaping businesses, considers whether Generative AI fundamentally differs from past advanced analytics, and addresses hurdles in AI adoption, emphasizing the mindset shifts professionals and CEOs need to drive effective adaptation. Lastly, Shub envisions Gen AI’s revolutionary potential to redefine workflows and the organizational landscape, were it not for data security and privacy concerns.

AIM: How do you perceive the alignment of Generative AI with current industry trends and its role in shaping the future of businesses across multiple sectors?

Shub Bhowmick: I had never heard the word ‘hallucination’ used in my industry until we all came across this explosion created by OpenAI. In November of last year, they released ChatGPT, built on GPT-3.5. Since then, this has been the only topic everybody’s talking about, especially in technology. All new startups with a GenAI thesis are getting funded. It has sparked many enterprise AI conversations around Generative AI, and even service providers like ourselves are preparing our solutions to take this core capability and drive business outcomes. This is very transformative. It has already democratized AI. We talked about AI for a long time; we used to call it “advanced data science,” “applied analytics,” and just “AI.” In the last ten months, we’ve been using the term Generative AI, but the idea is not very different. It’s about how you take data you already have within your firewall, or leverage data you can procure from third-party sources, and help your executives make more meaningful decisions to improve their businesses and their top and bottom lines. Specifically, around Generative AI, we are doing a few different things.

First, we are working a lot on contextualization: fine-tuning foundational models, using simpler techniques like RAG (retrieval-augmented generation) or prompt engineering, or, in more advanced situations, building an LLM when a client asks for one. We are taking various ideas around contextualization and making them more relevant for the businesses and customers we’re working with. Number two is coding assistance. There is recognition in the industry that this will meaningfully improve the productivity of all kinds of engineers: data scientists, data engineers, software engineers, and so on. There are two steps when we take new business specifications and turn them into software. Step one is taking the business requirements and writing the technical specs, and step two is taking the technical specs and writing the code. I believe step one will continue to be done by experts and domain specialists like Tredence. Step two will progressively get automated, thanks to various forms of large language models. That’s where we are investing, building those capabilities and training our people internally. And there are many other dimensions in which we are developing our capabilities.
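The contextualization techniques mentioned above can be illustrated with a minimal sketch of the RAG pattern: retrieve relevant internal documents, then prepend them to the prompt so a general-purpose model answers in context. This is a toy illustration, not Tredence’s implementation; the keyword-overlap retriever and the sample documents are stand-ins for a real embedding-based retriever and an internal knowledge base.

```python
def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    query_terms = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(query_terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Assemble a context-grounded prompt to send to an LLM."""
    context = "\n".join(retrieve(query, documents))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical internal documents sitting behind a firewall.
docs = [
    "Q3 sales in the beverages category grew 12% year over year.",
    "The onboarding policy requires security training in week one.",
    "Beverages category margins improved after the pricing change.",
]
prompt = build_prompt("How did the beverages category perform?", docs)
```

Because only the retrieved snippets reach the model, the answer stays grounded in the company’s own data rather than whatever the base model memorized.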

AIM: Do you think that Generative AI represents a significant departure from what was termed advanced analytics or advanced data science in the past? Looking back at the evolution—from AI to GenAI—I wonder if the underlying technology fundamentally differs from what we referred to as advanced analytics in 2012 or 2013. What’s your perspective on this?

Shub Bhowmick: The fundamental premise is to use a good data foundation to predict and optimize decisions so you use your resources better, and we all live in a resource-constrained world. The premise is to leverage all the available resources in the most balanced and optimal way. That premise has stayed the same; that is the purpose of data science and AI. But Generative AI is a different kind of AI. The version of Generative AI we leverage with text and NLP is the large language model. There are other new forms of Generative AI we are now starting to see in images and other forms of unstructured data. In the same way, we had gradient boosting models 15 years ago, then started playing with neural nets and deep learning 10 years ago, and for the last ten months the world has gone crazy over large language models. The concept needs to be clarified: we have been working with language models, in simpler forms such as word embeddings, for nearly three years. What ChatGPT and OpenAI have done is democratize this in day-to-day B2C contexts. For example, my daughter started using it in high school. Even my kids have the ChatGPT app on their Apple Watches and interact with it. In less than a year, a 10-year-old can use AI and understand what AI can do for them, which is very new for this world.

AIM: Are you encountering hurdles in AI adoption, even with tools like Gen AI? If so, how are you overcoming these challenges? If not, what factors contribute to smooth technology adoption in this sphere?

Shub Bhowmick: Like any new technology, decision-makers must consider risks before rolling out these technologies across the organization. Now, what are those adoption impediments and risks? Let’s go through them one by one. 

One is cost; it is expensive, especially if you’re looking to create a new model from scratch. That is an extremely expensive proposition. We are hearing about constraints around the production of GPUs by Nvidia, so there are constraints at the underlying silicon layer as well as in the cost of training a new model from scratch. Then again, we’re starting to see innovations like MosaicML, a new kind of hyperscaler that enables you to train a large language model more optimally, while the core hyperscalers are still catching up.

The second is security, where concerns remain. Is my data secure, especially in highly regulated industries like financial services and healthcare? There is a lot of concern about data leaving the premises.

The third centers on setting up the infrastructure. It’s not inexpensive to stand up that entire stack, what we now call LLMOps, the LLM infrastructure. It is not a matter of copying and pasting Lego pieces; it requires a nuanced architect who considers all the trade-offs, pros, and cons as they put it together.

Fourth, a big challenge is the accuracy and performance of these solutions. As I mentioned at the beginning of the conversation, hallucination is a problem. There are several LLM solutions out there, but no one size fits all. Depending on the situation, some cheaper models may work better if you tweak and tune them appropriately, and there are other techniques, like RAG, to further reduce hallucination. Now, suppose you want to use large language models for something like traditional business intelligence, which was very deterministic; there was nothing probabilistic about it. I want to know what my sales have been in this category for the last two weeks in this part of the world.

I’m not asking a model to predict that answer. It’s a simple query. But so far, large language models have not been very effective in these deterministic use cases. There’s a lot of innovation happening even in this space: how do you use a large language model to convert a question into SQL, and once you get the information back, how do you present it in a chart that can be sent out by email? All of this is still evolving. We are also working closely with our clients to help them identify an initial shortlist of use cases and then plot those use cases on a two-by-two of feasibility versus cost, along with other trade-off parameters. In addition, we are working with several clients to set up that LLM infrastructure: how do you set it up without compromising security or the other essential requirements the client has?
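The distinction drawn above can be sketched in a few lines: in a text-to-SQL pipeline, the LLM’s only job is to translate the question into a query like the one below, and the answer itself comes from the database, not from the model, so it is exactly right every time. The table, columns, and figures here are hypothetical, and the query is hand-written to stand in for LLM output.

```python
import sqlite3
from datetime import date, timedelta

# Toy sales table standing in for a real BI warehouse.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (day TEXT, category TEXT, amount REAL)")
today = date(2023, 11, 15)
rows = [
    ((today - timedelta(days=3)).isoformat(), "beverages", 120.0),
    ((today - timedelta(days=10)).isoformat(), "beverages", 80.0),
    ((today - timedelta(days=30)).isoformat(), "beverages", 999.0),  # outside the window
]
conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)

# The kind of SQL an LLM would be asked to generate from
# "what have my sales been in this category for the last two weeks?"
query = """
    SELECT SUM(amount) FROM sales
    WHERE category = ? AND day >= ?
"""
cutoff = (today - timedelta(days=14)).isoformat()
(total,) = conn.execute(query, ("beverages", cutoff)).fetchone()
# `total` is deterministic: the same data and query always give the same sum.
```

The model may still generate the wrong SQL, which is where the evolving tooling the interview mentions comes in, but once the query is right, nothing about the answer is probabilistic.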

The entire organization, including the chief information security officer, is starting to get involved in these conversations, and it’s about actually beginning to do POCs. Knowledge management is a fascinating use case for a POC. ChatGPT is very interesting to us because we ask questions and it gives us answers drawn from the internet in a wonderful, conversational format. Now imagine that, instead of the internet, your information source was your internal knowledge portal, all the information you have inside your company, and when you ask a question, you get the response in a similar ChatGPT-type format. So knowledge management is a use case on which we work with several clients. We have leveraged it internally and created a beautiful system, which amazes me; even when I see the response coming back, it’s out of this world.

AIM: What changes do professionals building and using Generative AI need to make? How can CEOs strategically drive adaptation to these new changes and support their teams? Specifically, what shifts in mindset and approach are required for analysts and AI professionals dealing with the complexities of Generative AI? How should end-users adjust to leverage this technology effectively?

Shub Bhowmick: We should not say no. We should not try to stop it or resist it. I still run into leaders who reject this, saying, “Hey, it’s another short-term phenomenon. It’s a hype that will die out.” This is not like some past situations where the hype has indeed died; this is going to be a very big part of our future, so there is no point in trying to resist it. The challenges and risks must be considered, but there are ways to explore use cases gradually. Many customers are starting with productivity use cases where you are not touching anything to do with your customer data: you look at your internal information, within the boundaries of your firewall, and ask what kinds of POCs and experiments you can start running.

Next, it’s important to ask: can I improve the productivity of my engineers with Generative AI-led code? That’s another use case, still at a very early stage. But talking to my colleagues in the Bay Area, I’m starting to see this happening meaningfully. However, the broader enterprise mainstream, the Fortune 500, has not yet adopted this use case because of the constraints we have discussed.

Finally, companies are exploring infrastructure setup and use cases that don’t touch customer data. Suppose you’re leveraging third-party data to build specific capabilities, for example in financial services, say investment banking, which does extensive M&A work: can I leverage Generative AI to accelerate my ability to scout all the information and identify theses that align with my investment areas? Can I do all this more productively? Can I reduce the number of manual steps and thereby expand the horizon and scope of my research? I am starting to see these kinds of use cases regularly now, and I have no doubt that six months from now, when we have this conversation again, we will see many more. However, experiments that touch customer data will likely happen a little further down this maturity curve, given the risks associated with regulations and so on.

AIM: Do enterprises view security risks as a major hurdle in adopting advanced AI technologies like ChatGPT and APIs? With the apprehension surrounding potential security threats despite the promising capabilities, do you anticipate this fear diminishing soon or remaining a formidable challenge for organizations like Tredence? How tangible are these risks, and what strategies can be employed to ease concerns and encourage widespread adoption, unlocking the immense potential of these technologies?

Shub Bhowmick: The risk is there, and it will need to be managed and mitigated, but that should not come at the cost of not embracing the technology. Every industry is different. Verticals that deal with more consumer-level data must be more careful. Industries that relate to money or health, like financial services and healthcare, will be more careful because they are more regulated and the cost of making a mistake is very high. Yet even those clients are trying to figure out ways to adapt, and the good thing is that a lot of this pressure is now coming from the CEO’s office. It’s no longer just bottom-up experimentation; it is coming top-down, because CEOs want to be included. If you can leverage this new technology to gain, say, two or three basis points of improvement in your bottom line, and you are the one left behind while others in your industry have leapfrogged you, you don’t want to be in that boat. Given that commercial pressure and the reality of the benefits, these use cases are starting to deliver meaningful results. This is very much a part of our future. Ultimately, the risks will be managed, and they will not become fundamental impediments to moving in the right direction.

AIM: How would Gen AI revolutionize organizational operations if risks like data security or privacy weren’t a concern? From immediate changes to long-term evolution, how might this technology reshape workflows, like having an Iron Man-style copilot? Ultimately, how does Gen AI redefine operations, envisioning a futuristic organizational landscape?

Shub Bhowmick: I don’t think it’s too far off. Knowledge management is one use case: organizations are leveraging AI to take internal knowledge and create similar conversational AI formats on top of that information. Gen AI-based productivity improvement is another. Productivity rises because your engineering staff becomes more productive and can produce more with less, or because your business operations become more automated. Take this podcast, the conversation you and I are having right now: instead of you or me, we could have two humanoids having this conversation. All of those things will start happening, especially in operations that can be more automated using Generative AI. You will be able to read and quickly process a large, complicated legal PDF; we are seeing several use cases around this. Your dependence on paralegals to do all that heavy lifting will likely become more and more automated and streamlined.

Finally, there is a movie called Her, which came out in 2013 and which my co-founder recently mentioned at an event when we were speaking about Gen AI. The movie’s lead character, Theodore, played by Joaquin Phoenix, talks to an AI that sits in an iPhone-type device in his pocket. She is so real, and the voice in that device is so real, that he falls in love with her. The reciprocation, of course, is artificial, because she is only a machine. In the real world, we will eventually start seeing these kinds of B2C use cases; this trend will transition from Hollywood to our daily lives. I will carry one with me to help me plan my day, and so on. This is real. We should figure out a way to embrace it, learn from it, manage the risks, and not get in trouble (or fall in love).

Anshika Mathews
Anshika is an Associate Research Analyst working for the AIM Leaders Council. She holds a keen interest in technology, related policy-making, and its impact on society.