As we witness an unprecedented integration of data science into diverse sectors, the implications for organizational strategy and operational innovation are profound. Before the advent of transformative tools like ChatGPT, many businesses found data science to be an esoteric discipline, challenging to decode and apply. However, the current landscape has shifted dramatically. Now, not only is data interpretation more accessible, but the surge in practical applications has also prompted a critical examination of how these technologies are harnessed to drive strategic value. This discussion aims to unpack the phases of adoption—exploration, exploitation, and expansion—and to outline effective strategies for integrating AI in ways that align with and advance broader business goals.
The AIM Leaders Council hosted a roundtable session on “Navigating the Gen AI Adoption Cycle: Organisation’s Strategic Positioning” with leaders: Yogananda Domlur Seetharama, Director of Data Science at Walmart Global Tech; Kalyana Bedhu, AI/ML Leader at Fannie Mae; Avijit Chatterjee, Head of AI/ML and NextGen Analytics at Memorial Sloan Kettering Cancer Center; Ram Ramanan, Executive Vice President, Engineering at Rondo Energy; and Aravind Peddiboyina, AI Innovation & Global Analytics Delivery Leader at Kimberly-Clark.
Advancing Data Science Integration and Strategic Response in Non-Technical Sectors
Before ChatGPT, businesses that wanted to use data science capabilities often found them difficult to interpret; the data science team first had to delineate and explain how metrics of interest could be streamlined before moving forward. What ChatGPT enabled was a clearer view of the technology’s impact on the world. Consequently, businesses began to see more industry use cases they could adopt within their own operations. Now the situation has reversed: we are inundated with use cases that need immediate attention.
The influx of these use cases prompts a need for strategic responses and effective integration of data science within sectors not traditionally focused on tech. It’s crucial for these industries to understand the scope and scale of implementing data science solutions to ensure they align with broader business objectives. I’d recommend a few key strategies to facilitate this integration.
Prioritize the scalability of data science projects. Start small: test hypotheses and measure impact in a controlled environment, then adjust strategies based on those results before scaling up. This iterative process helps mitigate risk and fosters a culture of continuous improvement.
Collaboration between data scientists and domain experts within the company is vital. This synergy ensures that the solutions developed are not only technically sound but also practically applicable to our specific industry challenges. Regular workshops and joint project teams help maintain this collaborative environment.
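The start-small, measure-then-scale approach described above can be made concrete with a simple pilot-measurement harness. A minimal sketch follows; the metric, sample values, and scale-up threshold are illustrative assumptions, not anything prescribed by the speakers.

```python
# Illustrative sketch (metric names, data, and threshold are hypothetical):
# compare a pilot (treatment) group against a control group on a business
# metric before deciding whether to scale a data science project up.

from statistics import mean

def pilot_lift(control: list[float], treatment: list[float]) -> dict:
    """Summarize a small controlled pilot: absolute and relative lift."""
    lift = mean(treatment) - mean(control)
    return {
        "control_mean": mean(control),
        "treatment_mean": mean(treatment),
        "abs_lift": lift,
        "rel_lift": lift / mean(control),
    }

# Example: conversion-rate samples from a small controlled rollout.
control = [0.041, 0.039, 0.043, 0.040, 0.042]
treatment = [0.047, 0.049, 0.046, 0.050, 0.048]
summary = pilot_lift(control, treatment)

# Scale up only if the pilot shows a clear relative improvement.
should_scale = summary["rel_lift"] > 0.10
```

In practice the decision rule would also account for statistical uncertainty and cost, but the shape of the loop — pilot, measure, compare, then scale — is the same.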
– Yogananda Domlur Seetharama, Director of Data Science at Walmart Global Tech
Strategic Phases of Adoption: Exploration, Exploitation, and Expansion in Business Innovation
The great question here concerns the adoption across three phases. Let us focus on these phases, as the approach to each is distinct. In the exploration phase, you want more people to try it out. In the exploitation phase, we aim to understand the brakes and guardrails—the controls that prevent us from making mistakes. When a mistake is made, is there a kill switch that can remedy the entire situation? Can you recover? This is crucial because you’re entering adoption cautiously optimistic. Here, you’re not measured by the return on investment or the value, which are indeed important, but by your ability to avoid mistakes, because one error could lead to a complete rollback, especially in finance. Then, in the expansion phase, you start measuring the business value and the ROI.
Throughout, keep the end goal in mind: you’re conducting all these explorations for some business value. This process involves exploring and motivating people to think about new use cases and teaching them to adopt an experimentation mindset. It doesn’t come naturally to those who don’t regularly write prompts, so it’s about laying a general foundation on how to think about how everyone can participate in AI and how AI is for everyone. This is probably your first major milestone. Then, focus on how to derive value from these initiatives. Remember, a car runs faster not only because it has accelerators but because it has brakes. We need to understand this and probably implement it at an organizational level.
Disclaimer: All opinions are mine and not my employer’s.
– Kalyana Bedhu, AI/ML Leader at Fannie Mae
Enhancing Data Management and AI Integration in Healthcare and Business Applications
There is the part about the Office 365 Copilot, which 300 employees are testing. It’s essentially a black-box experience provided by Microsoft and OpenAI. You’re in Teams, you’ve just had a recording, and you get the meeting minutes; or you use Copilot within MS Word or Outlook to assist in composing a draft of a document or email. You ask it to give you summaries, to suggest next steps, and so on. When it comes to clinical use, consider our data curation project, where we have about 87,000 genomic-profiled patients whose data we are curating longitudinally. This means looking into the radiology or pathology reports to extract many hidden “crown jewel” insights from the text and structure them into specific fields. We currently do the curation manually using medical professionals, but the goal is to use an AI-assisted pipeline to accelerate the process.
For a patient-facing use case, we have a very popular app that cancer patients use to search for herbs for pain management, sleep disorders, and other issues. We are looking to add a Q&A interface on top of it, so that the results are highly reliable because they come directly from the expert content provided to the RAG pipeline. The results are reviewed by humans in multiple ways, addressing the questions patients might ask, to ensure the system is not veering off track; this is a continuous process that must be monitored. In machine learning models we often talk about drift, and it becomes even more important in GenAI models because of the potential risks that can enter the situation, affecting patients who act on the content you are generating.
– Avijit Chatterjee, Head of AI/ML and NextGen Analytics at Memorial Sloan Kettering Cancer Center
Engineering the Future: The Crucial Role of Validation in Gen AI and Product Development
It’s a little bit easier in the engineering case because, even if you’re not using generative AI, the key steps involve validating what you’ve got. Does this really make sense? Does this really work? There’s definitely a test phase, or a validation phase, through which whatever is created must pass. It’s got to undergo some simple sniff tests to ensure that things make sense. So even if generative AI wrote the code or we created a new algorithm or a new machine learning model, it must go through the same set of tests that we typically use. So, I don’t really see that as a big challenge.
In response to an earlier question about how I will measure success, I would say that if more people begin to adopt this and their productivity increases, that would be the best metric for me. If the product cycle time is now shorter, if people are testing critical cases, and if we are able to make products that work the first time around, that to me is probably the best way we’ll see it. And faster development, higher reliability—these might be a bit harder to recognize and might take a bit longer, but I believe they’ll be evident. I will measure it more from the standpoint of productivity, with people being more productive.
However, I don’t really see a concern in terms of adoption or worries that things could go wrong. That’s always a concern that people are going to have, but anyway, you need to validate the models, so it’ll sort itself out.
– Ram Ramanan, Executive Vice President, Engineering at Rondo Energy
Defining the Boundaries: Platform Capabilities vs. Product Features in Gen AI
Businesses are eager to leverage innovative technology to solve problems and increase productivity. However, one of the major challenges we face is defining, in the realm of Gen AI, what differentiates a product from a platform capability. This distinction is crucial when it comes to democratization, prioritization, and investment. To build good, productive Gen AI products, we need a strong platform to support them.
One of the current challenges for industries at this stage is to clearly define what constitutes platform capabilities at the enterprise level that every product can utilize, and what business needs and features are required for a product to solve a business problem. This is something that industries, as we mature and learn, will need to finalize and provide guidelines on. With traditional AI, we have many product teams driving commercial transformation, focusing on revenue growth management, supply chain automation, and domain-specific initiatives that solve a business problem or help with segment growth, and the platform clearly provides the infrastructure, frameworks, and data these models need.
When it comes to Gen AI at its current maturity, we see it addressing a few types of transformation pillars:
- Shrinking: Summarization, Extraction & Classification
- Translation: Changing from one format/language to another
- Expanding: Brainstorming, Synthesizing, Drafting
These transformation pillars can be argued both ways, depending on how organizations are structured. A significant challenge we see is determining how organizations have democratized the decision-making on prioritizing platform versus product capabilities. While business definitely backs these product features, the question remains whether it fully supports platform capabilities, considering the substantial upfront investments. There will be conversations and debates on which problem statements we can address with platform capabilities before building product capabilities. That said, as organizations learn and mature, we will see more alignment on how to execute these capabilities for a larger impact on the organization.
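The three pillars above map naturally onto reusable prompt templates that a shared platform layer could expose to product teams. The sketch below is a hypothetical illustration of that idea; the template wording and function names are assumptions, not an actual Kimberly-Clark framework.

```python
# Illustrative sketch (template wording is hypothetical): the three Gen AI
# transformation pillars expressed as platform-level prompt templates that
# individual product teams could instantiate for their own use cases.

PILLAR_TEMPLATES = {
    "shrinking":   "Summarize the following text and extract key fields:\n{text}",
    "translation": "Convert the following content from {source_format} to {target_format}:\n{text}",
    "expanding":   "Brainstorm and draft an expanded version of this outline:\n{text}",
}

def build_prompt(pillar: str, **kwargs: str) -> str:
    """Instantiate the prompt template for one transformation pillar."""
    return PILLAR_TEMPLATES[pillar].format(**kwargs)

prompt = build_prompt("translation", source_format="CSV",
                      target_format="JSON", text="id,name\n1,Ana")
```

Framing the pillars this way makes the platform-versus-product boundary concrete: the templates (and the model infrastructure behind them) are platform capabilities, while the choice of pillar, inputs, and downstream workflow belongs to each product.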
– Aravind Peddiboyina, AI Innovation & Global Analytics Delivery Leader at Kimberly-Clark