Leveraging Large Language Models (LLMs) in enterprises offers transformative possibilities. LLMs such as GPT-3 can automate customer support through chatbots, streamline content creation, and analyze vast datasets for valuable insights. They enable personalization, aid legal compliance, and enhance knowledge management, while also powering predictive analytics, improving employee training, and automating data entry. Natural language interfaces and content summarization make applications more user-friendly, and LLM-driven market intelligence keeps businesses competitive. To make the most of LLMs, companies must invest in infrastructure, data security, and ethical safeguards, and integrate the models into existing workflows so they align with business objectives, driving innovation and efficiency across functions.
To explore this further, we held a roundtable discussion on the topic “How to Leverage LLMs in Enterprises”. The session was moderated by Lavi Nigam, Lead Data Scientist at Google, and featured panelists Raghavendra Prasad Munikrishna, VP – Data Engineering & Analytics at JP Morgan; Samarth Gupta, Vice President – Data Engineering & Analytics at Royal Bank of Scotland; Muthu Chandra, Chief Data Scientist at Ascendion; Sanjay Thawakar, Senior Vice President & Head, AI Works & BPMA at Max Life Insurance; and Nirupam Srivastava, Vice President – CX/AI, Growth, Legal, Innovation and Startups at Hero Enterprise.
Navigating the Confluence of Enterprise and Open Source Technologies
A lot is happening in the enterprise world, and there is equally a great deal of excitement and new activity in the open-source world, even though open source always poses challenges around data, security, and related concerns from an enterprise perspective. While there is considerable hype surrounding this technology, it is not unfounded: the field holds significant potential, and we have witnessed crucial developments. However, it is essential to remain grounded and ensure that our actions align with our enterprise objectives. It is also vital to understand our strategies concerning both open-source and enterprise solutions. Eventually, there will be a mix of enterprise offerings from various vendors and open-source options, so this is the opportune moment for all teams to coordinate and formulate their strategies.
– Lavi Nigam, Lead Data Scientist at Google
The Rise of Gen AI and Its Impact on Problem Solving
One key benefit of open source is the control it affords. There is no vendor dependency concerning algorithms; you can fine-tune and customize them based on your specific use cases. This level of control is highly advantageous.
From a security perspective, especially in the banking and financial sector, all operations occur within our secure ecosystem and guardrails. This instills trust because any issues that arise during a POC do not impact external data. This aspect is crucial in the BFSI vertical.
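The emphasis on operating within secure guardrails can be made concrete with a pre-processing step that scrubs obvious sensitive data from a prompt before it leaves the enterprise boundary. The patterns and the `redact_pii` helper below are illustrative assumptions only, a minimal sketch rather than a production-grade scrubber; a real BFSI deployment would rely on a vetted PII-detection service.

```python
import re

# Illustrative patterns only -- a real deployment would use a vetted
# PII-detection service, not a handful of regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[-\s]?){3}\d{4}\b"),
}

def redact_pii(prompt: str) -> str:
    """Replace matches of each pattern with a [LABEL] placeholder
    so the redacted prompt can safely cross the enterprise boundary."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

The point of such a guardrail is that even if a POC goes wrong downstream, no customer identifiers ever reached the external service.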
Moreover, customization and adaptability play a significant role. Most use cases require some level of customization; they can’t simply be taken out of the box and implemented as-is. Enterprise solutions often necessitate partnership and adjustments to fit unique use cases. This can result in reduced control, unlike open source, where you have greater control.
Regarding security vulnerabilities, open source models benefit from a collaborative community working to address issues before they impact enterprise users. This leads to quicker bug fixes and vulnerability mitigation compared to enterprise solutions.
It’s worth noting that enterprise solutions have their advantages, such as trusted brands and long-term support relationships. However, when it comes to control, stability, trustworthiness, and transparency of data and models, open source often holds a slight edge over some enterprise versions.
– Raghavendra Prasad Munikrishna, VP – Data Engineering & Analytics at JP Morgan
Unlocking the Potential of Gen AI: Open Source Integration and Modular Approaches
If we liken Gen AI to adding a neocortex to our AI engine, it’s akin to welcoming a child who has grown up outside our enterprise, in the vast realm of the internet. This child brings knowledge, some of which may not align with our beliefs, and some that require unlearning and relearning. This is where dedicated APIs come into play, although they are currently in the pilot stage. In our efforts to implement LLMs, we are taking open-source models and retraining them with our data. However, the quality of data, not just labeled data, is a well-recognized challenge in training.
To address this, dedicated APIs and controlled access frameworks are emerging. These frameworks focus on how to retrain open source within our systems, emphasizing networking, security, and API setup rather than the actual training process. Some areas where open source Gen AI can be immediately applied include content creation, marketing campaigns, and customer interactions. It’s about plug-and-play integration in these domains.
The challenge lies in the fact that Gen AI differs from traditional ML or AI, as data is intertwined with the framework. One potential approach could be separating the framework, algorithms, and data. Instead of providing the entire package, organizations could offer the framework and various methodologies, allowing users to choose how to adopt it. Then, they could focus on data hydration, determining whether users need training data, a hybrid of their data and provider data, or domain-specific training data.
Moreover, organizations offering open source products could segment their training data by domain. This means having different versions of the model trained on distinct datasets, such as banking data, Google Scholar data, or open media data. This approach simplifies the adoption of open source.
Currently, adopting open source mainly involves setting up the right access controls. However, the proposed approach suggests a more modular way of working with providers. Organizations could collaborate to set up the necessary Gen AI modules. Presently, Gen AI offerings are often monolithic, bundling everything together, without modularity. Breaking this down into separate components could offer a more flexible and customizable solution for organizations working with providers.
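The modular approach described above, separating the framework, the model, and the data hydration, can be sketched in a few lines. All names here (`GenAIPipeline`, the stand-in model and corpus functions) are hypothetical illustrations of the idea, not a real provider offering.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class GenAIPipeline:
    """Framework component: composes a swappable model with a
    swappable data-hydration component instead of bundling them."""
    model: Callable[[str], str]           # e.g. an open-source LLM
    retrieve: Callable[[str], List[str]]  # e.g. a domain-specific corpus

    def answer(self, question: str) -> str:
        context = " ".join(self.retrieve(question))
        return self.model(f"Context: {context}\nQuestion: {question}")

# Stand-in components for illustration; an organization could plug in
# a model trained on banking data, Google Scholar data, etc.
def echo_model(prompt: str) -> str:
    return f"model saw: {prompt}"

def banking_corpus(query: str) -> List[str]:
    return ["Domain snippet relevant to " + query]

pipeline = GenAIPipeline(model=echo_model, retrieve=banking_corpus)
```

Because each component sits behind its own interface, an enterprise could adopt the framework as-is, swap in a domain-trained model, and hydrate it with its own data, rather than taking a monolithic bundle.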
– Samarth Gupta, Vice President – Data Engineering & Analytics at Royal Bank of Scotland
Unlocking the Potential of Open Source Language Models
Based on our experience, we have experimented with Bard and, more recently, Llama. Bard, the Google model, has notably improved our customer experience, while other open source language models are still evolving. Bard’s standout feature is its ability to capture bidirectional context effectively, which is particularly valuable for unlabeled text data, a common data challenge our customers face across industries. We have found that using Bard alongside online NLP services lets us put practical solutions into production rather than engaging in extensive fine-tuning. While some customers initially explored heavy fine-tuning, Bard’s maturity has made them more comfortable using it as-is. This sums up our experience with open source language models.
– Muthu Chandra, Chief Data Scientist at Ascendion
Strategic Considerations for Choosing Between Open Source and Enterprise-Level LLM Models
Two essential considerations are at play here. Firstly, the decision between adopting an open-source LLM model or an enterprise-level open LLM model hinges on your specific use case. It’s crucial to identify who your intended audience is for this asset. For instance, if it’s intended for our internal salesforce spread across the country, we need to assess whether an enterprise-grade solution is suitable. We must delve into scalability, security considerations, and tolerance levels for the answers it generates.
Secondly, our strategy is heavily influenced by factors such as the use case, scalability requirements, the audience we’re exposing the asset to, and the acceptable margin of error in the responses. Based on these factors, our current inclination is towards using APIs from major providers like Azure or OpenAI and fine-tuning them to our needs. Alternatively, we may consider intermediary solutions or utilize services like cognitive search, vector search, or semantic search. Currently, we’re in the exploration and experimentation phase to determine what works best. It’s worth noting that some APIs may not yet be available in the India region, and billing considerations also vary by geography.
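The vector/semantic search option mentioned above boils down to embedding documents and queries, then ranking by similarity. As a minimal sketch, the toy bag-of-words `embed` below stands in for a real embedding model (a production system would call an embedding API or a vector database); only the cosine-ranking structure is the point.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a stand-in for a real
    embedding model such as those behind cognitive-search services."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def semantic_search(query: str, documents: list) -> str:
    """Return the document ranked most similar to the query."""
    q = embed(query)
    return max(documents, key=lambda d: cosine(q, embed(d)))
```

Swapping the toy `embed` for a provider embedding endpoint turns the same three functions into the retrieval layer of a grounded question-answering asset.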
Ultimately, our approach is open-minded, considering open source where it makes sense, especially since we host everything within our secure environment. Our choices are tethered to specific use cases and cost considerations.
– Sanjay Thawakar, Senior Vice President & Head, AI Works & BPMA at Max Life Insurance
Enterprise Considerations: Performance, Transparency, and Service Risks
From an enterprise perspective, whether open source or closed-source proprietary platforms, the primary concern is performance. Currently, objective benchmarks are not widely available because enterprises don’t typically invest extensive time in platform testing. However, overall, performance tends to be nearly comparable, whether it’s open source or a closed, proprietary platform like OpenAI.
Secondly, open source software provides a level of comfort, especially in regulated industries like banking and insurance. Regulatory oversight is essential, and open source allows for code inspection, even though not everyone may actually review the source code. Nonetheless, it offers a sense of transparency that regulators and many stakeholders appreciate, compared to closed-source alternatives where the inner workings are concealed.
The third element revolves around servicing and the risks associated with these platforms. Whether a big-name company or an open-source community stands behind the software, the risk picture is roughly neutral: contract agreements, ownership of liability, and disclaimers play a significant role. These agreements often include disclaimers stating that the software is experimental and that users must navigate the risks themselves. This can be mitigated by engaging reputable service providers, such as IBM, which I worked with previously.
– Nirupam Srivastava, Vice President – CX/AI, Growth, Legal, Innovation and Startups at Hero Enterprise.
Gen AI is in its early stages, but it holds promise because it aims to replicate the human neocortex’s ability to create new knowledge from previously acquired information. Challenges around data centers, infrastructure, and computational power remain, but progress is being made, including advances in quantum computing.
– Samarth Gupta, Vice President – Data Engineering & Analytics at Royal Bank of Scotland