In the rapidly evolving landscape of Artificial Intelligence (AI), financial institutions and other industries are at a pivotal juncture, grappling with the integration of advanced technologies into their operations. This discussion delves into the intricate balance between innovation and the ethical implications of AI, particularly in the realm of governance. Our panel of experts, drawn from various sectors, brings forth a nuanced exploration of how organizations are navigating the complexities associated with leveraging generative AI technologies.
Our panel includes distinguished speakers from diverse sectors, each bringing a wealth of knowledge and experience to the table. The session is moderated by Jaya Murugan Muthu Manickam, Senior Architect at Adobe. From the banking industry, Sitaram Tadepalli, Vice President, Machine Learning Systems at DBS Bank, underscores the significance of implementing rigorous checks and balances in the use of AI models, emphasizing the development of comprehensive guardrails throughout the data lifecycle. Vishal Nagpal, Data Science and Analytics Leader at Amazon, explores the strategic application of generative AI to bolster consumer services and operational efficiency. From the healthcare sector, Dr. Santosh Karthikeyan Viswanathan, Global Technical Director at AstraZeneca, addresses the challenges and opportunities presented by AI in clinical data analysis, highlighting the importance of thorough assessments and adherence to regulations. Finally, Raj Bhatt, CEO at Knowledge Foundry, discusses the transformative ‘Plus Factor’ of generative AI, focusing on its impact on analytics industries and consumer expectations.
Together, these perspectives offer a rich tapestry of insights into the ongoing dialogue around AI governance and ethics, spotlighting the innovative strategies and current use cases that are shaping the future of technology within and beyond financial institutions.
Financial Institution Inquiries: Addressing Governance and Ethics in AI Development
Especially in industries like banking and other sectors bound by legal jurisdictions and governance frameworks, governance plays a crucial role. We believe in implementing thorough checks and balances when utilizing these models, and we have established guardrails during both the data ingestion and monitoring phases to ensure compliance and ethical handling.
When we talk about building platforms, we aim to create an open environment where everyone can utilize their preferred models and approaches for problem-solving or application development. We’ve developed a central guardrail framework to ensure that all problems are evaluated with respect to the models being used. We prioritize basic checks to ensure ethical considerations are met.
Regarding the applications we develop, the majority are internal-facing. We exercise caution in deploying applications or chatbots that interact directly with customers using LLMs, given the technology's current maturity level. Our focus lies in internal tools, such as enterprise search, which facilitates intelligent document searching across platforms like Confluence and SharePoint. This ensures that different user groups can access relevant information securely, with data privacy and security measures in place.
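A permission-aware retrieval step of this kind can be sketched in a few lines. The corpus, group names, and matching logic below are all hypothetical illustrations, not DBS's actual implementation; a real enterprise search system would use a proper index and an identity provider. The point is only that access filtering happens before any document reaches the model:

```python
from dataclasses import dataclass

@dataclass
class Document:
    title: str
    text: str
    allowed_groups: set  # groups permitted to see this document

# Hypothetical in-memory corpus standing in for Confluence/SharePoint content.
CORPUS = [
    Document("Onboarding guide", "How to set up your laptop", {"all-staff"}),
    Document("Credit risk policy", "Internal risk thresholds", {"risk-team"}),
]

def search(query: str, user_groups: set) -> list:
    """Return documents the user may see that mention the query terms.

    Access control is applied BEFORE matching, so restricted content
    never enters the candidate set passed downstream to an LLM.
    """
    visible = [d for d in CORPUS if d.allowed_groups & user_groups]
    terms = query.lower().split()
    return [d for d in visible if any(t in d.text.lower() for t in terms)]
```

Filtering before retrieval, rather than redacting afterwards, keeps restricted text out of the prompt entirely.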
Our central platform integrates with the access control layer to enforce these checks and balances. For instance, while leaders may access documents within their division, they are restricted from accessing anything beyond their authorized scope. Continuous monitoring is crucial post-implementation. We assess LLM performance using metrics like correctness, relevance, and groundedness. However, evaluating an LLM's performance differs from evaluating traditional ML models due to the unstructured nature of text and the diverse origins of queries.
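The metrics named above can be approximated with simple lexical proxies. The functions below are a rough sketch, not the bank's actual evaluation pipeline; production setups typically use embedding similarity or an LLM-as-judge rather than token overlap, but the overlap version makes the idea concrete:

```python
def _tokens(text: str) -> set:
    """Crude tokenizer: lowercase, whitespace-split."""
    return set(text.lower().split())

def relevance(answer: str, question: str) -> float:
    """Fraction of question terms the answer addresses (lexical proxy)."""
    q = _tokens(question)
    return len(q & _tokens(answer)) / len(q) if q else 0.0

def groundedness(answer: str, sources: list) -> float:
    """Fraction of answer terms supported by the retrieved sources."""
    a = _tokens(answer)
    src = set().union(*(_tokens(s) for s in sources)) if sources else set()
    return len(a & src) / len(a) if a else 0.0
```

An answer whose every term appears in the retrieved sources scores a groundedness of 1.0; an answer with no lexical support scores 0.0, flagging a possible hallucination for review.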
LLM serves as a valuable tool for document summarization and content ingestion for our applications. It streamlines tasks like updating documents based on new regulations, saving significant time and effort. We’ve deployed applications to automate these tasks, reducing manual workload and providing assistance to human operators. There’s a feedback loop established to validate the quality of LLM’s output, ensuring it aligns with expectations and regulatory requirements.
In essence, our approach emphasizes governance, thorough monitoring, and continuous improvement to leverage LLM effectively while mitigating risks associated with its use.
– Sitaram Tadepalli, Vice President, Machine Learning Systems at DBS Bank
Employing Generative AI for Consumer Needs and Product Suitability: Strategies and Current Use Cases Explored
I believe I can segment the applications of Generative AI into various business areas. Some of these applications are related to specific team activities, while others pertain to broader business functions such as product, marketing and operations. For instance, on the Amazon India Marketplace, we have already implemented review summarization. This was identified as a low-hanging opportunity for us, given that LLM applications excel in summarization and text generation. These are the primary areas we aimed to leverage initially, while options like agent interactions are deferred for later consideration.
From a marketing perspective, we are exploring the integration of summarization and multimodality. This involves transforming textual descriptions into video summaries of products. Moreover, we are utilizing text generation for language translation in marketing materials, facilitating communication across diverse languages.
In the realm of risk management, LLM applications have proven beneficial, particularly in handling policy-related content and operational communications. These applications help manage the vast amounts of textual information and facilitate more efficient responses to seller inquiries across multiple channels.
The goal is to automate decision-making processes in these interactions. For example, decisions made by operations investigators could potentially be automated using LLMs. The concept involves a “human in the loop” (HITL) approach, where a portion of decisions is automated, and human investigators validate the accuracy of these decisions. This strategy, however, presents challenges. We are still determining the best methods for processing unstructured data, establishing quick feedback mechanisms, and managing the costs associated with frequent model tuning. The balance between decision accuracy and operational efficiency remains a critical consideration as we continue to refine our approach.
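The confidence-based routing behind a HITL setup like this can be sketched briefly. The threshold, the model interface, and the case format below are illustrative assumptions, not Amazon's actual system; the shape of the split is the point:

```python
def route_decisions(cases, model, threshold=0.9):
    """Split cases into auto-decided and human-review queues.

    `model` is any callable returning (decision, confidence); only
    confident decisions are automated, the rest go to investigators
    for validation, closing the feedback loop.
    """
    automated, review_queue = [], []
    for case in cases:
        decision, confidence = model(case)
        if confidence >= threshold:
            automated.append((case, decision))
        else:
            review_queue.append(case)
    return automated, review_queue
```

Raising the threshold trades automation rate for accuracy, which is exactly the balance between decision accuracy and operational efficiency described above.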
– Vishal Nagpal, Data Science and Analytics Leader at Amazon
Leveraging Generative AI for Decision-Making in Clinical Data Analysis: Challenges and Opportunities
We were quite surprised by the intensity of interest in AI use cases within the industry. However, when asked if we could implement them immediately, the quick answer is no. There is a series of assessments we need to go through, even at the POC phase. For instance, we cannot immediately use some of the publicly available LLMs, so we must take a meticulous approach and consider what data we will be handling during the process. All pharmaceutical companies must adhere to strict regulations.
With that said, we need to ensure that the data we use are handled responsibly and ethically. Enterprise AI governance plays a key role here: it monitors usage, helps establish policies and processes, and evaluates whether we can proceed with a proposed solution or platform. Sitaram spoke about the platform, which is quite important but often forgotten; people get carried away with the term AI. I firmly believe that strong data engineering and the necessary infrastructure are essential foundations for AI.
Switching to generative AI, prompt-based outputs should be reviewed by humans, with a ‘human in the loop’ at every step of the process. We should also ensure we have a data retention policy defined within the organization.
We are still navigating how to leverage this technology, but given the intensity of interest, I hope all leaders consider this a scorecard item when it comes to leveraging generative AI. I hope this progresses quickly, and we can take it forward.
Disclaimer: All views expressed by Santosh are personal and should not be considered as attributable to his employer.
– Dr. Santosh Karthikeyan Viswanathan, Global Technical Director at AstraZeneca
Unlocking the Power of the ‘Plus Factor’: Gen AI’s Impact on Analytics Industries and Consumer Expectations
I run Knowledge Foundry; we are a data science and data engineering services firm. There’s a lot of buzz around GPT-3 among Fortune 2000 companies. They are indeed experimenting with pilots, and each company probably has tens of different initiatives in progress. What I’m observing is that not many of those initiatives are reaching production, and clients are becoming a bit worried about it. There’s a flurry of activity across various parts of organizations, but not much of it is transitioning into production.
I’ll tell you about the areas where Generative AI initiatives are making strides in production. In creative fields, there’s a lot happening, whether it’s generating text for storylines or creating images for purely creative industries. However, in mainstream industries, there are a few themes that are succeeding in production. One such theme is extracting knowledge from document repositories, especially when an enterprise has a vast knowledge repository spread across various parts of the organization. LLM serves as a good knowledge management tool for extracting information from these repositories. However, the cost of running a vector database to store all these embeddings is still a significant hurdle. Eventually, this cost must decrease for these generative methods to become widely used.
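At its core, the vector-database lookup whose cost is at issue here is a nearest-neighbour search over embeddings. A minimal cosine-similarity version, assuming precomputed embeddings, can be sketched as follows; the toy two-dimensional vectors stand in for real embedding output, and a production system would use an approximate-nearest-neighbour index rather than a linear scan:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, index, k=2):
    """index: list of (doc_id, embedding); return the k most similar doc ids."""
    scored = sorted(index, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]
```

The retrieved documents are then passed to the LLM as context; the recurring cost lies in storing and serving the embeddings for the whole repository.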
Similarly, document writing from complex sources, such as summarizing study results in clinical trials, is gaining traction, particularly in the pharma space. Although these initiatives are still in the pilot phase, some companies are beginning to use certain modules in production. Some parts of these initiatives are being adopted by various pharma companies and clinical research organizations.
There are overarching themes, and I believe the cost of generative AI must decrease for many of these pilots to become suitable for production. In the early phase of 2023, everyone was excited about the potential of LLM. It was seen as capable of wonders. While it’s undoubtedly useful as a coding assistant and for generating content, the adoption of large-scale enterprise use cases will increase once the cost decreases. This includes not just the cost of LLM itself but also the associated vector databases and other components.
Moreover, we’ll likely see some off-the-shelf solutions emerging for specific areas of application. For instance, there’s already a focus on using LLM for contract management, with specific players developing products tailored to this end-use case. These products will incorporate LLM models and have a ready-made architecture. Some of these solutions will become available as off-the-shelf products in 2024 and 2025.
Simultaneously, as the cost of LLM and vector databases decreases, enterprises will find it more cost-effective to develop these solutions in-house. This, hopefully, will lead to more ideas transitioning into production.
– Raj Bhatt, CEO at Knowledge Foundry