“What We Do Is We Take [a] Responsible AI Approach,” Says Truist’s Chandra Kapireddy

Truist Assist is designed to shift easily between AI-powered interactions and human care

As AI technologies become more advanced, the need for ethical oversight grows increasingly urgent. Firms that introduce AI without a well-established framework risk unintended biases, privacy issues, and accountability gaps that can undermine public trust. Truist, one of the largest financial institutions in the U.S., is addressing this threat head-on by infusing responsible AI principles into the core of its operations.

Generative AI has proved incredibly useful for automating repetitive tasks, deciphering intricate patterns in data, and improving customer interaction, but it is not perfect. AI models, most notably large language models (LLMs), can sometimes produce fabricated information with complete confidence, a severe threat in applications where accuracy is non-negotiable. This issue became evident when OpenAI’s ChatGPT was used in legal research and generated fictitious case law that lawyers unknowingly cited, leading to courtroom embarrassment and reputational damage. It underscores the importance of human oversight and rigorous validation when integrating AI into critical operations.

AI Hallucinations Under Scrutiny

During a podcast interview with MIT Sloan Management Review, Chandra Kapireddy, senior leader in the generative AI, machine learning, and analytics group at Truist, stated, “At Truist, what we do is we take [a] responsible AI approach.”

Instead of treating responsible AI as an afterthought, Truist approached the challenge proactively by enshrining ethical AI principles in a formal policy that shapes its AI development cycle. The organization weaves the essential aspects of responsible AI (privacy, explainability, transparency, accountability, safety, and security) into all AI initiatives to ensure that the technology aligns with ethical norms while producing meaningful outcomes.

The importance of this model is that AI systems, especially those serving the financial services sector, have significant impacts on people’s lives, ranging from credit assessments to fraud prevention. If an AI model is biased, not transparent, or misunderstands data, the implications can be catastrophic.

Seven-Step AI Risk Framework

The implementation of AI at the company adheres to a formal seven-step lifecycle model in which every step is specifically designed to reduce risk while maximizing value.

This cycle starts with ideation, when an AI application is suggested and analyzed for its business and ethical feasibility. Prior to proceeding, the company performs risk clearing and risk assessment, carefully determining if the technology adheres to compliance directives and social consequences. After clearing, projects go through development, testing, third-party validation, deployment, and continuous monitoring, creating a complete cycle that ensures accountability at all levels.
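The gated progression described above can be sketched as a simple state machine. This is an illustrative model only; the stage names follow the article, but the sign-off logic is a hypothetical simplification of how such a gate might work:

```python
from enum import Enum


class Stage(Enum):
    """The seven lifecycle stages named in Truist's model."""
    IDEATION = 1
    RISK_ASSESSMENT = 2
    DEVELOPMENT = 3
    TESTING = 4
    THIRD_PARTY_VALIDATION = 5
    DEPLOYMENT = 6
    MONITORING = 7


def advance(current: Stage, approved: bool) -> Stage:
    """Move to the next stage only with an explicit sign-off; otherwise hold.

    Monitoring is terminal: once deployed, a model stays under
    continuous review rather than exiting the lifecycle.
    """
    if not approved:
        return current
    members = list(Stage)
    idx = members.index(current)
    return members[min(idx + 1, len(members) - 1)]
```

The key design point is that no stage can be skipped: a project that fails its risk assessment simply stays put until the concern is resolved.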

Truist steers clear of the dangers many companies encounter when rolling out AI solutions hastily or with poor governance. AI failures like discriminatory hiring software, incorrectly interpreted financial transactions, or violations of data privacy highlight the importance of designing AI systems with trust as a primary consideration.

For instance, over the past few years, various companies have come under fire for rolling out AI models that inadvertently discriminated against specific groups based on poor training data.

Truist’s structured policy mitigates these risks by ensuring that AI models are rigorously tested before they go into production.

Transparency is also a focus area in Truist’s accountable AI strategy, with explainability critical for user trust. Most AI models are “black boxes,” where the rationale behind their choices is not known even to the people who designed them.

Truist focuses on AI explainability so that all decisions made by AI can be understood and explained by human stakeholders. This is especially important in banking and financial services, where customers need to know why an AI-driven system makes a decision regarding creditworthiness, loan approval, or fraud detection.

Another pillar of Truist’s approach is ongoing monitoring post-deployment. AI models do not remain static; they learn from the data they receive.

Without frequent audits and assessment, models can stray from their desired accuracy and produce unforeseeable results that could damage customers or compromise business integrity. Truist uses a human-in-the-loop strategy for AI so that there is human control, especially for decision-making where outputs are closely vetted.

As Kapireddy emphatically put it, “As an industry, when we show those GenAI-created apps or [GenAI-]powered apps, we would indicate that ‘Hey, you are interacting with an app that is AI,’ along with the warning of the AI hallucinations.”

Truist Assist Enhances Client Support

With that confidence, Truist has introduced generative AI into its business, especially in customer service and financial decision-making, with its AI tool Truist Assist, a digital assistant aimed at improving customer engagement through natural language processing and understanding.

This AI solution lets customers pose questions and get answers in real time, with easy deployment into Truist’s contact center, providing a seamless handoff from automated service to human service where necessary.

Truist Assist is designed to shift easily between AI-powered interactions and human care. If a customer question needs more substantial help, the AI platform transfers the conversation to a Truist teammate, making the process seamless without having the customer repeat details. This blend supports Truist’s T3 strategy (Technology + Touch = Trust), a hybrid model that balances AI efficiency with individualized human touch.

Sherry Graziano, Truist’s Head of Digital and Contact Center Banking, stated, “The Truist Assist launch is just another milestone on our path to co-creating a digital-first client experience, with the ability to add human touch.”

When a customer’s question demands a higher level of assistance, Truist’s self-service channel escalates from the virtual assistant to the firm’s contact center, precisely the kind of human supervision Kapireddy stressed throughout the interview.

Commitment to AI Ethics

According to Kapireddy, the firm is building applications in which the agent makes a deterministic API call to obtain a definite answer to a customer’s question, rather than relying on the model to generate one.

Kapireddy emphasizes retrieval-augmented generation (RAG) patterns, whereby AI contextualizes prompts by retrieving pertinent information before processing them via large language models (LLMs). Still, he stresses that responses produced by AI need to be strictly validated before being endorsed as credible outputs. Validation is crucial because AI models can hallucinate information, and errors in banking or financial services can have severe implications.
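The RAG pattern Kapireddy describes can be sketched in a few lines. This is a generic illustration, not Truist’s implementation: the toy word-overlap retriever stands in for a real vector search, and the prompt template is hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Document:
    """A retrievable knowledge-base snippet (e.g., a policy or FAQ entry)."""
    doc_id: str
    text: str


def retrieve(query: str, corpus: list[Document], top_k: int = 2) -> list[Document]:
    """Toy retriever: rank documents by word overlap with the query.
    A production system would use embeddings and a vector index instead."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda d: len(q_words & set(d.text.lower().split())),
        reverse=True,
    )
    return scored[:top_k]


def build_rag_prompt(query: str, corpus: list[Document]) -> str:
    """Assemble a prompt that grounds the LLM in retrieved context,
    so answers come from known sources rather than the model's memory."""
    context = "\n".join(f"[{d.doc_id}] {d.text}" for d in retrieve(query, corpus))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
```

Grounding the prompt this way narrows what the model can claim, but as the article notes, the generated answer still needs validation before it reaches a customer.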

Truist constructs its own guardrails on top of those that come with AI vendors, enhancing responsible use of AI. If a response fails confidence metrics, the reply is not just an estimate but rather an explicit communication that the query cannot be answered currently. This way, AI systems ensure accuracy and accountability over attempting to produce potentially erroneous information.
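A minimal sketch of that fallback behavior follows. The threshold value, confidence score, and fallback wording are all assumptions for illustration; the point is only that a low-confidence answer is replaced by an explicit refusal rather than a guess:

```python
FALLBACK = (
    "I can't answer that reliably right now; "
    "let me connect you with a teammate."
)


def guarded_reply(answer: str, confidence: float, threshold: float = 0.8) -> str:
    """Return the model's answer only when its confidence clears the
    threshold; otherwise state explicitly that the query can't be
    answered, instead of emitting a potentially erroneous estimate."""
    return answer if confidence >= threshold else FALLBACK
```

In a real deployment the confidence signal might come from retrieval scores, model log-probabilities, or a separate verifier model; the guardrail layer sits on top of whatever the vendor provides.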

Beyond the technical orchestration of AI models, Kapireddy highlights the vast potential AI has in financial services. Whether in revenue generation, risk reduction, teammate productivity, or operational efficiency, AI is already shaping the future of banking.

Upasana Banerjee
Upasana is a Content Strategist with AIM Research. Prior to her role at AIM, she worked as a journalist and social media editor, and holds a strong interest for global politics and international relations. Reach out to her at: upasana.banerjee@analyticsindiamag.com