Building Smarter and Safer Systems with Responsible AI Principles with Deepak Dube

Responsible AI is the glue that binds humans and machines in an environment because, for the foreseeable future, we will have human colleagues and digital colleagues coexist in an enterprise in the workplace.

As artificial intelligence continues to revolutionize industries, the conversation around Responsible AI has never been more critical. At its core, Responsible AI ensures that systems are designed, developed, and deployed in ways that are ethical, transparent, and aligned with societal values. From addressing bias to ensuring fairness and accountability, the framework of Responsible AI safeguards the delicate balance between innovation and trust. In a rapidly evolving landscape, businesses and individuals must navigate these challenges to create AI systems that are not just powerful but also principled.

This episode features Deepak Dube, a distinguished voice in the AI industry and a proponent of Responsible AI. Deepak is the founder and CEO of EazyML and Datanomers, where he has pioneered transformative solutions in fraud management and FinTech automation. Among his notable achievements is the creation of a fraud management system that has been deployed across major global telecom operators. Most recently, he led the development of the FinTech Risk Profiler, an intelligent machine-based solution that went from concept to customer deployment in just six months.

Previously, Deepak served as the CTO of IPsoft, where he led the creation of Amelia, a groundbreaking cognitive agent. His extensive experience also includes his tenure as a distinguished member of the technical staff at AT&T Bell Labs and as an adjunct faculty member of computer science at New York University. Deepak holds a Ph.D. in computer science from the Illinois Institute of Technology and brings decades of experience in AI innovation, automation, and strategic development.

In this conversation, Deepak delves into the nuances of Responsible AI, sharing insights on transparency, ethical design, and the evolving role of generative and agentic AI in driving transformative solutions. His expertise offers a unique perspective on how businesses can adopt AI responsibly while achieving measurable success.

Key Highlights 

  • Importance of Responsible AI: Deepak emphasized why ethical frameworks in AI are critical for trust, accountability, and long-term success across industries.
  • Transparency in AI Models: He discussed the challenges of achieving transparency in machine learning systems and how it impacts regulatory compliance and business credibility.
  • Generative and Agentic AI Trends: He shared insights on emerging trends in generative AI and agentic AI, exploring their potential applications and risks.
  • Balancing ROI with Ethics: He highlighted how organizations can balance ethical AI practices with achieving measurable returns on investment.
  • EazyML’s Approach: Deepak provided an overview of how EazyML integrates responsible AI principles into its machine learning solutions, ensuring fairness and adaptability.

Kashyap: Hello and welcome, everyone, to the next episode of the AIM Media House podcast Simulated Reality. Today we have with us the founder and CEO of EazyML, Deepak Dube. Hi Deepak. How are you doing today?

Deepak: I’m fine Kashyap, how are you?

Kashyap: I’m doing fantastic. Thank you for making the time to join us, Deepak. We are going to be talking about responsible AI, and when you and your team reached out to me to discuss this topic, one of the first things we agreed on is that it’s very vast.

But before we dive into the specifics, I want to understand why you wanted to talk about responsibility. What drives your passion for this topic?

Deepak: Because the common thinking is that a statistical model, once it has been developed as a robust model, is good enough for decision AI. And that's a misconception. Unless it has a supporting cast, a halo around it of elements of responsible AI, the model itself is not sufficient in terms of delivering the enterprise's business objectives and the promised ROI.

I'll elaborate a little bit, just to exemplify what I said about responsible AI as a supporting cast of features and functions around the statistical model for decision AI. Most AI projects fail. Why? Because of a lack of trust. The subject matter expert sometimes disagrees with the prediction, even from the best of models, and then starts to second-guess the model. Why? Because it's a black box. It's making these predictions, but they don't know the reasons behind a prediction they disagree with, so they cannot reconcile the differences.

And so all the promises of driving efficiencies in a business process, reducing cost, delivering business objectives, and achieving the promised ROI are stillborn. So it's important to have all the features and functions, in particular, for the example of trust that I just gave, transparency into the core statistical model, which is delivered by responsible AI or explainable AI; that's one of the constituents. In so many words, unless you have responsible AI, there is no trust in what decision AI is saying, and hence most projects pay the price for that.

Kashyap: I think one of the things you're highlighting is establishing trust with the client. While there are various ways to do that, bringing more visibility into the model's efficiency and the parameters behind the AI's decision-making is what helps the right stakeholders build trust around it. According to you, what constitutes responsible AI? You just mentioned transparency and explainability as parameters that are important and critical to responsible AI, but it's a really vast topic. How do you approach responsible AI holistically?

Deepak: Transparency, which is explainable AI for models or augmented intelligence for data, is one such key component. Another key component is accountability. The model must be held accountable for accuracy. The fact that it's performing today doesn't mean it will be performing eight months from now; as the dynamic business environment changes, the model must be a good citizen and alert the users to that possibility. Accountability: the model must be held accountable for its performance, for its accuracy.
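To make that accountability idea concrete, here is a minimal sketch of what monitoring a deployed model might look like: keep scoring it on fresh labelled outcomes and alert when accuracy slips below the level at which it was accepted. The baseline, margin, and alerting mechanism are illustrative assumptions, not a description of EazyML's implementation.

```python
# Minimal sketch of model accountability: re-score the deployed model on recent
# labelled outcomes and raise a flag when accuracy degrades. Baseline and margin
# values are illustrative assumptions.
from sklearn.metrics import accuracy_score

BASELINE_ACCURACY = 0.92   # accuracy the model was accepted at
ALERT_MARGIN = 0.05        # tolerated degradation before alerting users

def check_model_health(y_true, y_pred) -> bool:
    """Return True if the model still performs; otherwise warn that it may have drifted."""
    current = accuracy_score(y_true, y_pred)
    if current < BASELINE_ACCURACY - ALERT_MARGIN:
        print(f"ALERT: accuracy fell from {BASELINE_ACCURACY:.2f} to {current:.2f}; "
              "the business environment may have shifted, consider retraining.")
        return False
    return True

# Recent batch of ground-truth outcomes vs. what the model predicted for them.
check_model_health(y_true=[1, 0, 1, 1, 0, 1, 0, 0],
                   y_pred=[1, 1, 0, 1, 0, 0, 0, 1])
```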

Accountability is another key component of responsible AI. Then another key component is compliance. Your model, as a good citizen of your decision AI workflow, and the process itself, must be compliant with regulatory requirements. Otherwise, it's a major problem. And transparency certainly helps with that. Yet another big piece is remediation, which is frequently overlooked. Here's the adage: if you just state a problem without a solution, then you're part of the problem.

Imagine a model that has just made a doomsday forecast; an unfavorable outcome has been predicted. The quality has dipped in your manufacturing process. This applicant's loan is denied. The pharmaceutical is going to lose significant market share because biosimilars are about to be released. Whatever that unfavorable outcome is, the model is just predicting it. So now the recipient of that information says, "Okay, so what do we do to fix it?" That remediation of an unfavorable outcome, so that you can make it favorable, is also a key component of responsible AI, implemented in EazyML through counterfactual inference.
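As a rough illustration of counterfactual-style remediation, the sketch below searches for the smallest change to one controllable feature that flips a denial into an approval. The toy model, features, and search strategy are assumptions for illustration; this is not EazyML's actual counterfactual-inference engine.

```python
# Toy counterfactual remediation: find the nearest value of a controllable feature
# that flips an unfavorable prediction. Model and data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.uniform([20, 0.0], [150, 1.0], size=(500, 2))   # income (k$), debt ratio
y = (X[:, 0] * (1 - X[:, 1]) > 60).astype(int)           # 1 = loan approved
model = LogisticRegression().fit(X, y)

def counterfactual(applicant, feature_idx, candidates):
    """Return the candidate value closest to the current one that flips the outcome to approval."""
    for value in sorted(candidates, key=lambda v: abs(v - applicant[feature_idx])):
        trial = applicant.copy()
        trial[feature_idx] = value
        if model.predict([trial])[0] == 1:
            return value
    return None

applicant = np.array([70.0, 0.6])                         # currently denied
fix = counterfactual(applicant, feature_idx=1, candidates=np.linspace(0.0, 1.0, 21))
print("suggested remediation: reduce debt ratio to about", fix)
```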

And then, last, of course, you have the component of fairness in responsible AI, which is about data bias and its alleviation. So those are some of the constituent parts of that halo.

Kashyap: No, I think this is really helpful. But one of the things I want to ask you as a follow-up is about your conversations with customers. I'm sure that, as a product company working specifically in machine learning, this is a conversation you have on a very regular basis. While AI/ML in itself is probably a little confusing for a lot of people, a conversation as detailed as the one you just described around responsible AI might be difficult. How do you have that conversation with a customer? How do you convince them that these elements of responsible AI are all crucial for the success of an AI/ML project, without complicating things too much, and get buy-in from them?

Deepak: EazyML typically comes in at a variety of stages of AI/ML. I'll pick one that we've seen. An enterprise has implemented an ML model. It's doing decision AI in a workflow. The subject matter experts are unhappy; they second-guess the predictions. They can't relate to the predictions, and so the promised return on investment, the business objectives, are not being delivered.

So their question is, can you help us put this whole AI project back on the rails? And what we do then is say, instead of talking about responsible AI holistically, which you’re right is a massive term, we pick out constituent parts to say, “okay, you’ve got a model and for some reason its predictions are being doubted. So what we will do is we will have this model launch an API call to an EazyML explainable AI module. What comes back are the reasons behind each prediction along with the explainability score.”

So now, parsing one of these: the reasons behind each prediction. Let's say Kashyap is the subject matter expert. He doubts a particular prediction and says, "No, I disagree. Can you tell me why you made that prediction?" The reasons come back, he takes a look and says, "I think it makes sense now that you've explained it to me." You've reconciled the difference. In other words, the model has started to earn your trust, and that is key to the success of any project, including AI projects. The explainability score that comes along with it is also very important.

Maybe in the future, I can talk about that. Why is that also critical? But the operative word here is trust through transparency. So the existing AI project, which is not delivering on its promise, now can.
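To make the flow Deepak describes concrete, here is a rough sketch of how a deployed model might call out to an explainability service and get back the reasons and an explainability score for one prediction. The endpoint URL, payload shape, and response fields are hypothetical placeholders, not EazyML's documented API.

```python
# Hypothetical sketch: send one scored record to an explainability endpoint and
# read back the reasons and the explainability score. URL and field names are
# illustrative placeholders only.
import requests

EXPLAIN_URL = "https://api.example.com/v1/explain"

def explain_prediction(record: dict, prediction: str) -> dict:
    """Ask the explainability service why this prediction was made for this record."""
    payload = {"record": record, "prediction": prediction}
    resp = requests.post(EXPLAIN_URL, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()   # e.g. {"reasons": [...], "explainability_score": 0.87}

result = explain_prediction(
    {"tenure_months": 4, "support_calls": 7, "plan": "basic"},
    prediction="churn",
)
for reason in result["reasons"]:
    print("-", reason)
print("explainability score:", result["explainability_score"])
```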

Kashyap: One of the things you mentioned at the start is that responsible AI is not just about enabling trust; it also enables ROI, which is very interesting to me. While I can see an indirect relationship between the two, can you elaborate a little more to help our audience understand how trust, or other factors of responsible AI, drives return on investment?

Deepak: So not every prediction is made with equal certainty. The model makes a prediction because it has been trained on training data. Some predictions it is positive about and thinks are right, and some maybe are not. All right.

If you do not provide a score accompanying each prediction that says how certain the model is, then the subject matter expert consuming this information assumes every prediction is made with 100% certainty: the model has predicted it, so it thinks it's right. So here's the ROI piece. Just think about this. A user says, "Because I don't see any explainability score accompanying these, I'm going to assume every prediction is right." Of course, some of them he or she agrees with, and some of them he or she disagrees with.

So they're torn: "I don't know, should I trust this? Sometimes it's right and sometimes it's not. I don't know if I should trust this." Now fast forward to responsible AI's confidence scoring, the explainability score I talked about. You set the threshold based on the enterprise's, the business process's, tolerance for error. Anything that exceeds this threshold is presented to the user with a green tick. And of course, the user is skeptical initially. Over a six-month period, the proof is in the pudding. They'll evaluate it and say, "Yeah, you know what? Every time this model makes a prediction with a green tick, it's almost always right."

Anything below this threshold, the prediction is accompanied by a red question mark, which means park it to the side and let the user inspect it. So now here's the ROI. All the green ticks lead to partial automation, because the user says, "I don't need to inspect those," driving efficiencies, reducing costs, and delivering the business objectives and the promised ROI in the business case.
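A minimal sketch of that triage logic, assuming an explainability score comes back with every prediction: anything above an enterprise-chosen threshold gets the green tick and flows straight through, and anything below it is parked for human review. The threshold value and record fields here are illustrative.

```python
# Sketch of green-tick / red-question-mark triage. Threshold and record fields
# are illustrative assumptions.
THRESHOLD = 0.85   # set from the business process's tolerance for error

predictions = [
    {"id": "A-101", "label": "approve", "score": 0.97},
    {"id": "A-102", "label": "deny",    "score": 0.62},
    {"id": "A-103", "label": "approve", "score": 0.88},
]

auto_accepted, needs_review = [], []
for p in predictions:
    # High-confidence predictions are automated; the rest go to a human reviewer.
    (auto_accepted if p["score"] >= THRESHOLD else needs_review).append(p)

print("green tick (automated):    ", [p["id"] for p in auto_accepted])
print("red question mark (review):", [p["id"] for p in needs_review])
```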

Kashyap: The AI we saw 10 years back is not the same as today's, right? While the term has been around for more than 30 years, we now have generative AI as one aspect. As we enter 2025, it'll be agentic AI, and we will have more and more versions of AI as we go. I've been studying this space for the last seven or eight years, and I've seen a new term arrive every year. While the fundamentals remain the same, does that hold for responsible AI too? How does responsibility need to evolve with the changing landscape of AI itself?

Deepak: The basics of how humans and machines coexist remain the same. Responsible AI is the glue that binds humans and machines in an environment because, for the foreseeable future, we will have human colleagues and digital colleagues coexisting in the enterprise workplace. So let's look at what's hot and popular these days: generative AI. Let's see how certain components of responsible AI that we talked about in the context of classical AI map to generative AI as well. Let's start with transparency, and then we'll move on to accountability.

So you have asked a question, a query, of the LLM, and the agent completes the task, or the LLM comes back with a response, depending on what your use case is. How do you know? Can the LLM be transparent and say, "This is where I got the response from," so that I can trust it? In other words, can it point to the source documents, maybe with a yellow highlighter on the relevant sentences, and say these are what led to the response, so that as a consumer I feel comfortable that I'm not being misled? Be very transparent about how you got the response, the prediction, the source.

Now look at accountability. The model, the LLM, must not mislead. Can the LLM, very similarly, have a confidence score accompanying each of its responses? EazyML has augmented the LLM response with a confidence score, again between zero and one. And you define your threshold, as we talked about, to say: above this, I can trust it and I'll let it flow downstream for consumption without me interfering. Any response with a confidence score below this, park it to the side for human evaluation before we let it go downstream, because we don't want to mislead people. So accountability: the model is accountable. It's producing a score so as not to mislead you, telling you how confident it is in the answer it has provided.

You can see some of those same components mapping onto generative AI, because the underlying concepts of trust, ROI, and delivery of business objectives remain, whether it's classical AI, generative AI, or whatever AI comes next. Those never go away.
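For the generative AI case, a similar gate can sit between the LLM and whatever consumes its answers. The sketch below assumes each response already arrives with cited sources and a confidence score between zero and one; how that score is actually computed is not shown, and the data shapes are illustrative.

```python
# Sketch of gating LLM answers on a confidence score before they flow downstream.
# The answer text, sources, and score shown here are placeholder values.
from dataclasses import dataclass

@dataclass
class LlmAnswer:
    text: str
    sources: list[str]   # documents the answer was grounded in (transparency)
    confidence: float    # 0..1, attached by a scoring layer (accountability)

THRESHOLD = 0.8          # your tolerance for letting answers through unreviewed

def route(answer: LlmAnswer) -> str:
    """Decide whether the answer is consumed automatically or held for a human."""
    if answer.confidence >= THRESHOLD:
        return "send downstream automatically"
    return "hold for human evaluation"

answer = LlmAnswer(
    text="The warranty covers parts for 24 months.",
    sources=["warranty_policy_2024.pdf, p. 3"],
    confidence=0.91,
)
print(route(answer), "| cited:", answer.sources)
```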

Kashyap: One of my final questions to you is, considering the complex landscape of responsible AI and some of the things you just mentioned, what must a customer keep in mind when selecting a vendor for responsible AI solutions? Additionally, are large enterprises capable of building responsible AI frameworks by themselves, or is it necessary for them to consult beyond their own capabilities? What are some other factors they need to consider when deciding whether to build or buy services for responsible AI?

Deepak: A couple of things. Number one, show me. AI is becoming this big morass that's very difficult to grapple with, full of promises and moving at the speed of light. Show me that it delivers in my business environment. So put a vendor through the test, through the wringer. Just PowerPoints will not do.

Number two, when evaluating a vendor, tell them: this is my business problem. Tell me how your specific solution—it could be a platform like EazyML, or it could be executables and utility functions, and EazyML is available as both—can solve the specific problem I am encountering. These hard questions need to be asked.

And if you are satisfied—yes, okay, this is how they're going to do it and it makes sense—then say, show me that you can do it. That's important before you sign on with any vendor. There are a lot of moving parts to this. Once you have selected the vendor, it is vitally important that the vendor be just as committed to delivering your business objective. It's not "I provide the product and now you provide the resources to make it happen." No, the vendor should be collaborating with your technical team, delivering to the business objective, so that your technical team gets trained on the product and in the future can deliver on its own. That is absolutely key. This collaborative approach is a must for the enterprise to realize the value of signing on to any solution.

You mentioned build versus buy. There are some things that large vendors like GCP, Azure, and AWS can provide, very sophisticated things, but there are niches. Explainability, for example—the explainability score I mentioned. That's an EazyML patent. Why is that required? We saw why with the thresholding. And unless you are developing it on your own, it would really mean you're developing a skill set equivalent to research in this particular area.

So for those kinds of niches, you have a primary vendor like a Microsoft or a Google, and then you bring in niche vendors to augment what the major vendor provides in specific areas. You then collaborate with those niche vendors and the major vendor to deliver the objective, using the niche solutions you have bought for the specific things your solution needs.

Kashyap: Deepak, thank you so much for making the time. I had a wonderful time understanding your perspective around Responsible AI. I hope you had fun as well.
