Scott Zoldi Says Blockchain Isn’t Dead and Its AI Applications Are Just Beginning

There's more to the world than agentic AI.

Is blockchain dead? Many seem to think so. Once hailed as the backbone of Web3 and the future of decentralized finance, blockchain has seen its hype fade in recent years. Web3 itself has been largely written off as a branding exercise, and Google search trends show a steep decline in interest. But does that mean blockchain has no future in finance? Far from it.

AI-driven finance is powerful but inherently flawed. Algorithms can introduce bias, large language models can hallucinate false information, and black-box decision-making leaves regulators struggling to ensure fairness. This is where blockchain could still play a pivotal role—not as a speculative asset but as a mechanism for auditability, ensuring AI-driven financial decisions are transparent, traceable, and tamper-proof.

But can blockchain truly function as an independent financial auditor, or does it add unnecessary complexity? Should it replace traditional regulatory bodies, or merely complement them? More importantly, can it balance transparency with privacy in a world where financial data is both highly valuable and highly sensitive? 

To give us more insights on this, for this week’s CDO Insights, we have Scott Zoldi, Chief Analytics Officer at FICO®. A leading voice in Responsible AI, Scott drives AI and analytics innovation across FICO’s solutions and holds over 130 active patents and pending applications. He pioneered AI model governance frameworks, including a patented use of blockchain, and is one of American Banker’s 2024 Innovators of the Year. Scott also serves on advisory boards for FinRegLab, Software San Diego, and the San Diego Cyber Center of Excellence. He sits down with Kashyap Raibagi, Associate Director – Growth at AIM Research, to discuss how blockchain’s real value may finally be coming into focus, not as a trend but as a necessity.


Key Insights:

  • Blockchain is a powerful tool for enforcing AI model development standards, ensuring compliance, transparency, and ethical considerations.
  • Many organizations struggle to validate AI models due to a lack of transparency and explainability in machine learning algorithms. Blockchain can enforce predefined governance rules.
  • By embedding AI governance rules into a blockchain, companies can track every step of model development, require multiple approvals, and create an immutable audit trail.
  • Blockchain prevents retroactive changes to AI governance criteria, ensuring that models cannot bypass ethical or compliance requirements after development begins.
  • Unlike public blockchains, private blockchains provide controlled access, allowing companies to enforce internal AI policies while maintaining security and regulatory oversight.

Kashyap: Given the recent headlines asking, ‘Is blockchain dead?’ I’m surprised but excited to dive into this topic again! Before we explore its applications in finance, I’d love to hear: what sparked your passion for blockchain, and why do you remain so enthusiastic about its potential despite the skepticism?

Scott: I appreciate you having me here. Blockchain is not dead; I think, actually, blockchain and the applications of blockchain and AI are just starting. And I’ll tell you that it’s not an obvious sort of marriage of the two technologies. But I got convinced because I was faced with a sort of dilemma. It came from a previous CTO at FICO, many years back. He said, ‘Scott, how can you find a useful application for AI and blockchain?’ And I said, ‘Wow, one of the perfect use cases is around compliance and governance and developing models to standards.’ So, for me, that turned into the invention of an AI blockchain for both model development and model monitoring and governance of these models. We use it at FICO. It’s used for every single model that we develop so that we can ensure that we follow a model development standard. So, I’m very bullish on it, and I actually think there’ll be more applications of blockchain in our AI space to come.

Kashyap: That’s fantastic. In the drive to embrace AI solutions, particularly in finance, are firms sacrificing model governance and validation? How can blockchain ensure accountability without restricting innovation?

Scott: I think one of the challenges is this: many teams that have a formal model governance function struggle with how to properly govern AI and machine learning technology. For example, many of the machine learning algorithms that teams want to use are not transparent. They’re not explainable to a certain level, and maybe not to a regulatory standard. When these models go through a model governance process, very often it doesn’t go well, and it’s hard to figure out why; sometimes it’s because teams are starting off with the wrong algorithm to build these AI solutions. I do think that the drive to embrace AI solutions has put many model governance teams on the back foot, trying to figure out how to validate these models and ensure they understand how they work.

But the problem is they’re making the wrong decisions at time zero. Blockchain can help with that. The way it helps is it basically says, ‘This is a tough problem, and we don’t want to have 100 data scientists using AI in 100 different ways and then throw it all on a model governance team who may not have the aptitude to challenge that.’ Let’s define what it means to develop AI and machine learning, which algorithms are going to be transparent, explainable, ethical, and auditable, and then enforce that as a standard. That’s where blockchain comes in. If we develop a standard for how all models will be developed, then, let’s say in the area of credit risk, you have to use an interpretable neural network, a specialized type of machine learning model that is transparent. That becomes the standard. Once you make that decision, and decide how to examine the latent features that drive the model and how to test those latent features for systematic bias, the company gets specialized. They get super expert in specific algorithms, and that’s what the blockchain enforces. It basically says, ‘You must use these algorithms. You must do these tests. You must follow these standards.’ It’s a corporate standard, and the model doesn’t get released until all of that is met. That’s what blockchain can allow us to do: demonstrate that a standard was followed.
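
To make the idea of an enforceable standard concrete, here is a minimal Python sketch of the kind of machine-checkable rule set Scott describes. The schema, algorithm names, and test names are illustrative assumptions, not FICO’s actual implementation.

```python
# A minimal sketch, assuming a hypothetical schema: encode the corporate
# model development standard as machine-checkable rules. STANDARD,
# ModelRecord, and the algorithm/test labels are illustrative, not FICO's.
from dataclasses import dataclass, field

# Per-domain standard: which algorithms are acceptable and which tests are mandatory.
STANDARD = {
    "credit_risk": {
        "allowed_algorithms": {"interpretable_neural_network"},
        "required_tests": {"latent_feature_review", "bias_test", "performance_test"},
    },
}

@dataclass
class ModelRecord:
    domain: str
    algorithm: str
    tests_passed: set = field(default_factory=set)

def meets_standard(model: ModelRecord) -> bool:
    """Releasable only if the algorithm is approved for the domain
    and every mandatory test has been passed."""
    rules = STANDARD[model.domain]
    return (model.algorithm in rules["allowed_algorithms"]
            and rules["required_tests"] <= model.tests_passed)

model = ModelRecord("credit_risk", "interpretable_neural_network",
                    {"latent_feature_review", "bias_test", "performance_test"})
assert meets_standard(model)  # the gate a blockchain-enforced standard would apply
```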

Kashyap: Algorithms are known to have biases, especially as models become more complex and black-box in nature. As you mentioned, explainability remains a challenge. You also highlighted blockchain’s role as an auditor in some applications. Can you give an example, perhaps from the financial industry, to illustrate how blockchain can help improve model transparency and make understanding AI models easier rather than adding to their complexity?

Scott: Blockchain in itself, I view it as the traffic cop. It’s basically going to say, ‘We have a best practice,’ and every organization has a best practice. For example, if you wanted to look at credit risk in the area of financial services, the EU AI Act says that’s high-risk AI, and it can be, because the lack of transparency of many machine learning models would cause an inability to understand what drives the score and how that score may differ for different types of people. So, it’s a serious challenge. From a corporate perspective, we can say, ‘You cannot use a dense neural network. You cannot use a deep learning model. You cannot use a stochastic gradient boosted tree. But you can use, let’s say, an interpretable neural network.’ These interpretable neural networks are designed so that a human being can inspect all the latent features—essentially, what activates that score to be high or low. Blockchain then insists that if it’s a credit risk model, it must use these interpretable neural networks. The auditing comes down to the fact that when you want to develop one of these models, you have to satisfy all the requirements associated with the blockchain.

One of those requirements would be that it has to be an interpretable model. Second, you have to run these ethics tests on these subsets of people. Third, you need to have a doer—the person doing the work to demonstrate that—a tester who verifies it, and an approver. Now, there are three human beings that literally have to sign the blockchain and say the standard was followed, in addition to providing the proof of work. That auditing function occurs in the model development process, and that’s very different from the way many model governance teams work today, where they’re asked to comment on the model after it’s been built. That’s a little too late, because if you made the wrong decision on day two of a three-month project, then you have to unwind that entire project, or maybe just throw away that model and start over again. Whereas here, we say, ‘We, as a corporation, are aligned that this is the technology that’s acceptable to use. This is how we will ensure the proof of work that we need to see from an audit perspective.’ Blockchain will ensure that those technologies have been used and that we have three different signoffs, including someone like myself or a very senior analytic person in the company as the approver, to show that both the doer and the tester have done it properly. Moreover, it’s on the blockchain for all time, and that is the auditing function. Later, we can go look at why this decision was made. It was made by Julie, Tony, and Scott, and these were the rationale and proof of work. It shows that level of transparency and an ability to audit that model, but it’s an enforceability tool, frankly.
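
A minimal sketch of the three-role signoff pattern Scott outlines, assuming a hypothetical Signoff record; a real system would sign entries cryptographically rather than store plain names.

```python
# A minimal sketch of the doer/tester/approver signoff pattern described
# above. The Signoff record and role names are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

REQUIRED_ROLES = {"doer", "tester", "approver"}

@dataclass(frozen=True)
class Signoff:
    requirement: str
    role: str
    signer: str
    proof_of_work: str   # e.g., a link to test results or review notes
    signed_at: str

ledger: list[Signoff] = []   # append-only: entries are never edited or removed

def sign(requirement: str, role: str, signer: str, proof: str) -> None:
    """Append a permanent signoff entry to the ledger."""
    ledger.append(Signoff(requirement, role, signer, proof,
                          datetime.now(timezone.utc).isoformat()))

def requirement_satisfied(requirement: str) -> bool:
    """A requirement counts as met only when all three roles have signed."""
    roles_signed = {s.role for s in ledger if s.requirement == requirement}
    return REQUIRED_ROLES <= roles_signed

sign("ethics_test", "doer", "Julie", "bias report v3")
sign("ethics_test", "tester", "Tony", "re-ran tests; results reproduced")
sign("ethics_test", "approver", "Scott", "reviewed proof of work")
assert requirement_satisfied("ethics_test")
```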

Kashyap: Following your answer, one question I have is: What are some inherent characteristics of the blockchain model such as decentralization, transaction tracking, and identity masking that make it more effective as an auditor? Can you help us understand why these features enable blockchain to perform this role better than a centralized system?

Scott: So, we leverage blockchain primarily for immutability. I’ll give you an example. At the beginning of any project, we have success criteria that very often have to be defined by the organization. Maybe I’ll say, ‘One success criterion is that I have to detect a certain level of delinquent accounts at a certain outsort rate. If it’s higher than that, I pass the success criteria. If it’s lower than that, I have not.’ And I need to show that any bias between different groups of individuals, when I do my ethics testing, can be no more than some fraction of a metric, and this is the metric that has to be used. All of this is on the blockchain at time zero. We’ve agreed on the requirements before anyone has started the work. That’s very different from what sometimes happens today. Once the work starts, say I’ve met the performance requirements but I can’t meet the ethics requirements: there’s no going back and changing the requirement. There’s no forgetting about the requirement; it’s on the blockchain. It can’t be changed. The only thing we can do is say it’s impossible, and we have to cancel the project and restart it. This immutability is crucial. The other thing that’s really important, and it’s kind of sad to say, is that people take their job very seriously when they have to sign something that is permanent. The same way that if I presented you with a piece of paper and asked you to sign a legal document, you’d read it. You’d take it seriously. You’d want to know what you’re signing. The same thing occurs with the blockchain. We see a few different behaviors.

One, we see integrity in terms of what the requirements were, and that you cannot change them. Those are established between, let’s say, a CIO and a product person from day zero. Two, people take their job very seriously when they sign off on these tasks on the blockchain, because it’s permanent. People will make mistakes from time to time, and that’s why we have a tester and an approver. But the seriousness of meeting those requirements is so much higher than if people just said, ‘Yeah, I think it’s an ethical model.’ No, you’re going to show that it’s an ethical model, and these are the technologies you must use, and these are the criteria you must meet, and these are the thresholds.

If you look at a lot of our regulated environments today, including what’s happening in the EU, there are notions of what it means to be responsible with AI, but no one codifies it for you. The blockchain will. It codifies it at the very beginning, and then it’s about having that immutability to show whether or not you got there. I think that integrity is super important, because you can’t come back later, after investing three months building a model, and say it might be a little less ethical. You can’t have a business person lean in and say, ‘It’s good enough.’ No, it’s not good enough, because the blockchain sets the requirements. In fact, in our processes, we cannot release the model. Models are not allowed to go to production unless all the requirements of the blockchain are met. It really sets a high bar and a high standard, and that’s why we use it. These models can impact people’s lives, so ‘good enough’ isn’t acceptable. This is something really serious, and we have to have that proof of work. That’s why immutability and the blockchain are important.

I will say this: the blockchains we use are private blockchains, not public blockchains. Access to those blockchains is granted to specific people. It’s a little different from what people might think of in terms of a public blockchain; private blockchains are very different in the mechanics of how they’re used. You could open one up to someone else, like a lawyer, to go look at it, make sure they feel comfortable with it, and see that the things they care about have been done, right there on the blockchain.
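
The immutability Scott emphasizes is the classic hash-chain property. Here is a minimal sketch, with hypothetical field names, of how a retroactive edit to a time-zero requirement becomes detectable.

```python
# A minimal hash-chain sketch of the immutability property: each entry
# commits to the hash of the previous entry, so retroactively relaxing a
# requirement recorded at time zero breaks every later link. Illustrative
# only; a real private blockchain adds signatures and distributed storage.
import hashlib
import json

def entry_hash(payload: dict, prev_hash: str) -> str:
    body = json.dumps({"payload": payload, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()

def append(chain: list, payload: dict) -> None:
    prev = chain[-1]["hash"] if chain else "genesis"
    chain.append({"payload": payload, "prev_hash": prev,
                  "hash": entry_hash(payload, prev)})

def verify(chain: list) -> bool:
    """Recompute every link; any retroactive edit is detected."""
    prev = "genesis"
    for e in chain:
        if e["prev_hash"] != prev or e["hash"] != entry_hash(e["payload"], prev):
            return False
        prev = e["hash"]
    return True

chain: list = []
append(chain, {"requirement": "max_bias_delta", "threshold": 0.02})  # time zero
append(chain, {"signoff": "doer", "signer": "Julie"})
assert verify(chain)

chain[0]["payload"]["threshold"] = 0.10   # try to relax the requirement later
assert not verify(chain)                  # the tampering is immediately visible
```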

Kashyap: Do you see blockchain becoming the de facto mechanism for auditing in the future? What steps need to be taken for large-scale adoption beyond its current applications? How can the industry be encouraged to embrace blockchain with the same enthusiasm that you have?

Scott: So, I think it may become the de facto standard. I mean, I think there are just too many benefits to an AI development organization. You still see statistics showing that the majority of models that are built never find their way into production, and there are lots of reasons why, but some of it is the lack of a standard being followed. One of the things I try to get across is that you can enforce a corporate standard, and that’s important. You know that the models were built properly, and when you have hundreds of data scientists, you need a way to make sure they’re all building models the same way. This is a way to align that entire workforce.

Moreover, it allows us to specialize on the algorithms, as I said earlier, and become super expert on which algorithms we can use for credit risk and which algorithms we use for bias, and to have those really hard conversations, potentially with the governance teams, with your board, with regulators, and say, ‘This is how we’re going to address interpretability. This is how we’re going to address ethics.’ If we improve those algorithms, the improvements get spanned over hundreds of data scientists because they’re part of the standard. So, you get to be a very high-impact organization that way. The third benefit, which I’ve recently talked more about, is that the quality of the analytics is so much better. Even for organizations that care less about responsible AI and doing things properly, just from an efficiency perspective, the model will work because you followed standards. You don’t have people injecting code or experimenting on things that impact human life. Research is great—I do tons of it—but we don’t do it directly in a model that is going to impact your life or someone else’s life. You’re not my guinea pig.

Therefore, we need to control those things too. We’ve seen the quality of models go way up because they have to be built to a standard set of features. All of a sudden, I’ve seen quality issues improve 90-95%, in fact, sometimes never having to pull a model back. I think big, serious software companies like FICO that are participating in this golden age of AI need to have the right production tools to help us do this, the same way that software development has tons of tools: code repositories, regression testing, and all the good stuff you do from a software engineering perspective. AI doesn’t have the same grown-up tools and methodologies, and I think AI blockchain will be one of those things that provides that for the AI community. In fact, given how complicated it is, and given how we are in this golden age of AI where AI is showing up in so many different types of decisions that impact human life, you’re going to want to have a standard, and you’re going to want to be able to show that it was enforced.

Kashyap: I see your vision. Beyond the technological factors, what challenges do you foresee in driving industry-wide adoption of blockchain? Are there political or structural hurdles that could slow its acceptance? Some of the benefits you mentioned might not align with everyone’s interests. How do you navigate these challenges, and what steps do you see as crucial for overcoming them?

Scott: So, I think one of the key challenges is the definition of a standard. You need to bring leaders together. Analytic leaders have strong opinions. AI leaders have strong opinions. We’re constantly confronted with new technology. We constantly want to try new technology, and we need to have hard discussions. Research has to be in its own sandbox. But when it comes to production model development, what are those algorithms that we’re going to align on? Getting the top 10 analytic leaders in a company to align on a standard could take some time. It’ll take some arm wrestling and some emotion and some anger or whatever else, but eventually, everyone sees the benefit. I think that’s one thing. So many organizations—more than 44% of the financial services organizations we surveyed at least a year ago—didn’t have any standards, which is scary. The work to get to that standard is important, and it’s going to take your top execs in AI, machine learning, and analytics hashing it out a little, but it’s for the benefit of the company.

Another challenge is scientist adoption. Depending on the scientists, the organization, and how they ran their day-to-day work before something like a blockchain, they could say, ‘Listen, you’re impacting my freedom to choose my algorithm, and we’re not innovating quickly enough, because I just saw this thing pop up on GitHub and I want to try it.’ They’re not doing the risk analysis of using it and may not understand the algorithm. There are really good reasons for the standard. That’s more of a cultural shift, which is like, ‘Hey, there’s plenty of time to experiment, and there’s plenty of ability to change a standard, but let’s agree on what works today, what we all have comfort in.’ Then you make the use case for which technology you want to bring in.

I’ve generally seen that that’s not the biggest concern. Most scientists, frankly, get depressed when they build things that get rejected by model governance, or build things that never get deployed, or build things that, when deployed, cause a big crisis around that model. I think at the end of the day, you could use 10 different algorithms; sometimes it’s not about the algorithm, it’s about how you’ve set up the problem and how you built the models, and the specific algorithm is just a nuance. I think that’s easily addressed, but depending on the culture and the scientists, if someone wants to just go off and do it their way, off standard, that person’s going to have a bit of a challenge in that environment. But again, most organizations recognize that we’re here to serve our customers, not to serve individual research plans.

Kashyap: And from a technology perspective, how do you see things shaping up, particularly in the financial industry? Existing security protocols rely on legacy infrastructure and systems, meaning a shift to blockchain would require significant overhauls. Do you have a practical framework or a plan in mind that could help the industry transition smoothly to blockchain?

Scott: So, I think one of the things that we do is we have contracts. We have contracts between myself and my CTO. This is the way I present a model—it’s called a model package. This is how it operates, and he and his organization understand this is how it gets deployed, and this is the input that produces the output. The first thing is, as you mentioned, to identify what my contract with my CTO is around how we deploy these models in our software. You have to define that contract. It can’t be willy-nilly; there has to be a firm contract. The same applies to cybersecurity, legal, and other areas. Those all become requirements for the blockchain. So, part of the work effort is to say, ‘Listen, most organizations that are building models at scale already have that.’ It’s about making sure that it’s part of the blockchain and the standard, not about coming up with something entirely different. The intention is for the standard to reflect existing best practices, and that includes things like cybersecurity concerns, legal review, business reviews, and software integration. Those would all be requirements that say, ‘You need to deploy this model in this pre-approved model package format, and these are the assets our team needs when we get it.’ So, when the model gets released, here’s the model, here’s the blockchain, here’s the testing data, here’s the monitoring parameters for this model, and they’re all delivered because, per the contract, that’s what my CTO needs as we deploy it in our formal software. So, I see it all as just requirements in terms of those builds.
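
As a rough illustration of this ‘contract’ between analytics and deployment, here is a small sketch with hypothetical asset and requirement names; the actual model package format would be defined by the organization.

```python
# A hedged sketch of the deployment "contract": a model ships only when
# every asset the deployment team needs is present and every blockchain
# requirement is marked met. Asset and requirement names are hypothetical.
REQUIRED_ASSETS = {"model", "blockchain_record", "testing_data",
                   "monitoring_parameters"}

def releasable(package: dict, requirements_met: dict[str, bool]) -> bool:
    """Enforce the contract: no missing assets, no unmet requirements."""
    missing = REQUIRED_ASSETS - package.keys()
    unmet = [name for name, ok in requirements_met.items() if not ok]
    return not missing and not unmet

package = {"model": "model.bin", "blockchain_record": "chain_ref_123",
           "testing_data": "holdout.csv", "monitoring_parameters": "drift.yaml"}
requirements = {"interpretable_model": True, "ethics_tests": True,
                "cybersecurity_review": True, "legal_review": True}
assert releasable(package, requirements)   # only then does the model go out
```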

Kashyap: Fantastic. Any final thoughts on the first steps needed to move in that direction? From a FICO perspective, how are you approaching this transition by reorganizing processes, people, and technologies? How do you envision this space evolving over the next three to four years?

Scott: For FICO, we’re pretty far down this path. I think the invention is seven years old, and we’ve been productizing it internally. We have a user interface. My entire organization uses it. There is no model developed that’s not on the blockchain, so we’re pretty far down that path. In terms of the three-to-five-year time period, we’re also coming up with standards for how we develop generative AI, or generative use cases, responsibly. On the generative AI front, there’s a huge governance need, and a lot of it comes down to governing all the data used to build a generative AI model. I need to understand how I’m going to deal with hallucinations from a risk-based scoring perspective. All those things need to be set up; it can’t just be a proof of concept.

For traditional AI, which is still probably 90% of all the AI you need (one of my predictions in a blog is that not all AI is generative AI), the use cases are all really well established. It’s about organizations reaching out to me wanting to take a look at our model development standard and implement a similar standard. On generative AI, it’s about identifying the pieces we need: how we develop this generative AI model, how we’re going to use it, and how we’re going to minimize risk around it. Those are some other applications that get put on the blockchain. But we go one step further. We want to govern everything, including how people use our software. We provide something called the FICO Platform that runs these models, but customers also configure their own rules, strategies, and more. All of that needs to be on the blockchain too, because if Cindy or Bob came in and changed rules that impacted customers, or overwrote a model, you need an audit trail of how that decision got made or whether a mistake was made. So, I see this becoming essentially table-stakes technology that most AI platforms will need, whether it’s developed internally in your own organization or whether you’re using a different platform. That’s where I think it’s headed from a blockchain perspective.
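
To illustrate the rule-change audit trail Scott describes, here is a minimal sketch with hypothetical rule names; in practice each entry would be written to the same private blockchain as the model records.

```python
# A small sketch of a rule-change audit trail: every change to a decision
# rule is logged with author, old value, new value, and timestamp. The
# entries are illustrative; a real system would append them to the chain.
from datetime import datetime, timezone

audit_log: list[dict] = []

def change_rule(rules: dict, name: str, new_value, author: str) -> None:
    """Apply a rule change and record who changed what, and when."""
    audit_log.append({
        "rule": name, "old": rules.get(name), "new": new_value,
        "author": author, "at": datetime.now(timezone.utc).isoformat(),
    })
    rules[name] = new_value

rules = {"max_credit_line": 5000}
change_rule(rules, "max_credit_line", 8000, "Cindy")
assert audit_log[0]["old"] == 5000 and audit_log[0]["author"] == "Cindy"
```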

Kashyap: On that note, thank you so much, Scott, for making the time. It was great talking to you. I love how passionate you are about blockchain. I see your eyes light up when you talk about it, and I really appreciate when someone is so enthusiastic about the technology. I truly believe you are sold on the vision, and I am very keen on observing how other financial institutions, and even other industries, adopt it. And thank you for giving me a break from talking about agentic AI for a change!

Scott: My pleasure. There’s more to the world than agentic AI. There’s plenty more.

Anshika Mathews
Anshika is the Senior Content Strategist for AIM Research. She holds a keen interest in technology and related policy-making and its impact on society. She can be reached at anshika.mathews@aimresearch.co