Importance of AI Governance in Healthcare with Mitch Kwiatkowski

It's really important that we're making sure that we're doing the right things for the right reasons.

AI governance in healthcare stands as a critical cornerstone at the intersection of technology and healthcare services. As artificial intelligence continues to revolutionize medical practice, a robust governance framework becomes paramount. Such a framework not only navigates ethical and regulatory challenges but also orchestrates the responsible, ethical, and secure use of AI-powered tools and algorithms across the healthcare landscape.

To give us more insights on this, for this week's CDO Insights we spoke with Mitch Kwiatkowski, Chief Data and Analytics Officer at Marshfield Clinic Health System. With a background spanning more than two decades in healthcare data and informatics, Mitch plays a pivotal role in spearheading data and AI initiatives at Marshfield Clinic Health System, a prominent integrated rural health system in the Midwest. Specializing in data management, AI, information governance, and ethics, he is instrumental in developing and implementing enterprise-wide strategies aligned with organizational goals. Collaborating with diverse teams, he drives the creation of innovative analytic products that enhance clinical, operational, and financial outcomes. His passion lies in fostering a culture of data excellence and promoting responsible AI use across the organization.

AIM: Before delving into AI governance in healthcare, can you help set the context by defining what AI governance means to you?

Mitch Kwiatkowski: At its core, it's really about how we develop and use AI responsibly and ethically. There are different ways you could slice and dice that, but at the end of the day, it's about making sure we understand, and I think about this in healthcare terms: what are we creating, what are we buying, what are we using, how are we using it, and who are we using it on or for? Those are really important things to think about, and we have to have those discussions openly and honestly in the organization. In some cases, we may look at something and say, "Well, we can do this." We are a highly regulated industry, so there may be things where we say, "There's no reason we can't do this, but should we? Is this something we're comfortable doing? If we have to explain to patients what we're doing and why we're doing it, how comfortable are we?" So again, it's really about making sure that how we're using it is responsible and ethical.

AIM: Could you share insights on the challenges surrounding AI governance in healthcare, specifically regarding ethicality and responsibility? What examples have you encountered in this field that highlight these governance challenges, emphasizing the intersection of ethical considerations and responsible AI implementation?

Mitch Kwiatkowski: We have had challenges for a long time, not just with AI but also in how we use and share data and what we do with it. There have been a number of examples over the last few years, looking broadly across industries, where people felt there was a privacy violation or that an organization shouldn't have done something. In healthcare, we're dealing with some of the most personal information about a person: their health, well-being, social risk factors, mental health, and so on. As we look at different populations and how AI or the underlying data gets used, we've seen cases where an AI model or system was biased against certain geographies, races, or genders. Looking back, that has largely been because of how the training was done, often without a dataset that represents all protected classes equally. We might use something like cost as a predictive feature, for example. If you're not scrutinizing the data that goes into training the model, you may unintentionally invite bias, because you're not accounting for the fact that some races in some geographies incur lower healthcare costs, and that's not because they don't need the care. So we've seen commercial models that were biased by race because they used cost as a predictive feature. We've seen risk models and predictive models for conditions, pain levels, and the like that didn't take certain races into account. Race tends to be one of the bigger factors we see, but gender too: I believe it was in the UK not too long ago that a model built to predict heart attacks was something like 50% more likely to misdiagnose a heart attack in women, because it didn't account for the fact that women present different heart attack symptoms than men. So these cases are definitely out there, and those are just the ones we know about. Going back to governance, a lot of it is about proactively preventing these as best we can, or, when they do happen, reacting and mitigating quickly.

AIM: Regarding data privacy, you highlighted the extensive collection of patient healthcare data by various entities. While this aids diagnosis and care, where do the pitfalls lie, and what frameworks can be implemented to prevent privacy issues from impacting individuals or communities?

Mitch Kwiatkowski: One of the challenges around privacy, as I mentioned before, is that healthcare is highly regulated. We've got HIPAA in place, and you can collect and use patient data for treatment, payment, and operational purposes. When organizations create their notice of privacy practices, it tells the patient, "we are collecting your information, and we can use it for these things." It's very broad and general; it doesn't get into the details of the individual use cases that are out there. What we see, though, is healthcare organizations going out, getting a product or service from a vendor, and sharing the data they have on patients. Sometimes vendors ask for more than they really need. In some cases, the information they collect stays isolated to that one customer. In other cases, it gets added to the large dataset the vendor holds for all customers, which can then further train their models. Healthcare organizations are trying to tackle this question: are we comfortable with our patients' data being used to further train these commercial products and solutions? And again, it goes back to how much patients know about what's happening. We're not necessarily required to collect consent for individual use cases. With the notice of privacy practices, we say, "you're signing this, you approve it, and we're doing this for treatment, payment, and operations." But there are times, and this is where governance can play a role, when a use case or solution is something we feel should require informed consent, where we go out to patients and say, "here's what we want to do; do you consent to your data being used?" And once that data leaves your organization and goes into another environment, what happens to it? Do we really have good eyes on it? Is that third party, or fourth party, making sure they're protecting the privacy of that data as well?

AIM: Is transparency crucial in addressing data privacy concerns amid data collection and usage, especially considering its significant benefits for both private companies and overall patient care?

Mitch Kwiatkowski: We have to strike a balance, too, because we certainly want to be transparent, but we can't seek consent for everything, and we don't necessarily want to, because we have to be careful. Being transparent and explainable means making sure people understand what it is we're doing; we don't want people to get nervous. It's very easy for people to hear "AI" and think, "I don't know where my data is going or what they're doing with it," and shut down and say, "No, I don't want my data used for that," when in reality there may be real benefits. And again, this is where governance plays a role: how do we weigh the pros and cons, the benefits and the risks, and make sure we're doing the right thing? So it is still a balance, but transparency and explainability can help.

AIM: Could you discuss biases in AI, particularly regarding race, gender, or communities, and how data collection methods may contribute to inaccuracies in models? Have you noticed such issues in healthcare, and if so, how have you worked to rectify these biases?

Mitch Kwiatkowski: What I often tell people is that AI itself is not biased; AI is not the problem. People create it, and people use it. So I think there are two pieces, maybe even three. First, there's the data that goes in. Somebody is choosing that data, and if you're not choosing the right data, you don't have parity across different groups and populations. If you're not careful about what you use, and don't think beyond what's immediately in front of you, that can get dangerous. Second, there's how the model is created: which things we decide are predictive features, and the decisions that come out of that. And third, there's how it's used. We have a responsibility when we go and use something. No model is 100% accurate, so in some cases true positives matter more to us, and in other cases true negatives do. It may be more important for me not to miss a cancer diagnosis, which means I want to minimize false negatives. So we have to find good ways to test for bias and for fairness in the results coming out of the models we build. As AI practitioners, data scientists, and analysts, we have that responsibility, whether we use dedicated tools or do something very basic like building confusion matrices: if we compare men versus women, are we getting the same accuracy from our model? If we compare different races, the same question applies, and so on. That's a very basic way of looking at it. The big tech companies are pushing out solutions, Python modules, R packages, and the like, that can help with some of this. But it comes back to diligence: actually checking, actually testing, and not making assumptions. You'd be amazed how many vendors I've talked to who say, "Our solution can't be biased. It can't do anything unfairly because we don't use demographics," and we have to take time to educate them as to why that's not true. That's happened with some very large organizations, and it gets into some interesting debates, largely because the governance process we've got in my organization challenges that. The bigger the vendor, the harder those conversations are, because they'll try to hide behind intellectual property and say they can't share their bias and fairness tests with us, if they even do them. But we have to force that conversation and be very open and honest in that dialogue.
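To make the "very basic" check Mitch describes concrete, here is a minimal sketch that compares confusion-matrix results for a binary classifier across a demographic attribute, assuming scikit-learn is available. The column names and the tiny dataset are hypothetical, purely for illustration, and not from Marshfield Clinic.

```python
# Minimal sketch: comparing confusion matrices across demographic
# subgroups. Column names and data are illustrative assumptions.
import pandas as pd
from sklearn.metrics import confusion_matrix, accuracy_score

# Hypothetical evaluation frame: one row per patient, with a protected
# attribute, the true label, and the model's prediction.
df = pd.DataFrame({
    "sex":    ["F", "F", "M", "M", "F", "M", "F", "M"],
    "y_true": [1, 0, 1, 0, 1, 1, 0, 0],
    "y_pred": [0, 0, 1, 0, 1, 1, 1, 0],
})

for group, sub in df.groupby("sex"):
    # Unpack the 2x2 confusion matrix for this subgroup.
    tn, fp, fn, tp = confusion_matrix(
        sub["y_true"], sub["y_pred"], labels=[0, 1]
    ).ravel()
    # The false-negative rate matters most when missing a diagnosis
    # is the costly error, as in the cancer example above.
    fnr = fn / (fn + tp) if (fn + tp) else float("nan")
    acc = accuracy_score(sub["y_true"], sub["y_pred"])
    print(f"{group}: accuracy={acc:.2f}, false-negative rate={fnr:.2f}")
```

On this toy data the two groups get visibly different accuracy and false-negative rates, which is exactly the kind of gap such a check is meant to surface before a model is trusted in production.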

AIM: Do you believe that AI in healthcare, while having the potential to significantly impact and enhance healthcare, should have limitations in specific critical areas where the risk of AI providing inaccurate results is notably high? In your opinion, where are those areas where AI implementation should be carefully evaluated, ensuring human oversight, and where it should potentially be avoided?

Mitch Kwiatkowski: In my opinion, it's probably too early to say where things shouldn't be implemented. The potential for AI in healthcare is amazing, starting with clinical care. We're a rural healthcare system, so we've got disparities of care and access challenges, and there are a lot of opportunities not just to improve access and the care patients are getting, but to drive costs down as best we can. Some of those opportunities are operational: we've seen uses in our organization, and I've seen peers and colleagues at other healthcare systems, that have helped drive costs down, gain efficiencies, and do things better. So as an organization, it's important to have a risk management framework. What's your risk appetite when you're using AI? How do you want to classify these things as low, moderate, or high risk, and is there something you simply say you won't use? Maybe as a healthcare organization, we say we're not going to use facial recognition. Then you have to decide what goes with each classification: what other requirements do you have? But no matter what, at the end of the day, it's about accountability, and that has to be a key guiding principle for organizations, because someone still has to make a decision. A physician or provider has to decide, "Yes, that is the diagnosis," or "No, that's not the diagnosis," or act on "this patient has a high probability of being readmitted to the hospital or acquiring a chronic condition over the next six to twelve months." What do they do? That doesn't take away from the training they've got or from the human decision-making process. So it's a combination of things. At this point, we just have to be careful, and governance can't be a way to say no. How do we create a bit of a speed bump, just to make sure we're not going too far too fast on the assumption that AI is going to solve all our problems and make everything better? There are risks, and we have to be aware of what they are and be prepared to mitigate them.
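As an illustration of the tiering Mitch describes, here is a toy sketch of how an AI intake process might classify a use case and attach controls. The tier names, classification rule, and required controls are assumptions made up for this example, not Marshfield Clinic's actual framework; a real governance committee would weigh far more factors.

```python
# Toy sketch of a risk-tiering step in an AI governance intake process.
# Tiers, criteria, and controls are illustrative assumptions only.
from enum import Enum

class RiskTier(Enum):
    LOW = "low"                # e.g., back-office automation
    MODERATE = "moderate"      # e.g., operational forecasting
    HIGH = "high"              # e.g., clinical decision support
    PROHIBITED = "prohibited"  # e.g., facial recognition, per policy

# Each tier carries its own set of required controls.
CONTROLS = {
    RiskTier.LOW: ["inventory entry"],
    RiskTier.MODERATE: ["inventory entry", "bias testing"],
    RiskTier.HIGH: ["inventory entry", "bias testing",
                    "human-in-the-loop sign-off", "ongoing monitoring"],
    RiskTier.PROHIBITED: [],
}

def classify(use_case: str, patient_facing: bool, clinical: bool) -> RiskTier:
    """Toy classification rule reflecting a stated risk appetite."""
    if "facial recognition" in use_case.lower():
        return RiskTier.PROHIBITED
    if clinical:
        return RiskTier.HIGH
    return RiskTier.MODERATE if patient_facing else RiskTier.LOW

tier = classify("readmission risk model", patient_facing=True, clinical=True)
print(tier.value, "->", CONTROLS[tier])
```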

AIM: Considering your view on AI governance not limiting innovation but closely monitoring it, what’s your vision for responsible AI implementation? What guides your decision-making framework? How do you see this evolving in the next one, five, and ten years in terms of responsibility and transparency?

Mitch Kwiatkowski: I think recent and upcoming regulation around AI is certainly going to help. We've now got the executive order from the White House, which looks at how we upskill staff, how we bolster data privacy protections for patients and consumers generally, and how we use AI ethically while mitigating bias and fairness issues. I think that's certainly going to help, but that's not the whole solution. A lot of organizations don't yet have AI governance, and it's not a complicated thing to implement. If you take a step back and look at it, it's actually pretty simple; the hardest part is getting people to have the conversation and get comfortable with looking at everything, adding that review to the process, requiring it where it's necessary, having a framework in place, and making sure it's communicated internally. We're starting to see third-party platforms come out that help with AI governance, and there are some gaps those can fill. That's not to say AI governance isn't hard work; there's a lot that goes into documentation, inventory, and tracking decisions. But technology is not going to be the solution. It really comes back to the people and the process, just like data governance. We have to make sure people understand what the risks could be and are willing to have open and courageous conversations. There will come a point where a request comes through that the governance committee or council isn't comfortable with, and they may deny it. That will still create ripples and waves in the organization, because now someone is being told they can't do something, not because it's illegal, not because it's too expensive, but because there are risks the organization is not comfortable with. Those are tougher conversations. But I would encourage organizations to look for opportunities not to say no; that's how we look at it in my organization. How do we set things up so that we're saying, "Yes, we want to manage risk as best we can, and we're prepared"? And once something gets implemented, it's not done. You have to continually monitor and make sure that model drift doesn't create unintended consequences. There might even be interviews or in-person observations of how something is being used and the impact it's having. It's a very disciplined process.
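On the "continually monitor" point, one common post-deployment check is watching a deployed model's score distribution for drift. Below is a minimal sketch computing the population stability index (PSI), one widely used drift statistic; the synthetic scores and the rule-of-thumb thresholds in the final comment are illustrative assumptions, not something Mitch prescribes.

```python
# Minimal sketch: detecting score drift with the population stability
# index (PSI). Synthetic data; thresholds are common rules of thumb.
import numpy as np

def psi(baseline: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """Population stability index between two score distributions."""
    # Bin edges are fixed from the baseline so the comparison is stable.
    edges = np.histogram_bin_edges(baseline, bins=bins)
    b_frac = np.histogram(baseline, bins=edges)[0] / len(baseline)
    c_frac = np.histogram(current, bins=edges)[0] / len(current)
    # Floor the fractions to avoid division by zero and log(0).
    b_frac = np.clip(b_frac, 1e-6, None)
    c_frac = np.clip(c_frac, 1e-6, None)
    return float(np.sum((c_frac - b_frac) * np.log(c_frac / b_frac)))

rng = np.random.default_rng(0)
baseline_scores = rng.beta(2, 5, 10_000)    # scores at validation time
current_scores = rng.beta(2.6, 5, 10_000)   # scores observed this month

value = psi(baseline_scores, current_scores)
# Common rule of thumb: <0.1 stable, 0.1-0.25 investigate, >0.25 act.
print(f"PSI = {value:.3f}")
```

A check like this only flags that the population feeding the model has shifted; deciding whether that shift creates the "unintended consequences" Mitch warns about still takes the human review he describes.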

So I think we're going to start to see a lot more organizations doing this. It's a really, really hot topic. I see it a lot in my social media feeds, and I see it a lot in the media now. With the White House executive order, I think it's going to blow up even bigger, and I think that's a great thing. It's really critical in any industry; it's an aspect we've got to look at. But in healthcare, we're dealing with people's lives: their health and well-being, and the social aspects that can also impact their care. It's really important that we're making sure we're doing the right things for the right reasons.
