
AI’s Impact On Technology Teams with Jason Cooper

Highlights
Artificial intelligence is going to continue to reshape how businesses operate and collaborate with each other, and we need to embrace the interdisciplinary nature of that collaboration. It’s not just a technology, a data, or an analytics conversation.

Artificial Intelligence (AI) stands as a transformative force within technology teams, reshaping how tasks are approached, challenges are addressed, and solutions are devised. Its integration has not only accelerated the pace of innovation but has also presented both opportunities and complexities for technology teams. 

In this week’s CDO Insights we have with us Jason Cooper, Chief Technology Officer at Paradigm. Jason leads initiatives driving innovation and catalyzing growth within the Technology, Data & Analytics Group. With over two decades of executive leadership experience, he specializes in harnessing the potential of technology, data, and analytics to unlock significant business value. His expertise spans diverse sectors, including private enterprises, for-profit organizations, and nonprofits, showcased through pivotal roles at renowned companies such as HMS, Blue Cross Blue Shield plans, Cigna, and CVS.

Drawing from his extensive experience in healthcare and data management leadership, he highlighted how AI has reshaped team dynamics, both enhancing efficiency and posing challenges. Jason explored the delicate balance in using AI for workers’ compensation management, addressing concerns about job displacement and the need for streamlined AI-driven processes. He emphasized the importance of thoughtful implementation to maximize AI’s benefits while minimizing drawbacks, citing successful real-world examples and strategies to overcome adoption hurdles.

AIM: How has your experience been with the evolution of technology, especially in AI, impacting not just traditional applications but also AI technology teams working closely with business teams? Have you observed this evolution as consistently helpful, or have there been instances where it felt like an unnecessary intrusion into day-to-day operations?

Jason Cooper: I’ll answer in the context of today, and then we’ll take a little ride in the way back machine and talk about how this evolved over time. But I really do believe that artificial intelligence is reshaping how businesses operate and collaborate today, full stop. Everyone’s talking about it. We’ve had three really distinct waves over the evolution of artificial intelligence globally. I personally started doing artificial intelligence in healthcare in the early 90s and then later in the late nineties, first with computational lung modeling at the National Institute for Occupational Safety and Health, where we were trying to predict when coal miners might be showing signs and symptoms of black lung, and then later to predict cholesterol management in adults, meaning would it be better for you to use medication therapy or diet therapy to manage cholesterol?

Fundamentally, what I feel we’re dealing with is a bit of a human education and a human trust problem. So if I go back to the early and mid-90s, healthcare professionals weren’t yet ready to accept machines helping them make smarter decisions. We didn’t have the trust factor. We didn’t have a lot of the ethical practices and principles that we have today, but more importantly, really good decisions could still be made by humans based on the knowledge that we had at the time. Now, in healthcare at least, what we know is that so much new knowledge is produced on a global basis every day that if a physician, as an example, wants to practice at the top of their license, meaning utilizing every bit of knowledge they have to make the best decisions possible for a patient, they would have to read something like 16 to 18 hours every day. That’s impossible. I mean, how do you do that and take care of your patients, take care of your family, and take care of yourself? So what most of us have realized is we have to have machines help us make better decisions. That trust, and an understanding of ethical principles, has really guided the evolution of artificial intelligence, I believe.

AIM: As someone experienced in leading technology teams and now a CTO, how has the rise of intelligent technology changed how teams collaborate? Can you share examples of how it affected team interactions, both before and after integrating this technology, in terms of efficiency or potential challenges?

Jason Cooper: As the Chief Technology Officer at Paradigm, I’m really privileged to work with a great group of leaders. As part of that, I have not only technology but also data and analytics, and our group is actually called Technology, Data, and Analytics, purposefully put in that order because technology enables the data capture that’s important to then enable the analytics that drive decisions.

And so if we focus on the left-hand side of that first, which is the technological side, I think about things like automation and efficiency. AI can really help those in industry today streamline a lot of the repetitive tasks that we have to do, and that frees up resources and employees to focus on more strategic, creative, and complex work. So, as an example, if we can automate problem-solving for a call center or we can automate the reading of medical records to extract and understand those important things that are in that, then we can spend more time developing applications or enhancing applications that bring a lot more business value.

This crosses into what I would call interdisciplinary collaboration. Artificial intelligence really requires cross-functional collaboration between data scientists and engineers, domain experts, and all the business leaders that then help us develop and implement the AI solutions. The reason that I’m sort of tying that to the whole automation and efficiency component is it’s not simply a technology team’s remit to go and decide what to focus AI on. That’s actually an organizational and an enterprise conversation.

Imagine if we created a 2 by 2 grid that had on one axis how complex the business problem is that you’re trying to solve with artificial intelligence, and on the other axis the business value that you’re trying to accrue from it. You have conversations with your business stakeholders, and you start to plot the different problems that you need to solve against how much value can be derived from them and the complexity of the technological and analytical solution that you would need to create through AI. Then you might decide, wow, there’s this cluster of really cool ideas that are not that complex, but do require AI, and would generate high orders of business value, whether that’s in better healthcare outcomes or reduced operating expense and therefore efficiency. And that’s where you focus your efforts. But that’s a conversation that technologists have with analytic partners as well as your business leaders to arrive at that.
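To make that grid concrete, here is a minimal, hypothetical sketch in Python of scoring candidate AI initiatives on those two axes and surfacing the high-value, lower-complexity cluster he describes. The initiatives, scores, and thresholds are invented for illustration, not Paradigm’s actual portfolio.

```python
# Illustrative sketch of the value-vs-complexity grid described above.
# All initiatives and scores below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class Idea:
    name: str
    complexity: int  # 1 (simple) to 5 (very complex)
    value: int       # 1 (low) to 5 (high business value)

ideas = [
    Idea("Automate call-center problem triage", complexity=2, value=4),
    Idea("Extract key facts from medical records", complexity=3, value=5),
    Idea("Fully automated care planning", complexity=5, value=4),
    Idea("Reformat monthly reports", complexity=1, value=1),
]

# The cluster worth pursuing first: high value at manageable complexity.
priorities = [i for i in ideas if i.value >= 4 and i.complexity <= 3]
for idea in sorted(priorities, key=lambda i: (-i.value, i.complexity)):
    print(f"{idea.name}: value={idea.value}, complexity={idea.complexity}")
```

The point of the exercise is less the code than the conversation it forces: the scores come from business stakeholders, not from the technology team alone.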

So my point is AI has become a very interdisciplinary conversation to arrive at the things you want to focus on. Now the other thing I would mention on this topic is AI is just one tool in our toolkit. Some problems don’t require AI, so it’s also really important that we have the subject matter expertise within our teams to understand when to deploy AI to solve a problem and when to say ‘Hey, standard statistics or epidemiology or other analytical approaches will work just fine, and we don’t need to spend the time and effort to do an AI solution for this business problem.’

AIM: In the context of Paradigm’s focus on workers’ compensation management, how does the introduction of intelligent automation impact job displacement despite freeing up resources?

Jason Cooper: There are multiple ways to think about this. Let’s take a ride in the way back machine again and think about a historical example. This was really a concern in car manufacturing back in the day, when robots started to replace individuals in certain parts of the manufacturing process, whether it was spot welding or other component assembly or things of that nature. But what we learned were a few things. One, in certain instances machines were more efficient and faster than humans at doing those things. Two, they were highly repeatable and therefore more accurate and less error-prone. Also, you didn’t have to worry about injuries and other things that you might be concerned about. What came to pass, though, was not an actual overall job reduction. Yes, people were displaced, to use your terminology, from the manufacturing floor by those robots that were welding. But guess what, we needed more knowledge workers, because someone needed to build those robots, install them, and maintain them, and someone needed to program them, because cars change every year. It isn’t like those robots just created the same car for a decade; there had to be a lot of changes.

Now I fast forward to today and your point about automation and efficiency. I’ve actually had this question at a few different conferences and other venues where I’ve spoken about artificial intelligence. The question comes up, sometimes quite pointedly: “Is AI going to take my job?” And my answer is that artificial intelligence will not take your job. However, someone who understands artificial intelligence, and more importantly someone who understands how to leverage artificial intelligence for the benefit of their business, may in fact take your job. So this really becomes an opportunity for us to rapidly advance and adapt to artificial intelligence. It also gives us a chance to upskill and reskill our teams in the way that they think about and approach problem solving.

I also think about the way we began to do automation in the past, which was around self-service. If you remember, not just seven or ten years ago, data visualization was a really huge thing: I can take a huge workload off of my analytics team by allowing the business to probe and ask questions themselves. We’ve taken that a step further with things like large language models, ChatGPT, and OpenAI, where we’ve placed new self-service tools in the hands of our business stakeholders so that they can ask more complex questions and get answers. And what does that allow the analysts and other scientists and technologists in our organization to do? Focus on more complex tasks, and focus on innovating and thinking about the business problems of tomorrow, where yesterday they were solving those more rote tasks for which we were unable to create self-service for our business stakeholders. So this is, in my opinion, a second evolution of data visualization and dashboarding, and now we’re moving into asking questions and probing our data using large language models.

We’ve even moved to the place where we’re optimizing the way our data science teams operate by saying, “This is the problem that I have and these are the datasets that I’m working against. What would you suggest: a neural network, a support vector machine, a Kohonen map, or pick your methodology?” And before you know it, we don’t have to go and attempt a hundred or two hundred different models. Artificial intelligence itself recommends, based on the dataset type and the problem we’re trying to solve, that we might want to try these five or six. So it’s actually giving us speed to answer: faster resolution of problems and business solutions that come much quicker. Again, I don’t see an immediate displacement as much as upskilling and reskilling teams on these new capabilities and assets you have at your disposal. I don’t think AI will take someone’s job. I just think someone who knows, understands, and can leverage AI may take someone’s job.
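As a rough illustration of that model-shortlisting idea, here is a hypothetical rule-of-thumb function in Python that maps coarse dataset traits to a handful of candidate methods, standing in for what an LLM-based assistant might recommend. The rules are simplified assumptions for illustration, not an actual recommender.

```python
# Hypothetical sketch: narrow hundreds of candidate models down to a
# shortlist based on coarse dataset traits, standing in for an LLM-style
# recommender. The rules of thumb below are illustrative only.
def suggest_methods(task: str, n_rows: int, labeled: bool) -> list[str]:
    if not labeled:
        # No labels: clustering / self-organizing approaches.
        return ["k-means", "Kohonen self-organizing map", "DBSCAN"]
    if task == "classification":
        if n_rows < 10_000:
            return ["logistic regression", "support vector machine",
                    "gradient boosting"]
        return ["gradient boosting", "feedforward neural network"]
    if task == "regression":
        return ["linear regression", "gradient boosting", "neural network"]
    return ["start with exploratory analysis"]

# Five or six candidates instead of a hundred or two hundred attempts.
print(suggest_methods("classification", n_rows=5_000, labeled=True))
```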

AIM: Have you seen instances where these technologies hinder rather than aid, much like excessive processes can impede efficiency? Specifically within AI-driven automation in tech teams, could you provide examples where an overload of processes reduced effectiveness? Also, what parameters would you suggest to avoid such hindrances in broader AI-driven initiatives?

Jason Cooper: First of all, not every problem requires artificial intelligence, and secondly, just because you can do AI doesn’t mean you should. Part of your business process decision-making, that interdisciplinary collaboration that I mentioned earlier, should be looking at efficiency and saying, ‘We could do this with artificial intelligence, but it might actually take us longer to create the models and deploy them than it would to solve this using something else,’ like a smart ETL process or some other way to automate things.

I can think of examples in the healthcare world where we just need a really good epidemiological approach, like a retrospective matched case-control study, or we want to design a prospective randomized trial. That doesn’t require an AI solution, and as a matter of fact, introducing artificial intelligence may complicate matters. I’m sure there are centers of excellence, whether it’s Duke University or Stanford or Johns Hopkins or others (and I’m only speaking in the US framework), but typically in healthcare, when you go about doing clinical trials, whether it’s for pharmaceutical options or for healthcare treatment, you have to go in front of something called an Institutional Review Board (IRB). And IRBs aren’t traditionally versed in artificial intelligence. So it may take you longer to get approval of a study, because you’re dealing with a body of professionals that first needs to be educated on what AI is, rather than going back and using standard statistical and epidemiological methods to solve a problem that, at bottom, doesn’t require artificial intelligence.

Two, you really have to be careful about how you train models. I know this wasn’t the root of your question, but it gets to inefficiency, because if you come out with the wrong business solution, then you have to go back to point zero and start all over again, and that is inefficient. So, I promise I have a point here, which is: with the data that you train your models on, you have to think about the potential biases that you’re inserting. I’m not going to get political, but just to give some examples outside of healthcare that I think are important: if I have a dataset and all I train on are Republican viewpoints or Democratic viewpoints or Independent viewpoints, or if all I train on is the Southwest region or the Northeast region, then my model only knows that slice of the world.

What most of us are trying to do, with things like large language models, is generalize; there are rare special circumstances that call for highly fitted artificial intelligence models, but generalization is the usual goal. And in healthcare specifically, if you’re not critically thinking about the datasets that you’re training your models on, you can lose generalization and therefore lose the ability to apply your models to underserved and underrepresented populations. What you’ve done then is insert bias into your business decision-making, because the data that you trained your models on was fundamentally biased to begin with. And my point is that if you get to deployment and realize, ‘I used AI to solve a problem, my underlying data was biased to begin with, and now I have to start all over again,’ then perhaps if you had simply started with much easier mathematical models or statistical models or epidemiologic methods, you wouldn’t have gone down that path, and you wouldn’t have inserted that underlying data bias to begin with.
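One simple, concrete guardrail against that kind of training-data bias is to compare subgroup representation in the training set against a reference population before any modeling begins. Below is a minimal sketch in Python; the regions, proportions, and warning threshold are invented for illustration.

```python
# Minimal sketch: flag subgroups that are badly under-represented in the
# training data relative to a reference population. All figures invented.
reference = {"Northeast": 0.17, "Midwest": 0.21, "South": 0.38, "West": 0.24}
training  = {"Northeast": 0.41, "Midwest": 0.30, "South": 0.20, "West": 0.09}

for group, expected in reference.items():
    observed = training.get(group, 0.0)
    if observed < 0.5 * expected:  # arbitrary cutoff: under half the expected share
        print(f"WARNING: {group} under-represented "
              f"({observed:.0%} in training vs {expected:.0%} expected)")
```

A check this cheap, run before training, is far less costly than discovering the bias after deployment and starting over.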

It’s a really, really important thing: when you make these grids of business value versus analytical complexity, you start to look at which problems within that grid can be solved with AI, and you go after only those that, one, really and truly require artificial intelligence and, two, have robust datasets that can be specialized or generalized in a way that doesn’t unethically insert bias into your business answers. Those are the things you go after, because if you do it any other way, you’re going to have to retreat and start all over again, and therefore you’ve inserted an inefficiency into the business process.

AIM: Is there both a technological side and a social dimension to consider when determining when not to use certain technologies, such as AI?

Jason Cooper: There’s a socio-economic side as well as a psychosocial side. The human aspect of this, especially in healthcare where we’re dealing with people, caregivers, and a caregiving network, is so important. That’s actually one impact that we haven’t talked about, where I think artificial intelligence holds a lot of promise: enhancing patient experience. Artificial intelligence can help personalize patient experiences and interactions, and that improves both their satisfaction with the services that you’re providing and the outcomes that they get.

If you think about how in our day-to-day interactions we’re interacting with Alexa or Siri or some other AI agent on our behalf, we can do that in healthcare as well for the betterment of our patients. Or we use AI to do those same types of things for care teams. One example I can give in my organization is we have a new robust product that we’re offering for musculoskeletal conditions. So you might have issues with your knee or your shoulder or your back, and you need either a surgical or a non-surgical intervention.

Part of a care team’s decision-making process is, one, to understand the complexity of what you’re presenting so that they can help guide decision-making toward a non-surgical intervention where appropriate, and two, to understand comorbidity factors, psychosocial factors, and socioeconomic factors. There’s so much data there, both in an organization’s datasets like ours and in what’s publicly available or what the individual patient might self-report to you, that you never would have thought to include in a model that helps you triage where an individual needs to be: high risk, medium risk, low risk.

Understanding that and embedding it in your care models, so that your care team can most efficiently care for that patient, leads to a much better patient experience, and we’re doing that in our musculoskeletal program, just as a real-world example.

AIM: How can we address the challenges and boost adoption in scenarios where AI is genuinely beneficial, considering what we’ve discussed about where it shouldn’t be applied? Are there successful real-world examples that showcase strategies for overcoming these hurdles?

Jason Cooper: What you’re getting at, to some degree, is DevOps and ModelOps: Development Operations and Model Operations. In healthcare, we would call this ‘Bench to Bedside.’ What I mean by that is, how do you take your research and development efforts and move them to where you can rapidly deploy them at the bedside or, in this case, in the real world? Part of that is the efficiency of how you develop. This is an example of ‘fail fast and learn fast’: the test-and-learn cycles that you do in AI modeling need to be very rapid, agile, and iterative. Once you find something that works, you have to understand how to develop, deploy, and operationalize it.
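As a toy illustration of that rapid test-and-learn cycle, here is a minimal sketch (assuming scikit-learn is available) that iterates over a few candidate models, scores each quickly with cross-validation, and promotes the best one to deeper development. The candidates and synthetic data are illustrative, not Paradigm’s pipeline.

```python
# Minimal sketch of a fail-fast, iterative test-and-learn loop: evaluate
# several candidate models quickly and keep the winner for fuller work.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in data; a real cycle would use the business dataset.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

candidates = {
    "logistic_regression": LogisticRegression(max_iter=1_000),
    "random_forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}

# Quick 3-fold evaluation: fail fast on weak candidates.
scores = {name: cross_val_score(model, X, y, cv=3).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"Promote '{best}' (CV accuracy {scores[best]:.3f}) to deeper development")
```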

The other reality is that in AI we have both supervised and unsupervised learning. Supervised learning requires human intervention, and part of the problem we just discussed is the underlying bias that data can bring with it into modeling. The other bias is actually human curation. So with supervised models that require tagging of elements, for image recognition as an example, you have to think about that human curation bias as you move from development to model operations.

The other thing about the dichotomy of supervised and unsupervised learning is: how often does your data change such that your models need to be updated? There’s no one-size-fits-all answer. For some models, the data changes on a daily basis; I think of the stock market and trading, where financial and macroeconomic conditions can change overnight, and you may need to be training your models continuously. In other instances, such as highly rare but catastrophic injuries, you may only need to retrain your models monthly or quarterly.

Understanding how much your data changes to then drive the learning cycles, therefore the development and model operations of your artificial intelligence solutions, is really important. Again, not one size fits all, and certainly not the case in healthcare because you have very prevalent events, like cardiovascular disease and diabetes, and models like that not only have large populations to work with, but also the data is relatively known. Only if a new treatment or a new medication comes out might you start to have the need to retrain. But on more rare events or things that have a very low frequency, that’s when you need to accrue more data and more treatment exposure, if you will, to then know better when to optimize your models.
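That retraining-cadence question can be made operational with a drift check: compare newly arriving data against the data the model was trained on and retrain only when the distribution has shifted meaningfully. Here is a minimal sketch using a two-sample Kolmogorov-Smirnov test (assuming SciPy is available); the synthetic data and significance threshold are illustrative assumptions.

```python
# Minimal sketch: let measured data drift, rather than a fixed calendar,
# decide when a model needs retraining. Data and threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(seed=42)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)  # data the model saw
incoming_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)  # newly arriving data

stat, p_value = ks_2samp(training_feature, incoming_feature)
if p_value < 0.01:  # distributions differ meaningfully: schedule a retrain
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.1e}): retrain the model")
else:
    print("No significant drift: keep the current model")
```

For a fast-moving domain this check might run daily; for rare, slow-moving events, monthly or quarterly, which is exactly the cadence difference described above.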

So, understanding that life cycle from R&D to development to then operational deployment and how much you need to retrain your models is really important. If we go to the full version of unsupervised learning, one would assume that the models themselves will understand that data latency and that data frequency problem and be able to train themselves. But I would say that human supervision is still really important because outliers and edge cases and other things can still insert really weird error parameters in models.

Even if you have unsupervised learning and you feel like your models are on a really good cadence, you still should have some type of manual review cycle where you bring data scientists together to understand and review model outputs, to best guide things and at least have a quality check for what is not working. But that life cycle of deploying artificial intelligence solutions is, I feel, still maturing, to be quite honest.
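That manual review cycle can be partially automated by routing only anomalous model outputs to the data scientists’ queue. Below is a minimal, hypothetical sketch using a simple standard-deviation rule; the scores and cutoff are invented for illustration.

```python
# Minimal sketch: queue only outlier model outputs for human review, so
# data scientists spot-check the weird cases rather than everything.
import statistics

model_scores = [0.42, 0.44, 0.40, 0.45, 0.43, 0.97, 0.41, 0.05]  # invented outputs
mean = statistics.mean(model_scores)
stdev = statistics.stdev(model_scores)

review_queue = [
    (index, score) for index, score in enumerate(model_scores)
    if abs(score - mean) > 2 * stdev  # arbitrary cutoff for "looks weird"
]
print("Flagged for manual review:", review_queue)
```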

AIM: Any closing thoughts, Jason?

Jason Cooper: First of all, I really appreciate the time with you and AIM inviting me to have this conversation on this podcast. I would simply reiterate that artificial intelligence is going to continue to reshape how businesses operate and collaborate with each other, and we need to embrace the interdisciplinary nature of that collaboration. It’s not just a technology, a data, or an analytics conversation; it is, full stop, a business discussion. Understanding how AI can bring value to the table at the right time and in the right place is really important. It’s here to stay, and having someone who understands artificial intelligence and how to leverage it is, I think, really critical for organizational maturity.

AIM Research is the world's leading media and analyst firm dedicated to advancements and innovations in Artificial Intelligence. Reach out to us at info@aimresearch.co