
The Right Talent and the Right Tools Drive Real Business Change, According to Stephen Harris

The world will be a better place when curiosity and knowledge are applied in the right way to drive meaningful AI solutions.

AI talent is a game changer in today’s business world, but effectively harnessing it requires more than just hiring skilled individuals. Stephen Harris believes that solving complex business and technology challenges begins with understanding the importance of talent and transformation in the workforce. As organizations increasingly rely on AI, it’s not just about getting the right people but about empowering them to deliver value in a way that simplifies the path forward. In this episode, Stephen shares his journey and insights into how businesses can truly integrate AI talent, transform their workforce, and drive real impact.

Stephen, Corporate Vice President and Global Head of Data Science & Growth Analytics at Microsoft, has more than 20 years of experience and has managed teams of 400+ professionals and budgets exceeding $1B. He brings a wealth of knowledge in leveraging data science to create impactful solutions that simplify the path forward. One of his proudest moments was being recognized by NASA for his critical role in the STS-107 Columbia Accident Investigation. Today, he joins Kashyap Raibagi, Associate Director of Growth at AIM Research, to discuss how data science is transforming industries, the future of AI, and his approach to building high-performing teams that deliver lasting value.

Key Highlights from the Podcast:

  1. Strategic Role of Data Science: Stephen shares how data science techniques are not just about numbers but about solving the most pressing business challenges that drive value for global organizations.
  2. Building High-Performing Teams: A deep dive into his experience managing large-scale teams, where Stephen discusses the key factors to creating and leading teams that excel in complex projects.
  3. NASA Experience: Stephen reflects on his proudest moment—being recognized by NASA for his contributions to the STS-107 Columbia Accident Investigation—and how that experience shaped his professional journey.
  4. Navigating Challenges in Technology and Business: Insights into the critical intersection of business and technology, and how he approaches some of the toughest problems in today’s digital landscape.
  5. Future of AI and Data Science: A forward-looking conversation about where AI and data science are headed, and how businesses can leverage these technologies to stay ahead in an ever-evolving industry.

Kashyap: Hello and welcome, everyone, to the next episode of the AIM Media House podcast, Simulated Reality. Today we are with Stephen Harris. How are you doing today?

Stephen: I’m doing great, Kashyap. It’s a beautiful sunny day here in the DC area.

Kashyap: So, Stephen, before we dive into the topic, I wanted to help our audience understand your journey in the world of AI. You’ve done some truly interesting work, and that’s why I didn’t want to narrow down your designation to just one title. I wanted to keep it open. Could you share a bit about the work you’ve been doing over the years? And, in line with that, what inspired you to speak about AI talent, AI work, transformation, and how to get it done?

Stephen: There’s a couple of things just in terms of my journey. So, I have been on this journey around data for nearly 30 years now, and it all started for me with a pivotal moment in my career related to the work that I was doing many years ago at NASA, where I was building the NASA Assurance Technology Center. This body of work was very important, supporting all of NASA’s Safety and Mission Assurance Organization. Basically, at the time, it was implementing the first agency-wide knowledge management system. We called it Process-Based Mission Assurance (PBMA) KMS—Knowledge Management System.

During that time, the Space Shuttle Columbia disintegrated upon reentry. We all know that the causal factors were associated with foam breaking away during launch, which caused damage to the shuttle’s wing. Because that damaged surface area was exposed as the shuttle reentered Earth’s atmosphere, the heat accelerated the erosion, resulting in a catastrophic failure and loss of life.

On my wall, I keep an award and a plaque that I received from the agency. It has—I don’t know if you can see it there—but it’s this award here, given to me for the body of work that we led in support of the Columbia investigation. The reason that was so pivotal for me comes down to two factors. The first is that I was in Washington prior to that accident, presenting our strategic plan for the year, supporting SMA with our full scope of work. One of the items I had requested during that timeframe was a probabilistic risk assessment tool. My team, at the time, used shuttle data to help us predict, or better understand, where we had impending risks, to give us better insight into where we might be entering a zone with factors that could lead to catastrophic failure of any particular mission.

Ironically, while the purchase of that software was not approved in the budget, we understood as a community within NASA that foam escaping from the shuttle was treated as an acceptable, managed risk within the agreed threshold. What we didn’t understand at that time was that foam escaping could actually create significant damage during a particular mission, and that such damage would lead to the catastrophic event we are all familiar with. The reason that’s important to me, and the reason it was such a pivotal moment in my career and in my decision to double down on data and effective data storytelling, is that seven people lost their lives. Seven mission specialists lost their lives.

I walked into the room and said, ‘Here’s our strategic plan, but there’s one topic that I think is most important that we discuss.’ That topic was related to our next catastrophic failure, which was sitting right under our nose. Raising the red flag and telling the story in a way that would have created pause for investigation may have contributed to saving those lives. For me, I took personal accountability as a result of that event. For the next 11 months, during the time the Columbia Accident Investigation Board and the Columbia Task Force were assembled, I was at the center of enabling that investigation, supporting all of the members of the Columbia Task Force, of which I was a member, and the Columbia Accident Investigation Board as they assembled their findings and created their report, which went on to Congress.

As a result of that, NASA formed what they called the NASA Safety Center. The NASA Assurance Technology Center became the Safety Center, which then followed military protocols established on the basis of the Navy’s SUBSAFE submarine safety program. That, I think, put NASA on a declared mission in terms of how safety, mission assurance, and their execution around the future space program were handled; the archetypes of that program were emulated, brought over, and established for NASA. I exited at that time, but the thing that rings true and stays in my mind is that, at that time, we were doing data work, if you will. It was under the umbrella of knowledge management.

Today, I think that work is still relevant. In all that I do, I remember this program as the founding, pivotal, career-changing moment for me, and I have ever since been committed to data analytics and the world of AI—which is not new to me, although it’s all the rage in the marketplace today. I think there are significant elements around how enterprises, organizations, communities, and we as individual consumers need to understand the power that sits behind this technology and all of its capabilities. But we also need to be mindful of the risks associated with it so we can avoid pitfalls and potential failures. In cases where AI is being used to govern and protect citizens, where AI is enabled to provide insight, actions, and recommendations, those capabilities must be well proven and validated and have rollback capability, so that we can prevent harm to both humans and enterprises as we all look to adopt, take on, and scale this technology.

Because it does have its place in our world. It’s very present. It’s not our future. It is our present. And so, in our present, we all share an accountability of understanding the power of the technology, how it should be used, and where we need to be vigilant in terms of driving adoption for its intended purposes.

Kashyap: Thank you for sharing that story with us; it’s quite inspiring. I completely understand why you’re so driven by the field of data science and AI. In line with that discussion, today, we’re going to talk a little about transformation. As someone who has been through this journey in AI, I’m sure you’ve worked with different sets of teams, and at the same time, the talent requirements have evolved over the years. What has your experience been in terms of how the talent working on AI has transformed itself? Are they quick to adopt the new requirements and tools needed for the field?

Stephen: I have had a very interesting, and ultimately good, set of experiences. It’s been a learning journey, not only for myself but also, I think, for the teams and all of the functions around an enterprise that my teams have supported, both past and present.

I think there are a couple of things to be aware of. The first is understanding the ecosystem in which that talent exists, because the culture of an enterprise or organization will have a very significant influence on the shaping of that talent and on what the talent is able or unable to bring to life as part of their own journeys.

Uniquely, I have seen my teams leverage machine learning and the capability of AI not only to drive insight that fosters better understanding around product development, product adoption, and the use of that product, but also to advance the maturation of talent in a pretty significant way. For many years, folks in this data space have worked within a domain that has so many different derivations in terms of job classifications: data analysts, advanced analytics specialists, machine learning engineers, AI engineers, and data engineers.

That vast, broad variance in skills comprises elements that, I believe, layered with strategy, help drive, prioritize, and inform what the team produces as a product. Because I view the AI capability, and data in its entirety, as a product but also as a service. And so assembling talent around a particular vision, mission, and set of goals (creating products, establishing software patents, and achieving a set of very specific business outcomes where you specifically measure and land that business impact) requires, I think, the amalgamation of those different skill sets.

As we think about AI in its entirety, the journey has been one where I have had to help individual and collective team members understand and respect the domain of their peers. Meaning, someone who comes to the table from a background of data management and governance is just as informed and valuable as the data scientists on the data science team, some of whom rank among the top ten Kaggle contestants worldwide. They hold those rankings, and they’ve worked for me.

And so, getting them to understand and appreciate the value that each brings to the table is just as important as my role in ensuring that the leadership, the C-suite, and those we serve in the community value what we bring to the table as a profession.

Kashyap: You explained well the array of data roles and the significance of your teammates understanding the differences between these roles and the intricacies each brings to the table. The work itself has also evolved a lot. When I was a data scientist in 2015, I used to do mainly ETL and data wrangling, and a lot of querying. Today, much of that work has been automated to a large extent, and now a lot of the focus is on generative AI. What has your experience been working with these teams in terms of the transformation that fresh graduates coming into this data ecosystem have had to adopt? How do you ensure they are trained and ready for the real world, so they can hit the ground running?

Stephen: There’s one thing that you can’t train for but that I think is essential in the DNA of any individual who sits within this domain or space, and that is the element of curiosity. I think any individual who comes to the table with aspirations of becoming a data scientist must first do an internal check and ask, ‘Am I actually a person who is curious?’

Do I seek to understand when something is not prescribed, written down, or in a book? Do I uniquely go on a journey and endeavor to learn, understand, dissect, and break down a problem and then think about paths to solutions? I think that, at the core, is the very first enabler for anyone that sits in this discipline and wants to be successful.

It’s all the hype and rage today, but this is largely a domain and discipline founded in the space of applied mathematics, right? And so the question becomes, ‘How do I take applied mathematics and make it work in an ecosystem—for healthcare, for finance, or any business domain?’ Essentially, it could be product development or product creation—it doesn’t really matter. I think the foundation needs to be wrapped in this element of DNA called curiosity.

As an explorer of this particular space, starting your career in analytics and data science and evolving into this world of AI requires someone who is committed to the journey. As a leader, it is my responsibility to ensure that newcomers to the table are paired with seasoned team members who can help curate a learning journey. This journey gives them exposure not only to the technology and the ability to put their hands on the keyboard for development but also rounds it out with a business understanding—understanding why we do what we do, why it really matters, and the essential components of how we ensure that what we are delivering has the ability to make an impact on the business.

The value must be well-articulated and understood. It should be more than just perceived—it must be realized. And, going back to my days of learning at NASA, it is about telling the story. It’s not just about presenting a set of results to a business leader—it’s about interpreting those results in a business context so that the recipient of the product we are delivering understands and appreciates the value, consumes it in a way that is useful for them to make decisions, and acts on it.

I think helping the workforce transform in that way requires those sorts of critical tenets when you think about the journey. So, it’s more than just having a career map, taking courses, and passing internal exams. It’s about inserting yourself into an environment where you can actually thrive as an individual but also collectively as part of a team. That team is able to use its talent to bring value and benefit to the enterprise or the organization of which they are a part.

Kashyap: Given the evolution of AI tools like ChatGPT and Claude, along with platforms that let users build machine learning models through drag-and-drop interfaces, how can organizations ensure that they are building strong teams with a solid understanding of foundational concepts, avoiding the risk of developing models without true comprehension? Additionally, what is your definition of a good machine learning engineer going forward?

Stephen: I think there are a couple of things to unpack in understanding the journey of a data scientist. First, there’s the recognition that the proliferation of tools and technology will be expansive, irrespective of the landscape or environment in which you operate. What’s most critical, as we think about talent and developing that talent, is also taking on this vast ecosystem of both technology and processes: shaping the development of models, curating those models, managing and governing those models, and driving their adoption.

The principles by which one operates, or how that technology operates, really form the foundation. Stepping into this space, I think there needs to be a set of guiding principles that one operates by. Those guiding principles should be sustainable and should not change based on budget, board changes, or shifting business priorities. These are foundational design principles.

When you think about data architecture at its core, there are architectural principles by which one validates a design, right? And so it goes for this space of AI & ML—there are design principles by which one should govern all of their bodies of work. I think where the rubber meets the road is in the process of taking a model from inception into production. Understanding is important; yes, you can technically build ML models and move them into production without fully understanding the data they consume.

But my premise, foundation, and principles all start with the data. The technology will be the technology, right? I’ll use a simple example of business intelligence (BI) tools. Every tool has the capability to consume data. But here’s what happens: you can run the same set of data through five different BI tools, and they’ll give you five different answers. Why? Because the semantic layer within each tool is not common or the same. That discrepancy carries from the semantic layer through to the presentation layer in a BI tool.
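To make the semantic-layer point concrete, here is a minimal sketch. The rows, the metric, and both sets of semantics are hypothetical and not tied to any real BI product; the point is only that two defensible definitions of the same metric diverge on the same data:

```python
# Same raw rows, two hypothetical "semantic layers," two different answers
# for the same metric ("average deal size").
orders = [
    {"id": 1, "amount": 100.0, "status": "complete"},
    {"id": 2, "amount": 250.0, "status": "refunded"},
    {"id": 3, "amount": None,  "status": "complete"},   # missing amount
    {"id": 3, "amount": 40.0,  "status": "complete"},   # duplicate id
]

def avg_deal_size_tool_a(rows):
    """Tool A's semantics: exclude refunds and null amounts, dedupe on id."""
    seen, values = set(), []
    for row in rows:
        if row["status"] == "refunded" or row["amount"] is None:
            continue
        if row["id"] in seen:
            continue
        seen.add(row["id"])
        values.append(row["amount"])
    return sum(values) / len(values)

def avg_deal_size_tool_b(rows):
    """Tool B's semantics: count every row, treating null amounts as zero."""
    values = [row["amount"] or 0.0 for row in rows]
    return sum(values) / len(values)

print(avg_deal_size_tool_a(orders))  # 70.0
print(avg_deal_size_tool_b(orders))  # 97.5
```

Neither tool is "wrong" in isolation; the definitions simply differ, which is why the semantics have to be made common before the numbers can be trusted.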

It’s very much the same with ML models: transformations have the potential to produce incorrect recommendations because of the foundation of that data and how the model uses it. That may give you a different result set and a set of unintended consequences, creating risks for the business that could all be avoided. How do you manage that? Part of it is understanding the journey the data has been on and ensuring the right classification and attribution sit around that data, particularly in its metadata layer.
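As one way to picture that journey of the data, here is a small sketch of carrying classification and lineage attribution alongside a dataset. The fields and names are hypothetical, not drawn from any particular catalog or governance tool:

```python
from dataclasses import dataclass, field

@dataclass
class DatasetMetadata:
    """Metadata-layer record that travels with a dataset."""
    name: str
    classification: str            # e.g. "public", "confidential", "pii"
    source_system: str
    lineage: list = field(default_factory=list)  # ordered transformations

    def record_step(self, description: str) -> None:
        """Append one transformation to the dataset's recorded journey."""
        self.lineage.append(description)

meta = DatasetMetadata("customer_orders", "confidential", "crm_export")
meta.record_step("dropped refunded orders")
meta.record_step("imputed missing amounts with column median")

# A model team reviewing this record can see every step the data has taken
# before deciding whether it is fit to train or score a model.
print(meta)
```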

More importantly, you must validate for outcomes. You are never going to be able to design a model that tests for every possible risk—that is never the goal. But you need a capability that allows you to validate the model as it learns, grows, and becomes more powerful and effective. This involves contrarian models. Contrarian models help you better understand, as the creator of the model, where the model potentially has opportunities to go off the rails.

You can manage those deviations through capabilities that allow you to alert, roll back, and divert scenarios in which the recommendations or outputs are not in line with your original design principles and the intended outcomes you inspect and expect.
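A minimal sketch of that contrarian-model idea, under stated assumptions: a separately built "contrarian" scores the same inputs as the production model, and any divergence beyond a threshold triggers an alert and diverts the decision to the last validated fallback. The models here are stand-in functions, and the threshold is arbitrary:

```python
def validate_with_contrarian(champion, contrarian, fallback, inputs,
                             divergence_threshold=0.2):
    """Score inputs with the production model, cross-checked by a contrarian."""
    decisions = []
    for x in inputs:
        primary = champion(x)
        check = contrarian(x)
        if abs(primary - check) > divergence_threshold:
            # Deviation beyond threshold: alert and divert to the fallback.
            print(f"ALERT: divergence {abs(primary - check):.2f} on input {x}")
            decisions.append(fallback(x))
        else:
            decisions.append(primary)
    return decisions

# Hypothetical scoring functions standing in for real models.
champion = lambda x: 0.9 if x >= 4 else 0.1   # model now in production
contrarian = lambda x: 0.8 if x > 5 else 0.2  # independently built check
fallback = lambda x: 0.4                      # last validated version

print(validate_with_contrarian(champion, contrarian, fallback, [1, 4, 9]))
# The alert fires on input 4, which is routed to the fallback: [0.1, 0.4, 0.9]
```

In practice the alert would feed a monitoring system and a human reviewer rather than standard output, but the shape is the same: detect deviation, divert, roll back.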

Throughout that lifecycle—from development into production, and all the monitoring that surrounds the production version of the model—these elements need to be fully understood by the team or individuals delivering those models as a service to the enterprise.

Kashyap: Given the evolving hiring technologies and the irreplaceable value of curiosity in talent, how can organizations effectively utilize these tools to attract the right talent, ensure they foster curiosity, and support reskilling and upskilling efforts to build teams that fundamentally understand AI and data science, delivering solutions that are both sustainable and scalable?

Stephen: I take the same principles of risk management and apply them to this domain and space. As I educate, lead, and continue to grow as an executive in this space, that principle stays in effect: we are all risk managers, right? We must operate as managers of risk, irrespective of our job classification and title. I think that holds and rings true in this domain and space of AI.

When you think about talent and workforce development, hiring, and skills development, I think there are a couple of things that chief people officers need to be acutely aware of. That requires a bit of double-clicking, not a wave of the wand saying, “Make sure you do this.” It is the CPO’s job to inspect what they expect.

That means, if you are acquiring technology that has embedded AI capability, and that capability is part of your job applicant tracking system (ATS), and that capability has not been fine-tuned to your business process, to your priorities, to what’s important to you as an enterprise, you are putting your organization at risk for bias that becomes inherent in the model. This is because the community of talent that floods that applicant tracking system is training that model.

And so, if that model is filtering out highly qualified candidates, you can be held accountable for that. As we know, this is a very present scenario for many organizations that turned the AI capability on but didn’t understand what was really going on in the background. It can become a significant legal matter, not only creating risk for the enterprise but also potentially damaging your reputation in the market as an employer.

So let’s start at the beginning. When we are doing talent acquisition at the front edge of our enterprise, where we want to bring in good talent, we must ensure that we have tested, validated, and understood the levers that need to be modified, adjusted, and fine-tuned over time. This ensures that we are actually seeing candidates, and that they are not falling out of our ecosystem and putting us at risk because we placed our priorities elsewhere and turned that responsibility over to technology with no human in the loop.
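One hedged illustration of inspecting what you expect from an AI-assisted applicant tracking system is a routine adverse-impact check on screening outcomes. The four-fifths rule applied here is a common US guideline, but the groups, counts, and threshold below are hypothetical, and a flagged group would go to human review rather than any automatic action:

```python
def selection_rates(outcomes):
    """outcomes maps group -> (passed_screen, total_applicants)."""
    return {group: passed / total
            for group, (passed, total) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` (the
    four-fifths rule) of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

# Hypothetical screening results from the ATS over one review period.
screen_results = {"group_a": (60, 100), "group_b": (30, 100)}

print(selection_rates(screen_results))       # {'group_a': 0.6, 'group_b': 0.3}
print(adverse_impact_flags(screen_results))  # {'group_a': False, 'group_b': True}
# group_b falls below four-fifths of group_a's rate: route to human review.
```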

That’s number one. Secondly, I think there’s an internal responsibility, because there’s fear that AI will replace jobs, and in fact, that may be true. So let’s deal with that. But what does that mean for existing talent? I think it’s an opportunity, and I think chief people officers and C-suites need to have really intense strategic intent around how to evolve the workforce and leverage the knowledge of these seasoned or developing individuals.

We must elevate and train them not only to be curators and governors of the technology but also to think more strategically, so that we can leverage that talent far more effectively to drive business value and impact. As we continue to evolve and scale the technology for many different use cases across the enterprise—in effect, reimagining how AI sits at the center of the enterprise, almost as a center of excellence—it drives operational efficiency, reduces business risk, curates product development in every domain, improves our financials, assists with regulatory response, aids legal in case management, and amplifies audit’s ability to be more effective as a function.

You pick the domain and the discipline—AI can sit there. But the individuals who have the requisite knowledge need to be branded, if you will, and quickly built up with skills that help them shape and govern both policies and standards around the application of the technology that supports their business function and capability.

Moreover, they must become voices of reason across the enterprise so that technology doesn’t become the active agent for all things within an enterprise, removing the sensibility of what a human in the loop brings to the table and the domain expertise those individuals provide.

So, there is a fiduciary responsibility. There is a workforce automation and transformation responsibility that sits with the enterprise. I think reimagining how those work together in concert ensures that adoption happens in the right way, meeting expectations and results that satisfy commitments to the board but also involve the talent within your ecosystem.

This enables you to become smarter and much more efficient in how you operate and deliver your portfolio of capabilities and services—be it to civilians or citizens of a particular state, customers and partners within your ecosystem, or your employees. How do we make the world and the life of an employee working on a daily basis much more effective and efficient?

Kashyap: This has been fantastic, Stephen. Thank you so much for sharing your insights and enthusiasm with us. It’s clear how passionate you are about the subject, and your work is truly inspiring. I’m confident that our audience will greatly benefit from the valuable perspectives you’ve shared today. Thank you once again!

Stephen: Absolutely. It’s been a pleasure. The world will be a better place when curiosity and knowledge are applied in the right way to drive meaningful AI solutions.

Anshika Mathews
Anshika is an Associate Research Analyst working for the AIM Leaders Council. She holds a keen interest in technology and related policy-making and its impact on society. She can be reached at anshika.mathews@aimresearch.co