AI vs. Human in the Loop: Scaling AI for Contact Centers and CX

AI can offer choices, calculate benefits and risks, and bring up policies. But, at the end of the day, the human has to be the one to make the conscious call, because they understand the context in a humanized way.

AI in the loop represents a transformative shift in how we perceive the integration of artificial intelligence in customer service and experience management. Rather than viewing AI as a replacement for human roles, this concept underscores the importance of human intuition and empathy in enhancing AI capabilities. In a recent discussion, industry leaders Katie Stein, CEO at IGT Solutions, and Amaresh Tripathy, Managing Partner at AuxoAI, explored how this paradigm can unlock human potential, emphasizing that true customer experience is built on a foundation of understanding, context, and the nuanced interactions that only humans can provide. By focusing on the interplay between technology and human expertise, organizations can redefine their approach to service, ultimately fostering deeper customer loyalty and advocacy.

Key Highlights 

  • The concept of ‘AI in the loop’ emphasizes the role of human expertise and empathy in conjunction with AI, rather than viewing AI as a replacement for humans.
  • Effective customer experience relies on understanding brand culture and the value placed on human interaction, especially during critical moments.
  • Automation should focus on tasks that lack complexity and where customers prefer lower costs, while more nuanced interactions should remain human-centric.
  • The IGTX partnership between IGT Solutions and AuxoAI reflects an evolving narrative that prioritizes human experience and contextual understanding in service delivery.
  • Empathy, along with skills development through experience, is vital for improving customer interactions and ensuring brand loyalty.
  • The discussion highlights the potential for job transformation rather than displacement, with new technologies reshaping roles to focus on more advanced tasks.

Kashyap: Katie, what inspired you to speak about the concept of ‘AI in the loop’? Where did you first encounter this term, or did you come up with it yourself? Additionally, why did we choose to focus specifically on this topic for our discussion?

Katie: I changed roles in April. I went from sitting in a big organization, in a number of strategy conversations where we looked at the hype of generative AI and said, ‘What are the areas we should attack? Oh, let’s attack the contact center. We’re not that big there; we could come in with a generative AI solution.’ I think there are other large brands in the market that talk this way; we’ve all read a lot about what they’re trying to launch. Then I took over at IGT Solutions, where we focus on customer experience and serve some of the largest brands in the industry, brands that, candidly, don’t even have a product. What they sell is experience, and it shifted my mental position 180 degrees. I understood the impact of human touch when a customer is experiencing their worst moment, and the humanized value that’s delivered.

While there are many things on the front line, the first line, that could be automated—transactional tasks—this hype around eliminating customer experience is actually upside down. Amaresh and I spent some time together talking about this, and it’s not one or the other. ‘Human in the loop’ is very much a technical term for how humans train AI, but it has led to a number of fallacies about eliminating the human role. We’ve seen automation reduce the need for certain human roles in the past. I actually think we can flip it to ‘AI in the loop,’ where AI elevates expertise, especially the human intuition component of expertise, that empathy element, and creates more value. We’re taking a very strong and optimistic position on this: it’s about AI in the loop to unlock human potential.

Kashyap: So essentially, ‘human in the loop’ as a concept implies some level of human intervention within AI systems, while ‘AI in the loop’ is an understanding that you don’t take the human out of the equation. Although we phrase them differently, these terms seem a little similar. Amaresh, can you help us dissect it a bit?

Amaresh: It’s about what is primary and what is secondary. Even in an augmented way, we talk about amplifying human potential; we talk about responsible AI. We use these terms, but the idea is: who’s the primary driver? With ‘AI in the loop,’ it’s a commitment to humans that AI is the one in the loop; it’s not that AI is primary. With ‘human in the loop,’ it feels like AI becomes primary and the human secondary; that’s the connotation. Part of it comes down to the importance of narrative. Many large organizations, and a lot of people, are nervous about AI. Both Katie and I are optimistic, but that doesn’t take away from the fact that there’s a lot of nervousness about AI adoption in many areas.

Having the right narrative matters, and words matter when you use them. So, when you talk about AI in the loop, it signals that the human is primary, and AI is in the loop. While it may seem similar, there are nuances in narrative that matter, and it amplifies the perspective. In Katie’s case, and in IGT’s case, it’s about making humans expert experience providers. That’s what they aim to do, and they want to keep the human in the loop. That’s why it matters how it’s framed.

Kashyap: Given my background, I recognize that significant changes are occurring in our industry. What drives this change? How does this shift in narrative translate into tangible outcomes in the day-to-day world?

I want to bring both of you into this discussion. Recently, IGT and AuxoAI have partnered on an initiative called IGTX. Can you explain how this partnership reflects the evolving narrative around knowledge and communication? Specifically, I’d like to explore what these changes mean for all stakeholders involved: IGT, AuxoAI, the IGTX initiative, and your customers and users. How do these developments impact each of these groups?

Katie: So let’s move to metrics. The first thing that happened in the contact center industry was that we started to look at call deflection, which is purely about driving cost reduction. Call deflection is about what we can actually automate. So I, as a consumer, can get my answer, whether I like doing it that way or not, because my query is something simple like a password reset. That was the era of automation, and it eliminated the need for a certain volume of calls to reach the contact center.

Then we started talking about AI and autonomous agents—essentially chatbots on steroids—and how we could train these virtual agents to get better and better, while putting guardrails around them. The fact is, at this moment, if you look at Gartner’s most recent study, about 70% of consumers are saying they don’t want to talk to an autonomous agent. So you have to ask yourself, why? Because we’re getting to the stage where most transactions aren’t simple queries anymore, and the technology isn’t equipped to understand my context and sentiment.

Let me give you a simple example of where human expertise and empathy come in. A couple of years ago, at the height of Game of Thrones, one of the actors from the show (I can see you smiling; you must be a Game of Thrones fan) went on social media. He was filming somewhere in the Northeast while his wife was in London, and he tweeted, ‘I just wish X brand could get me home to meet my new baby.’ He had just had a baby the day before. Now, a bot, an automated system, wouldn’t have understood the context of that. At first, the agent didn’t know this was someone famous, but they understood the situation. As humans, we understand this kind of context, and I don’t know how we would train technology to understand it.

There was more to it than the post itself. The agent started investigating, saw the Game of Thrones insignia, did some research, and realized who this person was. So the agent started chatting with him in Valyrian; it became this fun banter. This person, in a moment of crisis, was stuck filming while his wife was at home with their new baby, a continent away. He said, ‘Look, can you get me out on a flight?’ But all the flights were booked. The agent was able to go above policy to figure out how to get him on a flight.

It gets worse. The filming continued, and he wasn’t going to make that flight. He called back, and the agent did their magic again, getting him on a flight the next morning. But this is where true context and humanized behavior come in, and I’m not sure how a Generative AI, even in its next iteration, would understand this. When that individual landed during his connection in New York on the way to London, the agent had worked with our brand to have him met with a ‘delight package’ waiting for him—flowers for his wife and a baby gift.

Now, IGT Solutions wasn’t directly involved, but our customer’s brand delivered the ultimate delight. The actor, who has a huge fan following, went on social media the next day and tweeted that “X brand is the best”. That kind of advocacy is what brands strive for, and that’s not something technology or automation could have delivered at this stage of its advancement.

So, I still come back to this: if advocacy and loyalty—which ultimately drive revenue growth—are the ultimate metrics, and if care isn’t just about executing a task to policy but thinking creatively, with empathy, about how to serve a customer, then I think it’s AI enabling that. AI can offer choices, calculate benefits and risks, and bring up policies. But, at the end of the day, the human has to be the one to make the conscious call, because they understand the context in a humanized way.

Amaresh: But I think if you step back, there’s a broader notion, and I have a similar story around that. A friend of mine, a Harvard undergrad who was doing a bunch of other things, was a manager at a rather high-end New York restaurant. One of the things they used to do, and this was like 10 years ago, was look at the Twitter feeds of whoever had a reservation, because most people tweet when they’re coming. So this person tweeted that day (it was a French restaurant), ‘I’m coming off a long flight. What I really wish is, I need a hamburger.’ The thing is, they were monitoring it, and they figured it out. So technology had a role. There was a process around it, where they were looking at guests’ preferences and everything.

And then guess what? In a French restaurant, they went and made him a hamburger. He was just lamenting that he wanted one. It’s a similar story to Katie’s. I think the broader point is that most people do not want to do L1 activities, or often even L2 activities. Most people want to be served and delighted every time, not just get basic stuff done. And the idea is, these technologies are fantastic, whether it’s automation, AI, or generative AI. You can do more and more of the basic stuff, and it gets done; there’s a lot of history around that. But it actually empowers people to do more differentiated things and create more differentiated experiences, which is good for you and me as consumers, good for the companies and the brands, and good for service providers.

Katie: So there’s elimination of work that frees up capacity for human focus and execution. The second thing, really, is that at IGT Solutions we’re a services business with 25,000 agents around the world, and we talk a lot internally about how you can train for empathy. It’s really hard. First of all, you screen for some of these attributes in people. But the way training has historically worked is that young agents sit in a classroom and watch a few role plays. Then we, as an industry, have historically put them on the phone, where they’re now interacting human-to-human in a context that never mirrors what happened in the classroom.

And so, when you think about the way we can use AI and generative AI: first, we can create endless simulations, and we can combine simulations. Where do people learn empathy? They learn through experience. They learn because they’ve seen a pattern and they know, ‘Okay, I had a friend who was sick. I’m meeting someone who’s sick. I know what to say and what not to say.’ So the more we can individualize curriculums and give agents that breadth of exposure, where in a hyper-condensed period of time they can learn years of effective experience, the more we position them to be confident in their judgment, in their ability to interpret the context and deliver a solution that isn’t just to the letter of a policy. Because if it’s just reading a script, it can be automated.
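To make that concrete for technical readers, here is a minimal sketch of how an individualized simulation curriculum might be tracked. The persona, issue, and emotion lists are invented for illustration, and in a real deployment a generative model would play the customer in each role-play; this is a sketch of the idea, not IGT's actual system.

```python
import random

# Illustrative scenario dimensions; a generative model would normally
# role-play the customer described by each combination.
PERSONAS = ["grieving traveler", "frustrated parent", "first-time flyer", "VIP in a hurry"]
ISSUES = ["missed connection", "lost baggage", "double charge", "seat downgrade"]
EMOTIONS = ["angry", "anxious", "resigned", "tearful"]

def next_simulation(agent_history: set) -> str:
    """Pick a scenario the agent has not practiced yet, so the curriculum
    individualizes by breadth of exposure rather than repetition."""
    all_combos = [(p, i, e) for p in PERSONAS for i in ISSUES for e in EMOTIONS]
    untried = [c for c in all_combos if c not in agent_history]
    if not untried:  # full coverage reached; start a second pass
        agent_history.clear()
        untried = all_combos
    persona, issue, emotion = random.choice(untried)
    agent_history.add((persona, issue, emotion))
    return f"Role-play brief: a {emotion} {persona} contacts you about a {issue}."

history: set = set()
for _ in range(3):
    print(next_simulation(history))
```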

And this is where I really believe that customer experience, which has historically been a cost center, is becoming a brand and advocacy center, and maybe ultimately, in some companies, a revenue-generating or profit center. On that continuum, those that remain just cost centers and focus only on productivity and automation have, I think, a very limited scope, and they work for companies that aren’t seeing that the world is about experience. Brands that are moving to experience win with the 80% that are loyal and come back to them; the moment that experience breaks, they leave. So I think that’s where this fine line becomes extremely important for companies that will win in the future.

Kashyap: I believe the underlying theme we need to address before delving into AI, generative AI, or advanced chatbots is empathy. How do you see empathy playing a crucial role in the development and implementation of these technologies?

Amaresh: Not only empathy, it’s apprenticeship, it’s learning by doing. It’s all of it. Like, you said you were a data scientist. How did you learn to be a data scientist? Not only by reading three books—the first three projects probably taught you 90% of what you needed to become a data scientist.

Kashyap: But my point is more from the perspective of understanding what elements of human behavior we’re trying to replicate in AI. Empathy, along with values like apprenticeship, is among the qualities we’re striving to incorporate.

Katie: Before empathy, one of the most important things for us in the customer experience realm is to understand that empathy is one aspect. What is the culture of the brand? What are they trying to be? Let’s not kid ourselves. Where low-cost providers have gone awry is when a low-cost player, in, say, the retail market, suddenly pretends to be a white-glove service organization. The fact is, they’re ruthlessly driving pennies, cents on the dollar, and the reason the customer comes is the lowest cost. At the other end of the spectrum, you have customers who are willing to pay for a ticket when they know their plane is going to take off and they’re going to get home for the funeral, because it’s important to them, or who are willing to pay for a luxury brand.

So I think what we’re trying to say is that, on that spectrum, first you have to understand what the culture is and what makes that business strategy work. Then, what’s the operating model? In operating models that are just about cost and productivity, we need to ruthlessly drive as much automation as we can, whether the end consumer likes it or not. Digital channels eliminate wasted cost, and we don’t really care whether the consumer is super happy, because they’ll come back: it’s the cheapest place to get macaroni and cheese, and that matters when grocery bills keep rising. For a number of brands that I think have winning strategies, it’s about loyalty, experience, empathy, courtesy, whatever they define their culture as, and the idea of technology being able to mimic that is where we’re still short on the technology spectrum. That’s where technology plus humans can deliver at speed.

Kashyap: I want to ask a fundamental question before we discuss brands. We’ve talked about empathy and other values we’re trying to incorporate into AI. While it would be contradictory not to use AI for automation, we recognize the associated risks. So, my question is: should AI never take over certain jobs? Are you suggesting that we should be cautious about allowing AI to replace human roles, regardless of its ability to be empathetic?

Katie: So I have a very strong view on this, which is to look at it through a spectrum. Take a very simple two-by-two: a customer’s willingness to pay on one axis, and a prospective employee’s willingness to do the work on the other. Where the customer is not willing to pay, and the employee doesn’t want to do the job and therefore demands a premium versus another job, the margin relative to the value being created is very, very little. You need to look at all of those spaces and ask yourself, as a services partner: why can’t we automate this? If a customer doesn’t want to pay for it, there’s no real value for them; it’s a task, and all they want is lower cost. Employees see no value in it either, so it’s harder to source them and more expensive for you. That quadrant is the classic case: you should automate it as much as you can. Those are the L1 tasks Amaresh was talking about.

At the other end, customers are willing to pay because it’s adding value. It’s creating enterprise value for them; it’s creating loyalty, which is lifetime value. The customer is willing to pay more, and we should be willing to pay for expertise. We don’t need to automate everything; we should pay for what is creating that value. Now, behind the scenes, on back-office tasks, post-call interaction, and documentation, we should always look for efficiency, because, back to Amaresh’s point, that’s how you clear the mindset so someone isn’t distracted by a bunch of other tasks. It’s the same thing in sales. Remember, for a long time the metric was that salespeople spend 70% of their time on administrative things. Why would we do that? If a salesperson’s best use is being with the customer, on a plane, sitting with them, building relationships and trust, why do we have them filling out forms and documents that can be automated? But you wouldn’t replace the salesperson with a generative AI bot if you really think the customer’s willingness to buy is higher because of the relationship.

And I do believe strongly that deals like the ones we do aren’t automated. Customers aren’t buying based on ‘give me this rate card.’ Maybe procurement is looking at that, but customers are buying based on looking someone in the eye and asking, ‘Do I trust that in tough times you and your team will deliver?’ So you have to go through that two-by-two and figure it out. It’s not an all-or-nothing scenario. Too much of the industry conversation has been hype, especially the kind driven by Klarna, saying everything can be automated and AI will replace everything. That is the piece I’m attacking, and I disagree.
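Katie's two-by-two can be written down as a toy decision rule. The quadrant readings below are one interpretation of her framework, not a product rule; the function and its labels are invented for illustration.

```python
def sourcing_decision(customer_willing_to_pay: bool, employee_wants_job: bool) -> str:
    """One reading of the willingness-to-pay vs. willingness-to-work grid."""
    if not customer_willing_to_pay and not employee_wants_job:
        # L1 tasks: no value to the customer, hard and costly to staff.
        return "automate as much as you can"
    if customer_willing_to_pay and employee_wants_job:
        # Expertise that creates loyalty and lifetime value.
        return "keep human-led; use AI to elevate the expertise"
    if customer_willing_to_pay and not employee_wants_job:
        return "redesign the role; automate the drudgery around the value"
    return "automate the task; watch for hidden customer value"

print(sourcing_decision(False, False))  # e.g., password resets
print(sourcing_decision(True, True))    # e.g., crisis rebooking
```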

Amaresh: I think the right way to frame it is that it’s not job displacement; it’s jobs changing. What you do in the job will change. And, by the way, that has happened with every technology before this: the internet, calculators, everything. If it changes what you do in the job and you start doing more advanced things, that’s what’s going to happen. Will it create some reshuffle of things? Of course it will.

Kashyap: I want to shift the topic slightly. Amaresh made a point that automation has always created new jobs. However, the counterargument is that previous automation wasn’t intelligent. For instance, when we created robots to manufacture glass bottles, it replaced a person but at a basic level. Now, with intelligent automation, does that present a greater threat of job displacement due to its ability to perform more complex tasks? How do you see this impacting the job market?

Amaresh: So my thing is, the definition of intelligence is something everyone is struggling with. We could argue that, at the heart of it all, this is actually a predictive model that guesses the next word. So is it intelligent? But the broader point you’re making, I think, is that this technology is a step-function change, even relative to earlier automation. Yes, more reorganisation of jobs will happen, and jobs will change faster than before. Whether or not it’s intelligence is a different argument; you don’t even need to define intelligence. But yes, there’s a step-function change in capability. So will it move things faster? Yes, which is why partnerships like what we are doing are important, because otherwise you have to be creative about how you actually approach certain things.

Katie: There’s a point I want to build on from Amaresh. We’ve talked a lot about technology being seen as a skill right now, as if I have to understand the technology. The reality is that those of us who figure out how to use this will make it feel frictionless for the user, whether that’s the agent or the end user. I think that’s the part of the market that hasn’t been cracked yet. This whole concept of building prompt engineers and people who understand large language models is a very finite capacity. As IGTX, how do we take that and create scale, so that an agent, or ultimately an end consumer, doesn’t need to know there’s even technology there? They just need to know that they’re interacting with, in the agent’s case, a series of next best actions they can choose from. I think that’s a big missing element in the market, because the technology companies are playing into a concept that it’s all about technology, while every customer I talk to says it’s all about the business process. How do we embed it in the business process so it just feels like another thing happening behind the scenes that delivers the outcome we want, whether that’s first-call resolution, shorter average handle time, or higher NPS?

Kashyap: I enjoy discussing the philosophical aspects of technology and automation, but let’s shift to the technology itself. You’re working with many customers to create real-world applications. I recently faced an issue with a delivery service: my cousin and I ordered at the same time, but one order disappeared upon delivery. Explaining this complex problem to an AI customer agent was difficult, as it could only provide basic responses. What are some practical applications of AI in customer care that you’ve seen? From an ‘AI in the loop’ perspective, what areas are you focusing on to enhance customer support and experience with IGTX?

Amaresh: The first thing, if you think about it: you gave a great example of an L1 job that could have been automated, but where you needed something more advanced, more sophisticated. At the core of it, if you’re thinking about the customer experience business, what you’re trying to do is learn. You’re trying to learn the culture; you’re trying to learn the policies of the client; you’re trying to imbibe what they stand for. There’s a standard operating procedure; it’s actually a learning organization. If you think from that perspective and consider the role of AI in learning, which is going to be number one everywhere, whether in schools or colleges, I think that’s what’s there.

So, with IGTX, where we have started the whole journey is: how do I make learning easier, more relevant, more timely? That’s the core fix. Then that translates into how quickly I go from getting hired to getting on a phone call or into a chat in live scenarios, and how well I do in those conversations, because I can access that learning. So that’s the journey we’re thinking about: the whole customer experience journey, building capabilities that drive human potential in each of those things.

Katie: Let me break down the math in the average contact center. An agent gets a couple of hours of training or coaching a week. Actually, the most important role in hitting your metrics is the team leader, and a team leader typically has anywhere from 15 to 20 agents they’re working with. Think of all the things they’re doing. So, with a couple of hours a week, they’re sitting with an agent, talking through what the quality team found in their transcripts. By the way, quality in the customer experience environment typically involves listening to a whole call, 20 minutes of audio at a time. Imagine how many calls that is. We audit 10% of calls.

Now, take what generative AI and AI can do. First of all, we can monitor 100% of calls. We can look not just for keywords but also run sentiment analysis. The quality team can serve up the one minute of the call that was unsatisfactory. So now we can audit 100% of calls at the same cost base for our customers, providing higher-quality monitoring. What do we do with that quality insight? We hand it to the team leads, who can then drive continuous training with the agents.

Similarly, we can implement technology where the agent, as they’re going through a call, is getting prompts that measure them on sentiment, efficiency, timeliness, and accuracy. At the end of the call, we can serve up to them, “Hey, agent, if you had used the following word, your sentiment analytics would have gone up by 10%.” Just think of what that means to go from a few hours a week to essentially continuous support in an environment which is really hard.
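For technical readers, the monitoring Katie describes might look roughly like the sketch below: score every segment of every call and surface the worst one for coaching, instead of a team lead listening to a whole 20-minute recording. The off-the-shelf sentiment model is an illustrative choice rather than IGT's actual stack, and the transcript segments are invented.

```python
from transformers import pipeline  # illustrative model choice, not IGT's stack

sentiment = pipeline("sentiment-analysis")  # small pretrained English model

def worst_segment(segments: list) -> tuple:
    """Return (index, negativity, text) for the most negative transcript segment,
    approximating 'serve up the one unsatisfactory minute of the call'."""
    scored = []
    for i, text in enumerate(segments):
        result = sentiment(text)[0]  # {'label': 'POSITIVE'|'NEGATIVE', 'score': ...}
        negativity = result["score"] if result["label"] == "NEGATIVE" else 1.0 - result["score"]
        scored.append((negativity, i, text))
    negativity, i, text = max(scored)
    return i, negativity, text

call = [  # invented one-minute transcript segments
    "Thanks for calling, how can I help you today?",
    "I have been on hold for forty minutes and nobody can find my booking.",
    "I've rebooked you on the morning flight and waived the change fee.",
]
idx, neg, text = worst_segment(call)
print(f"Flag segment {idx} for the team lead (negativity={neg:.2f}): {text!r}")
```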

I think this is where the challenge in the industry lies. Most people who talk about customer service have never sat and listened to these calls, never listened to someone call in because they can’t get the wheelchair to pick them up at the gate, and never understood the gap between the client’s policy, what the agent is facing, and the irate customer. Can you imagine if the policy simply says they can get on a different flight? We all understand why that’s the case, but now the agent is the one facing it.

So how do we help them not throw their hands in the air and, as humans, get upset or frustrated and walk out? We need to provide real-time support to help them understand, or, in real time, throw up an alarm that says, “Guys, this is broken.” How do we go back to the customer and say, “Here’s where we’re seeing massive CSAT erosion. Do you want to keep the policy this way, or what can we do differently?” That is the next generation of customer experience.

Kashyap: That’s a great example and really clarifies my thought process. I’ve often discussed with Amaresh how vague ideas from large corporations become clearer when broken down into specific instances. This example illustrates how humans can train AI to improve. Companies often demand to measure ROI for these applications. While we mainly focus on bottom-line improvements, I’d like to discuss the top-line scope as well. How do you engage with your customers regarding overall improvement? What metrics do you measure to ensure that AI is delivering value and satisfying customers? For instance, I’ve found AI agents frustrating compared to previous experiences. How are you tracking customer satisfaction and ensuring your solutions are effective?

Katie: Well, in this industry it’s somewhat simple, because we’re highly metrics-oriented, and the paramount metric today is CSAT (customer satisfaction). The challenge with CSAT is that, historically, it’s been measured by asking customers whether they’re willing to fill out a survey at the end of the call, and that’s what’s used to calculate CSAT at intervals for a provider.

Now, with technology where it is today, plugged into the IVR and listening to the calls, we can actually monitor sentiment and CSAT live. That gives the brand a better view, not just of the population willing to fill out the survey, typically those who were either really angry or really happy, but of all of the calls. As I mentioned, we can intervene in segments where sentiment or CSAT is starting to go in the wrong direction.

CSAT is the paramount metric today. I also talked earlier about how contact centers have historically been treated as cost centers in most companies. However, if we truly believe that advocacy and brand loyalty are the drivers of revenue, some forward-leaning customers are starting to think about how the contact center can also drive attach rates, cross-sell, and up-sell.

If you’re having a delightful experience—say, I call in because my soda water machine is broken, and you delight me in how you handle the situation—you might mention at the end of the call, “By the way, going into the fall, have you tried the new fall flavors?” I’m happy that my machine is going to be fixed, so I’m more likely to say, “No, I haven’t heard about the new fall flavors. What are they?” If the contact center is equipped to respond, “Oh, there’s a new pumpkin spice fizzy water,” we can tap into the current obsession with seasonal flavors heading into Halloween, creating an upsell or bundling opportunity.

Many of the companies leading the growth of long-tail sales are figuring this out, and I think this is generation two, where service, delight, and brand advocacy feed into sales. I used a very simple example, but you can think of it across the entire spectrum.

So, I believe CSAT is the measure today, but forward-looking, brand advocacy is where the real growth potential lies. Most companies desire growth in this environment, so let’s use every lever we can to achieve it.

Kashyap: Do you really think AI will change that? Because, speaking purely from my own experience of talking with AI whenever I call someone, AI is not able to be empathetic.

Katie: And this is why I’m saying it’s AI in the loop, not human in the loop. I am the human who can make that decision. It may be that the company has automation or AI running in the background that says, ‘Hey, the next best action is to offer them this new flavor.’ However, it’s me who can determine whether this customer really wants to hear about it right now—especially if we’ve just had a very tangled conversation where they’re frustrated because they’ve made ten calls and their machine isn’t working. They’ve sent it out twice, and their wife is ready to leave because of all these issues. In that case, I’m not going to offer them pumpkin spice fizzy water, right?

But if we’ve had a delightful call where I’ve explained how we’ll ship a replacement unit tomorrow, I might ask, ‘Would you like me to send some pumpkin spice with it?’ It’s a totally different context. That’s where I think the human element comes in—making those judgment calls, often aided by technology, to know what’s available to offer.
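The division of labor in this exchange, where AI surfaces the next best action and the human decides whether the moment is right, could be gated roughly as in the sketch below. The fields and thresholds are invented for illustration, and the suggestion goes to the agent's screen, never directly to the customer.

```python
from dataclasses import dataclass

@dataclass
class CallContext:
    sentiment: float      # live sentiment estimate, -1.0 (irate) to 1.0 (delighted)
    issue_resolved: bool  # did we actually fix the machine?
    prior_contacts: int   # how many times they have called about this issue

def next_best_action(ctx: CallContext):
    """Return a suggestion for the agent's screen, or None to stay silent.
    The agent still makes the final judgment call; thresholds are illustrative."""
    if ctx.issue_resolved and ctx.sentiment > 0.5 and ctx.prior_contacts <= 2:
        return "Suggest: offer to add the new fall flavor to tomorrow's shipment"
    return None  # a tangled call: the right offer is no offer

print(next_best_action(CallContext(sentiment=0.8, issue_resolved=True, prior_contacts=1)))
print(next_best_action(CallContext(sentiment=-0.6, issue_resolved=False, prior_contacts=10)))
```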

Amaresh: Right now, humans are doing what machines should be doing, with about 80% of the work being tasks that could be handled by machines. Now, think about what that 80% capacity allows you to do. That’s what Katie is talking about. That’s the kind of conversation that humans could have. 

Katie: If you spend time with these chatbots and conversational agents today, you see that so much of life is in the delivery, in how you say something. Really great salespeople, really delightful people, have a way of understanding what tone to use with you and what words to use with you. That is EQ at its best. And I think that’s where we’re getting challenged: when we think of all these autonomous agents, most of them really can’t deliver on that promise yet. So I think we come back to humans reading humans, to the importance of human empathy, the human side of the conversation: knowing whether to offer something or not, how to say it, how to bring it up. That’s the human side of our context.

Kashyap: Both of you are suggesting that certain elements of a job in a cost center can be automated through AI, while others cannot. Amaresh pointed out that understanding an organization’s culture and values is crucial when building AI to address their cost centers. Katie, you mentioned the importance of considering what customers are willing to pay.

How do you see these two points balancing out? Should we focus on building cost centers that AI can manage, or should we retain some human elements in customer interactions? Is there a right or wrong approach, or is it more about finding the right balance between the two?

Katie: I think this is why IGTX matters. Most customer experience and customer service firms don’t have a deep technology side to their organization. We’re really fortunate at IGT that, in partnering with AuxoAI and other firms, we actually have both sides, the right brain and the left brain of an organization, together. What that allows us to do is step back with a customer and understand: what is the culture, what is the outcome, what is the process? Which pieces can be automated, which pieces can be augmented, and which pieces involve expertise that we can use technology to accelerate further? Embedding all of that deeply into the end-to-end process and bringing it to the technology stack is, I think, very unique in this industry. That’s why I’m so bullish that, in partnership with Amaresh, with our deep domain expertise in customer experience and his team’s AI expertise, we’re bringing something very different in terms of how to think about it. Because it’s not binary. It requires you to understand the context and build it into a sustainable process.

So it’s about production. I don’t want to see any more POCs; no one wants to see any more POCs. What we want to see is things that run in production and can be used by the teams, so that when an exception does come up, the agent in the process is able to catch it and resolve it. Otherwise, you get all these technology firms, startups, and process firms that aren’t talking to each other, and everyone is trying to optimize their own piece. As you said: should we automate it all, or should we have humans? It’s a combination. It’s heterogeneous.

Amaresh: It’s very real for a lot of your audience, who have use cases in the contact center and customer care and who normally sit on the technology side of the house. It’s almost like pushing something to work, rather than co-creating and building it together with the nuances Katie was talking about. How do you actually make that happen? If you just start from the technology side and serve it up, it won’t work. It’s much more complex.

Kashyap: As a journalist, I find it challenging to grasp advanced technology and its implications. It seems essential to balance empathy and values with the efficiency of tasks in AI applications. I know many companies approach you asking for AI or generative AI solutions. What critical questions do you ask them to understand their culture, industry, and processes? How do you ensure that you deliver the right balance of AI intervention and human agency in your solutions?

Amaresh: The businesses are the same. They have profits to make, employees to keep happy, and customers to satisfy. The businesses haven’t changed. You can fundamentally go back and understand what a company is, its corporate strategy, what it’s trying to achieve, and which metrics they are aiming to move. Sometimes they might be trying to do something entirely new, but usually, they are focused on moving a specific metric.

Then, you work backward to identify which processes and jobs matter, who will execute them, where changes will occur, and how these tasks are currently performed. After that, you determine whether you need AI; that’s often the first question. Everything is being packaged as AI because, as we discussed, talent wants to work on AI. So, everything has to become AI; otherwise, it’s not exciting enough. But in reality, you should consider which tools and technologies you need to solve the problem.

More often than not, you will find three or four intervention points in that whole cycle where new technologies could be impactful. That’s how you start building the case. If you follow the decision flow, you will identify the right set of things. That’s how we begin the conversation. Having said that, there’s a huge push towards AI—everyone wants AI; they want everything to be AI.

Katie: And I think, as Amaresh said, once you have the end-to-end process established, I’ll give you an example because you love examples. One of the end-to-end processes we serve is baggage handling. Most people in your audience have likely experienced standing at a baggage carousel at midnight, only to find that their bag doesn’t arrive. They’re watching everyone else get their bags and leave, while theirs is missing. Honestly, they would be happier just knowing their bag isn’t coming, so they can leave the airport and deal with it later. That’s where the problem statement starts: this is the worst experience.

To make matters worse, you go over to the baggage counter at midnight, when everyone has checked out, and you try to work with them to find your bag, but they have no information either. We worked with a technology partner in the airline space who developed a system that scans your bag when you check it in, so they know at any moment where your bag is. The first question in our end-to-end technology solution was: can we eliminate any of this friction? Specifically, can we notify you that your bag isn’t coming before you even stand at the carousel? Yes, we can. We know your location and where your bag is; from a technology perspective, we had simply never eliminated that friction by telling you.

Imagine getting an alert while you’re on the plane that says, “Hey, Kashyap, you’re landing in 20 minutes. Your bag is actually in Charlotte.” Now there’s a decision tree, as Amaresh mentioned. Since you know your bag isn’t coming, we can help you understand your next steps. Do you know who to contact? We can use technology to proactively email you your options, eliminating the need for you to talk to a human at that stage. Essentially, this is just FAQ support provided proactively.
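The decision tree Katie walks through might be sketched as follows. The field names and the 30-minute threshold are invented for illustration; the point is that a human enters only at the 'delight' branch.

```python
def baggage_play(bag_at_destination: bool, minutes_to_landing: int,
                 urgent_need: bool) -> str:
    """A toy version of the proactive mishandled-bag decision tree."""
    if bag_at_destination:
        return "no action: bag will be on the carousel"
    if minutes_to_landing > 30:
        return "keep tracking; re-evaluate closer to landing"
    if urgent_need:
        # Wedding tomorrow, nothing to wear: route to a human with full context.
        return "alert passenger in-flight + connect to an agent for the delight step"
    # Simple case: proactive FAQ-style support, no human needed.
    return "alert passenger in-flight + email delivery options and credit policy"

print(baggage_play(bag_at_destination=False, minutes_to_landing=20, urgent_need=True))
```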

However, let’s say you figure out your bag will arrive the next day, but you’re going to a wedding and have no clothes, or you’re headed to a business meeting with nothing to wear. This is where delight comes into play. You call and speak to someone who has all the information and understands you’re an XYZ platinum customer. It’s essential that we satisfy you, so a representative can handhold you, explaining what will happen next, what you can buy, and how we’ll credit you.

Stepping back, we ask: what can we not do? What can we do faster and better to inform you? Where would you like to talk to someone because you’re upset and want to understand all your options? Often, the questions people have go beyond simple policies. They involve several interconnected issues that no knowledge graph chatbot today can connect and answer in real-time.

For example, you might ask, “Hey, you said I get a $100 credit, but I’m not happy with that.” A human can decide whether we can flex that policy or apply something to your frequent flyer account. The chatbot doesn’t even know your frequent flyer number, but the agent can understand your situation. This is where we go through the process of determining what we can eliminate and where we need that human touch; those are the questions we work through, and we put together an end-to-end solution, which is technology plus person.

Kashyap: Any closing thoughts?

Katie: The concept of IGTX, as I believe and as Amaresh also emphasizes, is that this is not just a technology, process, policy, or people question alone. It requires partners like IGTX, who understand these complex domains very deeply. They can bring the entire stack together, determine what to eliminate over time, identify where to introduce new technologies, and drive expertise and job skill creation in people. This is fundamentally what we’re bringing to market.

We’re starting with the travel, transportation, and hospitality industries as our core focus. This doesn’t mean we aren’t working with other sectors; we are. However, because we have a deep understanding of this industry—having run core operations for almost every major travel brand over the past 25 years—we will eventually expand into other industries with which we also work.

The idea, again, is that we believe technology can only be delivered effectively in the context of its domain. Unless you understand the core operations of how the baggage problem works, it’s very challenging to stitch everything together to create the most frictionless end-to-end experience for the customer. That experience ultimately drives customer advocacy and loyalty, which is the ultimate metric.

Amaresh: To reiterate, this whole notion of industry-specific vertical stacks in AI is where everything is heading. AI cannot function in isolation from the processes and technologies it interacts with. Each part of the travel industry has its own tech stack, with its own set of technologies and vendors that are intricately tied to the process. Value is created when you integrate these elements appropriately.

You can’t simply push a tool and expect people to start using it; nothing will happen. This notion of vertical AI is essentially what IGTX is, and it’s what we envision for the future.
