Data has emerged as a core driver of innovation and strategic decision-making. Yet, while organizations have access to an overwhelming amount of data, the key to truly unlocking its value lies in how it is analyzed and applied. In this insightful conversation, we hear from Catalina Herrera, Field Chief Data Officer at Dataiku, as she dives deep into the practicalities of predictive analytics, the role of data in shaping business strategies, and the need for more skilled professionals to meet the growing demands of data-driven industries. Catalina sits down with Kashyap Raibagi, Associate Director of Growth at AIM Research, to discuss her insights on the evolving role of data in business strategy and innovation.
Catalina is an influential leader in data analytics, with over a decade of experience using data to solve complex challenges and drive organizational success. Throughout her career, she has worked across various industries, including alternative energy, semiconductor manufacturing, and technology. With a profound understanding of data science and a passion for predictive analytics, she has helped organizations turn vast data sets into actionable insights that fuel innovation, enhance efficiencies, and create value.
Her leadership is marked by a commitment to advancing business growth through data while fostering the development of high-performing, data-driven teams. Catalina is a firm believer that the future of business will be powered by data, but it’s the ability to interpret that data in the right context and with the right tools that will truly drive transformation.
Key Highlights:
Predictive Analytics: From Data to Strategic Decisions:
Predictive analytics enables businesses to move beyond understanding the “what” to forecasting the “what’s next.” This approach allows organizations to anticipate trends, mitigate risks, and act on opportunities early, positioning them as industry leaders.
The Strategic Importance of Data in Decision-Making:
Embedding data-driven decision-making into business strategies has proven to be a key differentiator. Companies leveraging data to guide strategic decisions in industries like semiconductor manufacturing and energy have seen improvements in efficiency and customer satisfaction.
Overcoming the Data Skills Gap:
With the increasing demand for data science and analytics expertise, businesses must invest in upskilling and training to bridge the skills gap. Fostering a culture of continuous learning is essential for adapting to emerging technologies and solving complex business challenges.
Data’s Human Element:
Data should be seen not just as numbers, but as a narrative that tells the story of a company’s challenges, opportunities, and goals. The success of a data strategy depends on how well insights are communicated to stakeholders and aligned with organizational priorities.
Navigating the Future of Data in Emerging Industries:
In emerging sectors like alternative energy, data analytics plays a pivotal role in optimizing energy use and driving innovation. While challenges exist in managing complex datasets and integrating new technologies, the potential to create lasting positive impacts makes data’s role in these industries incredibly exciting.
Kashyap: Hello and welcome everyone to the next episode of the AIM Media House podcast, Simulated Reality. Today, we have with us the Field Chief Data Officer of Dataiku, Catalina Herrera. Catalina, how are you doing today?
Catalina: I’m very good. How are you doing?
Kashyap: It’s wonderful to be speaking with you, Catalina. You’ve had such an extensive career in data, beginning back in 2005 when the field of data analytics was still emerging and far from the buzzword it is today. Over the years, you’ve held multiple roles, evolving with the field. Could you take us back to the beginning of your journey? If I’m correct, you started with data analysis at the Alternative Energy Institute, working on alternative energy data. What was it like working in analytics at that time, and how did that experience shape your career leading up to your role as a Field Chief Data Officer, where you now collaborate closely with clients?
Catalina: Almost 20 years—oh my gosh, it sounds like a long time! But if you think about it, we have been digitizing everything we do, collecting data from all kinds of sources, structured and unstructured. Twenty years ago, I started my journey at West Texas A&M University, where they had this Alternative Energy Institute collecting data from wind turbines. When you’re in the field and see these big wind farms, with all the turbines spinning, there’s so much beyond that. I was part of the team studying wind direction to identify correlations with seasonality, altitude, pressure—everything—to optimize wind farm development.
The role was very step-by-step; you had to collect the data somewhere. The data was actually on these SIM cards that you’d put inside the wind turbine. You had to physically go to each site and collect them from all locations across Texas before consolidating that data. So, this was the first time I faced big data, because we were talking about so much data coming from all over the place. I had to be very aware of what I was looking at, where it was coming from, to get an idea about geo analytics. And then you’d start to see different seasonalities as part of that research. I actually published a book at the time, co-authored with one of the main figures in Texas’ wind industry, Dr Vaughn Nelson—who was also my thesis director. I had the opportunity to learn a lot, not only about wind and the theory behind how to really maximize what you can get out of these turbines, but also about the process of the data itself.
I think it was the first time I faced something hard: the challenge of consolidating information from multiple sources to create one holistic view of what was going on, figuring out how to make use of it, and translating the information for somebody who is thinking about the developer aspect of it, like where we are going to put this and how to maximize what we are producing based on all the information and insights we are generating. That experience taught me the power of data—it was the first time I had that ‘aha’ moment of painting a picture holistically. Because sometimes we just focus on the little piece, and that little piece is very isolated. You need to think bigger. The more holistic the perspective, the better the insights, and the better the decisions you can make.
Kashyap: In the early days of your career as a data scientist, you mentioned the thrill of spotting patterns in the data, even before it was clear how useful those patterns might be to decision-makers. It sounds like you were working on elements of IoT long before IoT became a recognized field, physically collecting data in ways we now associate with advanced IoT systems. Given that data science and analysis work best when they’re solving a specific problem, what were some of the key problem statements back then? What were the main insights or issues you were focused on addressing through data analysis, and how did you see data making an impact?
Catalina: The problem was very simple: where are we going to put these turbines and how are we going to maximize what we are producing? First of all, it’s an alternative energy source, which is huge for our planet and for everything we are thinking about in terms of consumption and ecological balance. We needed to be green and ecological. We wanted to ensure that the turbines were placed where the balance was as good as possible for the birds and the other species flying there, and where the altitude maximized production, while at the same time understanding directionality, how it correlated with the seasonality of the weather, and the fact that it was a multivariable problem.
So, the problem we were trying to solve was simple: we needed alternative energy. We needed to support these developers and see where we were going to put these turbines physically, and at the same time, how we were going to maintain that ecosystem in equilibrium. So, we needed to study all the insights that we could possibly have. And that’s also when I discovered the importance of, as you say, the patterns, but also the different colors that the different layers of the information bring to the table. It was not only the physical capability of the turbine, that it is going to be able to generate X watts or whatever. We also had to really consider the altitude, what the turbine was facing, and the time of year, because all of that fed into the decisions and everything else that you have as part of that characterization.
So, we needed to think about this as a multivariable problem. Therefore, we needed to process all the variables and layer the information piece by piece, until you have this picture that literally shows you the optimal point: where to build, the seasonality of the wind, the altitude, and all the other variables we were collecting through all of these sensors. And then, you had to think about where the closest grid was so you could actually share this energy with somebody and how that was going to work logistically—what you were producing, where it was going, how far it was, and how you were going to get it there.
So, it had to be a combination of factors and variables. And where you have all of these inputs coming from all over the place, this is where data science brings the sweetest spot, because you are collecting all of this data, processing all of this data, understanding what it’s telling you, identifying these patterns, and then connecting the dots between what you are seeing versus what the developers are thinking and asking. And that’s part of what we do as data scientists, we connect those dots.
Kashyap: During your five years at Texas Instruments as a product engineer and data analyst from 2007 to 2012, data science began gaining recognition as the ‘sexiest job’ of the decade. How did your role evolve during that time, and what were some of the key differences in your work at Texas Instruments? Additionally, how did the industry begin perceiving data-related roles, and what was your overall experience during this transformation?
Catalina: Texas Instruments was a beautiful experience. I call it my big data university. They actually threw me in the pool. My job description said Yield Engineer, and what that means is that you have to optimize production. And what that means in the semiconductor industry is that you are literally consuming millions of data points an hour across 10,000 different steps, because to produce a semiconductor, you need to go through all kinds of chemical processes.
First of all, you have a silicon wafer, and you’re trying to cut it into the different dice, or actual circuits, that have all the logic, which you’re then going to put into package form. So, physically, you’re going to put it somewhere, and that as well has to be tested and tested and tested. So, at the end, if you want to optimize production, you need to understand the problem holistically. And understanding the problem holistically with 10,000 steps in between is not an easy thing to do when everything is data. But at the same time, it’s coming from completely different locations. Because, as you can imagine, I had the most diverse infrastructure to work with ever. We were working with fabs in the Philippines, we were working with fabs in Dallas, and we were working with test floors in completely different physical locations. So you’re talking to a bunch of people, and everybody has their own perspective, their own process, their own in and out.
But for me, it was: how am I going to combine all of this from all of these sources and really maximize what we can produce? But most importantly, how do we avoid producing something that is not going to be successful, something that is going to fail the final test? And how do we save the amount of money that it’s going to take to cut it, to put it into package form, and to actually produce it? How early can we detect that? And that’s where data science became a thing for me, because it’s like, I’m not only analyzing what is happening here.
I am becoming predictive. So I started to grow my own analytics journey, and I went from descriptive—‘show me the problem and show me where we’re having the failures’—to predictive, which is: based on these initial tests that we’re doing very early in the process, can we predict which of these dice are going to fail? So we don’t have to cut them, we don’t have to package them, and therefore, we’re saving millions of dollars. And when I made the connection between what I was doing and the value that was actually generated from it, that was the ‘aha’ moment in my career when I saw the power of data for the first time.
I saw how impactful it can be at a more granular level, telling me what to assemble and what not to assemble because it had a high probability of failure. There were so many lessons in that data and those patterns, all coming together. It was a phenomenal experience. But it took me five years to really understand what it is that I was doing.
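Her early-detection example can be made concrete with a small sketch. Below is a minimal, hypothetical version of that descriptive-to-predictive step: a classifier trained on early in-line test measurements to flag dice likely to fail final test, so they are never cut or packaged. Everything here, the synthetic data, the feature names, and the 0.9 cutoff, is invented for illustration and is not Texas Instruments' actual process.

```python
# Hypothetical sketch: predict final-test failure from early in-line measurements,
# so high-risk dice are never cut or packaged. Data and column names are invented.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "leakage_current": rng.normal(1.0, 0.3, n),    # early electrical test
    "threshold_voltage": rng.normal(0.7, 0.05, n),
    "wafer_zone": rng.integers(0, 5, n),           # position on the wafer
})
# Synthetic label: dice with high leakage fail final test more often.
fail_prob = 1 / (1 + np.exp(-(df["leakage_current"] - 1.5) * 4))
df["final_test_fail"] = rng.random(n) < fail_prob

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="final_test_fail"), df["final_test_fail"], random_state=0
)
model = GradientBoostingClassifier().fit(X_train, y_train)

# Only skip packaging when the model is confident a die will fail.
risk = model.predict_proba(X_test)[:, 1]
skip = risk > 0.9
print(f"dice flagged to skip packaging: {skip.sum()} of {len(skip)}")
print("precision:", precision_score(y_test, skip, zero_division=0))
print("recall:", recall_score(y_test, skip))
```

The high probability cutoff reflects the economics she describes: wrongly scrapping a good die costs money, so in this kind of screen precision usually matters more than recall.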
Kashyap: For our audience, it would be amazing if you could share a few examples and walk us through your journey from your previous roles to where you are now. You had to transition from descriptive and inquisitive analysis to predictive analytics. Obviously, upskilling was key, but aside from that, when it comes to managing different stakeholders, communicating results, and ensuring trust in your probabilistic predictions, how did your role evolve? How did you ensure that the models and predictions you were making gained adoption and were trusted, especially when the shift was from ‘This is the data’ to ‘This is the data, and this might fail at some point?’ Stakeholder management must have changed drastically—I’m sure it would be interesting to learn from your experiences and anecdotes.
Catalina: Drastically. So, I have a couple of things to share with you. First of all, I’m from Colombia, so my first language is Spanish. Texas Instruments is a very diverse company, and we had people from all over the world. So, it was not an easy journey at all, especially the first year. I literally had to study the meetings five times to really connect the dots. So, communication, as you mentioned, is fundamental; it’s the number one thing that needs to happen. You need to understand that whatever you’re doing—you can be the smartest person in the world, you can dive into Python and programming, and algorithm one versus two versus three—but if you’re not able to communicate that to your stakeholders, you’re done. It doesn’t matter how good you are; it doesn’t matter what your level of expertise is or what skills you have. You’re gone, because the real value of what we do in terms of data is what it means for the stakeholders that actually have a value metric as part of it.
So, it’s the ‘what’ and ‘how’ that actually impacts the business. The journey started, first of all, with upskilling; you were spot on. I have three master’s degrees and I’m an engineer by trade. I was a professional back in Colombia. I am very deep into engineering, I have very serious engineering skills, but none of that made a lot of sense when I was trying to put all of this together, because I was learning pretty much on the fly. What is it that we have to do? I started programming in R very early in my career, and that was kind of the beginning of that predictive aspect of it, from descriptive to predictive. The descriptive aspect was very BI-oriented. That was the era of business intelligence. Everybody was talking about business intelligence, everybody was talking about dashboarding, and producing all of those BI dashboards that could actually describe what the problem is. So, that’s stage one: ‘Okay, this person really sees the problem.’
So, I’m seeing it, I can paint it in this visualization. That’s very important. I can explain how I landed there, and I can make somebody in the room, who has zero data science or analytics skills, understand what it is that I’m seeing within that chart. So, that’s stage one, you’re spot on. Then you grow within that. Once you get that trust, it’s like, ‘Okay, we can see the problem here,’ and then everybody has an opinion. ‘Have you tried that? Have you seen that?’ So, I think that actually pushed me to grow my own skills. ‘That’s a great idea, let me try to add another layer here.’ Every time I needed to add another layer, I faced all kinds of obstacles, friction, and problems. I didn’t have the right permissions. I didn’t have access to the right data. I had to know how to program in Perl, Avenue, you name it, and that was a big issue at the time. So it took me weeks in between to actually be able to add that extra layer and start adding more and more to the pattern so it could actually emerge.
So, it was a kind of science, not only from the data analysis perspective but also from the people perspective—the communication, the stakeholders. Communicating to Group One that I needed permissions for this data because Group Three is asking about this layer of information, and I can’t even have access to it. So, how do you communicate with one person and the other one? And what is it that you are trying to accomplish? It becomes a huge component of what you bring to the table when you’re trying to analyze this data and do any kind of prediction. So, then you kind of build the predictive part of it; as I mentioned, it was mostly R at the time. But in the end, it was also about correlating that with a number. That is what actually captures the attention of the stakeholders. It’s like, when I say, ‘Okay, we have the potential to save two million dollars,’ everybody listens. And then I go back: this is how we’re going to do it, and this is why, and these are all the inputs.
So upskilling is part of it, but at the same time, communication has to be a fundamental part of when you’re trying to express what the data is telling you, and most importantly, what the potential value is that it brings to the table. And that’s where you get the attention from the different stakeholders.
Kashyap: After that, you moved on to consultancy roles, then to Tibco. I’d love for us to dive into how the data industry evolved from around 2012 to 2016-2017, with shifts from predictive to more advanced analytical methods. But at the same time, your role changed from being an internal stakeholder, working on data solutions within a company, to consulting and solving data problems for other companies. Can you share your perspective on the differences between solving data challenges internally versus as an external consultant? And, how did you see the role of analytics, data science, and AI evolve over that period?
Catalina: Yeah, and that’s a big one. So internally, when you are working for a company, you’re facing all kinds of obstacles and friction, but it’s your role to say, ‘Okay, I own this. This is my goal.’ My days were 100% ‘I’m gonna fix this,’ and I was one of the levers for what was expected of me from that angle. But that gave me power. It gave me very special powers.
So over five years of that, I learned how to communicate with third parties, I learned how to break down the barriers between the tech pockets and actually consolidate information from the data aspect of it. I learned how to put two and two together so you can actually get the support you need from the stakeholders. I learned how to gain the trust of the people you need supporting your journey. That was my career in big data and data science at TI; it gave me all of those new skills. So I learned how powerful data was, I learned its potential value, and I learned how to connect the dots. And that gave me what I needed to be successful in the consulting stage.
So what it means is that I knew my ABCs, and I was able to apply those ABCs externally with multiple companies. Now it was like, ‘Okay, this is what you are trying to accomplish, let me tell you why you are gonna be facing challenges. Let me tell you how you can potentially fix it. Let me tell you what kind of technology could be involved.’ And that’s how I ended up at TIBCO. At the time, TIBCO was leading the business intelligence side with Spotfire. I was very good at Spotfire; it actually became like my right hand. I had a YouTube channel where I solved a lot of problems using Spotfire for different industries, including energy and oil in Houston. So it was kind of an organic thing to happen.
And the trust comes with the fact that I was able to paint the pictures. So when somebody comes and says, ‘So I have a problem, and we are trying to do predictive maintenance. We have all of these sensors coming from all over the field and this is our situation.’ I was able to simulate the solution. I was able to provide the predictive maintenance solution and show them, ‘This is how it can potentially help you. And this is the value that it can potentially bring to the table, and these are the frictions you’re gonna be facing, and these are the stakeholders that you need to have involved.’ And so I applied everything that I was learning from my previous life into my new role.
I’ve been there and I’ve done that; I know how to help you, and it comes from my core because I’ve been there. So it was a very natural transition, but at the same time, it was another layer of education for me as well, because now I was hearing about all the use cases being deployed in the field across different verticals. So it was not only semiconductors; it was everybody else. It was energy, it was oil and gas, upstream, downstream, renewable energy—supply chain and everything in between. So data is data, and I definitely understand the power of it.
So in the end, it doesn’t matter. Everybody’s trying to optimize something. Let’s make this better, let’s reduce costs here, let’s optimize the process—that’s kind of what everybody’s targeting. Where the data is coming from, what technique we’re gonna use, and what infrastructure is involved? That’s a different story, and everybody has their own story about it. And that gave me extra power, because now not only was I able to solve the problem from the TI perspective and the use case perspective, but I knew what everybody else was going through. And I could share lessons learned.
I learned that from this person; I can give you that. We can accelerate your own process. So I became an even better consultant, because now I have a lot of knowledge that I can share, I can save you some time, and we can accelerate your timeline.
Kashyap: Now, moving on to today, in the post-ChatGPT world, I want to touch on the topic of Gen AI. We can’t escape it, right? But before that, I think you joined Dataiku in 2021, and ChatGPT was released after that. Your role must have evolved a lot. You went from working on products and being a solutions engineer to becoming a full-fledged CDO. Walk us through that transition. What kind of language was coming from your executive leadership, especially as you transitioned to being a field CDO? Post-ChatGPT, what were the conversations you were having with clients? Were you asked to focus more on generative AI as a service and other aspects of AI?
Catalina: Everybody is talking about Gen AI right now. I love not only my journey from the data science consulting aspect, but also the community journey because I believe that we have been evolving together in the last 20 years or so. I like to think about myself as an electronic engineer as well. So, I come with a hardware background, not only the software aspect of it. For me, it’s been an evolution from the hardware aspect and the compute, and the capacity, and the infrastructures, and the cloud, and everything else that we have as part of where this is happening physically.
And I would say that the journey has been evolving tremendously with the different techniques that AI, as a big umbrella, brings to the table. So, if you think about AI, we can think about machine learning, computer vision, natural language processing, etc. At the end, they’re all techniques that are being applied to different data, depending on the use case that you are trying to solve: structured, unstructured, or semi-structured. But with generative AI, and this is the beautiful part, it’s been fascinating to see the general public facing a new era where AI is part of their everyday tasks, and we are all involved in something that is AI-related without even thinking about it.
Simple examples: Netflix, where you watch a movie and then it recommends what to watch next; or Amazon, where you’re buying something and it gives you recommendations; or social media, Facebook, Instagram, or whatever, when it’s recognizing the faces of your friends and studying that for you. You are not even thinking about all that is happening there. But from the enterprise perspective, I would describe generative AI as a pivotal moment. I would say the era of AI began when GPT was released to the public. Because if we think about large language models (LLMs), this has been going on for years. We have had LLMs available to the public for years, like on Hugging Face, or some others that are even a mix between open source and not, and so on. But GPT, and its release by OpenAI, has been a line in the sand, because everybody is empowered by that technology outside the enterprise.
So what that’s triggering for everybody else in the community is that the conversation is moving: I am not only talking with the data analysts and engineers, but now the curiosity is coming from the C-suite. So that is also my evolution to Field CDO, which is, ‘Okay, let’s talk to the C-suite. Let’s ensure that the C-suite understands what this is,’ because there’s a lot of education that needs to happen. Not everybody is fully aware of the pros and cons. And this is not a silver bullet. It’s not like we’re gonna fix the world with this. We still have to think about this holistically. We have to keep that human in the loop. We need to ensure that you minimize hallucinations and everything else that you may have. But it is an accelerator. And I am seeing it in the field right now. The most beautiful use cases involve generative AI with these agents, pretty much accelerating the insights into the data in a more real-time fashion. So, it’s changing the way the enterprise is questioning its data, and it’s accelerating the insights that a lot of these subject matter experts or domain experts have in terms of the data that becomes crucial when they are making decisions, and therefore accelerating that time to value. So, generative AI is a pivotal shift. It’s changing tremendously the conversations that we are having, because now it’s pretty much chatbot-driven. A lot of people are targeting those kinds of use cases in the field where they can leverage not only the structured, classic data but now, more than ever, the unstructured data that is part of these use cases.
So, a fundamental piece of the equation that we are in right now as data scientists is understanding what these LLMs bring to the table and how they can be leveraged within enterprises. But in a way that is still governable, secure, and that is pretty much checking all the boxes that you need to be checking from the responsible AI, risk, and legal perspective, and everything else that we have been learning during the last 20 years. So, what it means to have explainability, what it means to handle hallucinations, whether it is going to be a legal concern for us, and how we’re gonna deal with it. This is a phenomenal transition, but yes, generative AI is a huge shift, and I would call it a pivotal moment for the industry right now.
Kashyap: And now, moving forward from Generative AI, we’re transitioning into Agentic AI, which is the new frontier. What do you anticipate for your role as it evolves with this shift?
Catalina: Supporting the evolution of the use cases targeting enterprise data using that technology. I actually had a keynote a couple of days ago on that, and that’s exactly what’s going on right now. So, when we think about those AI agents, at the end, we’re going to have multiple micro-automation jobs. If we think about it from that perspective, what’s going to happen is you’re going to have a task-specific agent. I see it as a digital intern that you are training, that you are helping, that you are providing the guardrails for, and for which you are describing specifically what its mission is. So, your mission is to help me with this and this and that, and I’m going to empower you, agent, to have access to that documentation, unstructured data, and whatever else you need to provide the best possible answer within that specific role.
If you put many of these together, you have these agents, and then you have a flow that actually allows them to communicate with each other. And from here, you start building into these new AI system architectures that are going to allow your SMEs (subject matter experts) to question this data in real time and have these phenomenal insights in a way that is now very natural for them, because it’s natural language, and they’re communicating with the data in the most organic way from the human perspective. So it’s empowering users from all domains and from all business units.
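To ground the “digital intern” idea, here is a toy sketch of task-specific agents wired into a simple flow. All the names (hr_agent, finance_agent, the keyword router) are illustrative assumptions, and the LLM call is stubbed out; a real system would route requests with a model and enforce the guardrails through the mission prompt.

```python
# Toy sketch of task-specific agents: each one gets a mission (its guardrails),
# access to specific tools or data, and a simple flow routes requests between
# them. Names are illustrative; the LLM call is a placeholder.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    mission: str                       # what this agent is allowed to do
    tools: list[str] = field(default_factory=list)

    def handle(self, request: str) -> str:
        # Placeholder for a real LLM call constrained by the mission prompt.
        return f"[{self.name}] would answer '{request}' using {self.tools}"

hr_agent = Agent("hr", "Answer HR policy questions only.", ["policy_docs"])
finance_agent = Agent("finance", "Answer expense questions only.", ["erp_api"])

def route(request: str) -> str:
    """Minimal flow: pick an agent by keyword; real systems use an LLM router."""
    agent = finance_agent if "expense" in request.lower() else hr_agent
    return agent.handle(request)

print(route("How do I file an expense report?"))
```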
My favorite use cases come from the Human Resources groups or the marketing teams, who usually weren’t the data scientists using structured data. Now, they are empowered by these user interfaces that are actually querying LLMs with retrieval-augmented generation (RAG) techniques that enrich the prompts with that domain-specific data before it goes through the LLM. Again, people are talking in the field right now about what this can do and the value it can bring to the table.
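A minimal sketch of the retrieval-augmented generation pattern she mentions might look like the following, with a TF-IDF index standing in for a vector database and a placeholder where the LLM call would go. The HR-policy snippets are invented.

```python
# Minimal RAG sketch: embed domain documents, retrieve the closest ones for a
# question, and pass them to an LLM as context. TF-IDF stands in for a vector
# database; answer_with_llm is a stub to replace with a real LLM client.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "Employees accrue 1.5 vacation days per month of service.",
    "Remote work requires manager approval and a signed agreement.",
    "Expense reports must be filed within 30 days of purchase.",
]

vectorizer = TfidfVectorizer().fit(documents)
doc_vectors = vectorizer.transform(documents)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question."""
    q_vec = vectorizer.transform([question])
    scores = cosine_similarity(q_vec, doc_vectors)[0]
    top = scores.argsort()[::-1][:k]
    return [documents[i] for i in top]

def answer_with_llm(question: str) -> str:
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    # Placeholder: send `prompt` to the LLM of your choice and return its reply.
    return prompt

print(answer_with_llm("How fast do I earn vacation days?"))
```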
So, how is my role going to evolve? The beauty is that I am leading the innovation. And I am with technology in my hands that is actually leading these use cases being deployed in the field, scalable, and thinking, not only, “I’m going to experiment with this one agent on my desktop,” but now it’s like, “How are we going to release this for the 50,000 people that are going to be questioning these data in real time?” So, I am leading that change, and that’s how my role evolves every day because next week I’ll have a new use case to share with the community, and it’s a phenomenal opportunity.
Kashyap: Before we wrap up, what advice would you give to someone who wants to get started with data and AI today, especially if they’re not from a technical background?
Catalina: Technology can be overwhelming, especially if you’re not from a tech background per se. But I will also say that it has come a long way. The skills required to have an AI agent today are completely different from what was needed 15 years ago. So, my advice, number one: don’t be afraid to dive in. There’s so much out there from a resource aspect. It can be overwhelming, but at the same time, if you have the end goal in mind—my end goal is to have my own AI agent that’s going to talk to me about music because I’m passionate about music. So, one needs to think about that. My number one recommendation: yes, I work with Dataiku, and maybe I’m a little biased, but there’s a phenomenal academy available to the public that is not only focused on the technology but also highlights the “whats,” “hows,” and “whys” of each use case.
So, what the solution is going to do is allow you to connect the dots a little bit. It doesn’t matter if you have coding experience or not. That’s the beauty of what technology brings to the table today—it’s here to close the gap between developers, business users, and everyone else. I used to use Excel, and now I’m ready for AI-driven tools. You can make that jump today because technology is bridging those gaps, and it’s something really cool.
For example, this can accelerate the process. I don’t need to know SQL or anything like that. I can still apply the technology to have a generative agent that will answer questions in a more natural way.
You have to remain as agnostic as possible to the infrastructure, though. There’s always going to be a new piece of hardware involved. You’ll have better computing power somewhere—better GPUs, for instance. But it’s crucial to remain agnostic to that change and keep orchestrating the pieces necessary to accomplish your goals. I think tools like open-source frameworks are a phenomenal starting point. If you have programming skills, you can use Python today, call the LLM from Hugging Face, and get started. But when I say “you can do it today,” I’m referring to everyone else, those who aren’t data scientists by trade.
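As one concrete reading of “use Python today, call the LLM from Hugging Face,” the snippet below uses the transformers pipeline API with a small open checkpoint. The model choice and prompt are illustrative only; any hosted or local model could be swapped in.

```python
# Minimal example of calling an open LLM from Hugging Face with the
# transformers `pipeline` API. Requires `pip install transformers torch`;
# the model (a small open checkpoint, chosen only for illustration) is
# downloaded on first use.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")
result = generator(
    "Three ways a wind farm can use predictive analytics:",
    max_new_tokens=60,
    do_sample=True,
)
print(result[0]["generated_text"])
```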
Today, we have the opportunity to sit down for a couple of hours, go through a few exercises in a portal or data academy, and understand what an LLM or AI agent is. You can get examples, review what it can do for you, and gain some insights. I love podcasts too—especially those that interview people in the field, sharing lessons learned. I also attend a lot of conferences. As we were talking about earlier, conferences bring the state of the art, showcasing what’s been done recently.
Staying as up-to-date as possible is important, but ultimately, it’s up to you. Don’t wait for your company to provide training. Today, you can be super productive by just Googling things, asking for examples of how AI is applied in your industry, and getting inspired. Complement that with open-source tools or a public academy portal and start doing a couple of hours of exercises. You can accomplish a lot, even without coding skills. Code or low code doesn’t matter—you can reach the same goals and go a long way with it.
This is a great opportunity to question how AI can benefit you and how you can apply it. But if you have no idea what LLMs or AI agents are, it’s time to explore. It’s the perfect moment to experiment. You can use tools like MidJourney to generate cool graphics, or even use WhatsApp to chat with Llama and ask something crazy just to see what happens. This is your time to experiment, understand how these tools work, and see how they empower you as a human.
Don’t be afraid, but also remain the one in control. See what it does, and decide how you want to use it to augment your capabilities. That’s the key to all of this.
Kashyap: Fantastic! This was really insightful, Catalina. I loved walking through your journey with you—it must have brought back so many memories. I’m sure our audience will truly appreciate hearing your story and will be inspired to pursue their own successful careers in the field of data and AI. Thank you so much for taking the time to share.