
Exploring the Interplay: Cloud Transformation and AI With Sol Rashidi

We are exponentially growing, and the slowest it's ever going to be is today, which means tomorrow is faster, and the day after that is faster.

The intersection of cloud technology and artificial intelligence (AI) marks a critical point in contemporary technological advancement. This dynamic relationship between cloud transformation and AI innovation has become the cornerstone of numerous industries. It encompasses a myriad of challenges, opportunities, and evolving paradigms that shape how organizations leverage these technologies. To give us more insight, for this week’s CDO Insights we have with us Sol Rashidi, former Chief Analytics Officer at Estée Lauder, who is acclaimed for her ability to unite technologists and non-technologists in leveraging data and technology for business growth. Recognized with prestigious awards such as CAO of the Year and Top 100 Innovators in Data & Analytics, Sol excels in shaping strategic visions and execution strategies, guiding companies towards cohesive paths of success. Beyond her professional accomplishments, Sol is a dedicated parent, an active runner and Krav Maga enthusiast, and has recently embraced golf. Her passion for exploration and for empowering others to unlock possibilities defines her approach.

The interview delves into the interconnectedness of cloud transformation and AI, focusing on their symbiotic relationship and critical roles. It explores key considerations in AI application development using cloud technology and assesses the impact of lacking cloud resources on technical processes. Additionally, it discusses the balance between on-premises and cloud data strategies, talent prioritization, evolving concerns about data security and privacy, and anticipates the future dynamics of real-time data usage in cloud-AI relationships.

AIM: What drew your attention to the symbiotic relationship between cloud transformation and AI as a podcast topic? Could you share the specific significance or relevance you find in these technologies that made you choose this subject?

Sol Rashidi: So, I’ve been in data analytics since the late ’90s, when Excel was, and for many corporations still is, the primary data tool. But I’ve ridden the wave of business intelligence, data engineering, blockchain, IoT, CAMS initiatives, Web 3.0, Big Data, ML, data science, and now AI. So I feel like every two to three years there’s a new buzzword, and everyone has a tendency to focus on the user interface of the products that get released. But there needs to be more attention on the backend workloads that fundamentally need investment and are needed even to make those applications work. Sometimes we tend to gloss over how things are done and made. We go straight to the pretty product that everyone sees without a true appreciation of all the workloads: compute, GPU, CPU, Cloud Ops, identity and access management, the non-sexy stuff that no one talks about, that seems very defensively driven but fundamentally powers the applications. And so it needed some attention in the space.

AIM: What key considerations do you believe are vital when constructing an AI application using Cloud technologies to ensure successful implementation and scalability?

Sol Rashidi: I think the area I’m personally passionate about applies to verticals such as oil and gas, mining, industrial, telco, and logistics, because what ends up happening is we end up grouping everything together: okay, Cloud operations, Cloud Ops, “we need extra capacity to run our LLM models; we use an API to ingest the data.” The words tend to get interchanged. But I think folks aren’t fundamentally aware that when you have an AI application that’s gone beyond a functioning prototype or has passed the POC, and you want to push it into production, it’s not code at rest. It is code that fundamentally needs continuous training as new data continues to get ingested. It’s what I call a live wire: it’s always living, eating, and breathing, so the compute cost of building a production-ready AI app is massive. When you’ve got industries in oil and gas, or in other verticals like mining, logistics, and telco, the amount of compute needed to keep these applications alive is tremendous, and the cost is tremendous. And when you’re pulling in data from sensors, IoT devices, and drones, you’ve not only got the volume of data to deal with, which in and of itself creates challenges; you’ve got connectivity challenges. You may be operating in remote areas because you need the real estate, or you’re running a mining operation, or you’ve got rigs or power grids out in remote areas connecting to metropolitan cities. You have to worry about things that most companies established in major cities don’t worry about. So, for me right now, cloud and compute are really interesting areas because you have these massive companies trying to do big things, and no one’s aware of what it takes from a cloud computing perspective and what it costs. One of the problems I’m trying to solve with a partner startup, more of a scale-up company, is how to solve for cloud computing at the edge.
So if you’ve got these massive companies that fundamentally rely on sensor and drone IoT data, how do you process information at the edge, parse out non-essential data from essential data, and then only transmit the essential data to your cloud environment, assuming you have connectivity, so that you can lower the total cost of operation (TCO) for your Cloud Ops when running these AI apps in production? That’s a passion project in my mind, amongst others. But that’s why this is interesting. People aren’t thinking about it. They’re not talking about it, and that, for me, is a signal that not many people have AI apps in production yet, because if they did, more people would be talking about cloud computing and how it can be an inhibitor or cost-prohibitive at this point.
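The edge-filtering idea described here, parsing essential from non-essential data before transmission, can be sketched in a few lines of Python. The anomaly threshold, sampling rate, and field names below are illustrative assumptions, not details from any specific platform:

```python
# Minimal sketch of edge-side filtering: keep only "essential" sensor
# readings before transmitting over a costly or unreliable link.
# Threshold and sampling rate are hypothetical.

ANOMALY_THRESHOLD = 80.0  # assumed alert level for a sensor metric

def filter_essential(readings):
    """Return only readings worth transmitting: anomalous values,
    plus a sparse 1-in-60 sample of normal data for baselining."""
    essential = []
    for i, r in enumerate(readings):
        if r["value"] >= ANOMALY_THRESHOLD or i % 60 == 0:
            essential.append(r)
    return essential

# Simulate an hour of once-per-second readings with two anomalous spikes.
readings = [{"ts": i, "value": 20.0} for i in range(3600)]
readings[100]["value"] = 95.0
readings[2000]["value"] = 88.0

to_send = filter_essential(readings)
print(len(readings), "collected ->", len(to_send), "transmitted")
```

The point of the sketch is the ratio: thousands of readings collected at the edge reduce to a few dozen transmitted, which is where the TCO saving on cloud ingestion comes from.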

AIM: What potential impact do you believe the absence of Cloud technology would have had on the intricate technical processes you mentioned earlier?

Sol Rashidi: There are a few things. Cloud is a great thing. But I was there during the IBM years when they pushed their CAMS initiative, which stood for Cloud, Analytics, Mobile, and Security, the Cloud being the “C.” And there was this knee-jerk reaction of “What? I’m not putting my data on the Cloud. It’s insecure.” There was a lot of pushback and resistance; now it’s the new way of doing business. But if you look at the ecosystem, yes, for small and medium businesses the Cloud is more cost-effective. Companies that are in massive scale-up mode can dial up and dial down. But if you look at the core processing, compute, and storage power needed by a lot of the Fortune 100 companies that aren’t in the most glamorous sectors, or that may not be consumer-oriented but are more B2B, a large majority of their footprint is still on-prem. You still fundamentally need the on-prem servers that can support your operations with high stability and high reliability. So the world thinks everyone is on Cloud, but most people forget that IBM mainframes, DB2, and Teradata still exist. They still have a very strong customer base because some companies are still fundamentally reliant on these massive physical data centres. And the Cloud is not great for all use cases, by the way. If we didn’t have Cloud, we would be forced to keep the existing data centres alive, create an element of mobility by containerizing data centres so you can drop-ship them anywhere these remote operations or verticals run, and, now that storage and compute are a lot cheaper, reduce the total unit cost of these things so that they can scale. So yes, I’m glad Cloud is around, but I think those problems are still solved with on-prem data centres, and we don’t talk about it because it’s not sexy.

AIM: How crucial is it to balance on-premises and Cloud data strategies? What key considerations define a successful data architecture, harmonizing both Cloud and on-premises data while integrating seamlessly with Engineering Systems for a thriving AI approach?

Sol Rashidi: That’s a heavy question, but I’ll give you an example and a simple answer, although it’s not my complete and comprehensive answer. But when advising or helping companies, one of the key questions I poke into is the maturity of the workforce. So, for example, I have worked with many companies that have migrated from an On-prem to a cloud environment. That’s great.

If your workloads are light, no problem. But if your workloads are heavy, you have to do a cost comparison to see if it makes sense to migrate to the Cloud. A simple example I give: say you’re running a basic analytical function and you’ve got a non-mature team of business data analysts, with maybe a few data scientists here and there who are recently out of school, or are PhDs who have always been in a research capacity. They don’t have a heavy background in computer science or data engineering. So you run the risk that if you do migrate to a cloud environment, an inexperienced analytics department may run a job overnight, not knowing to turn it off. That’s a massive cost to your Cloud. Or they may rerun a full query instead of pulling a delta load. Say they ran the original query over the past two years of data, and they need to produce the report weekly. They only need to pull in the last week’s data, but instead, because they’re inexperienced, they rerun the entire report across all two years. That’s a heavy load. If you’ve got an inexperienced analytics team, or a team that doesn’t have the fundamentals of computer science in place, doesn’t have any education in how storage costs and compute costs are incurred and how workloads impact both, and doesn’t know how to optimize queries, don’t expect cost savings: jobs run overnight, queries sometimes time out only to start over again, nobody pulls delta loads, and entire queries get rerun.
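The full-rerun-versus-delta-load mistake described above is easy to see with a toy example. The following Python sketch uses an in-memory SQLite table as a stand-in warehouse; the table name, column, and watermark value are hypothetical:

```python
import sqlite3

# Stand-in warehouse: ~2 years of daily rows in a hypothetical sales table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (day INTEGER, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [(d, 100.0) for d in range(1, 731)])

def full_rerun():
    # Anti-pattern: rescans the entire two-year history every week.
    return conn.execute("SELECT COUNT(*) FROM sales").fetchone()[0]

def delta_load(watermark):
    # Pull only rows newer than the last successfully loaded day.
    return conn.execute(
        "SELECT COUNT(*) FROM sales WHERE day > ?", (watermark,)
    ).fetchone()[0]

print("full re-run scans:", full_rerun(), "rows")    # 730
print("delta load scans:", delta_load(723), "rows")  # 7 (last week only)
```

On a real warehouse that bills by data scanned, the same weekly report costs roughly a hundred times more when the team reruns the whole history instead of tracking a watermark.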

And most people don’t even consider this as an element in their framework when deciding which workloads to move to the Cloud and which ones not to. But I’ve had CIOs come to me saying, “Our Cloud costs have imploded. We don’t know how to contain them.” And I’m like, “All right, start with your analytics department.” These are things that aren’t even taken into consideration. So part of me is saying: before you decide you’re going to migrate from 100% on-prem to 100% cloud, take a few areas, your manufacturing plants and sensor information, your analytics department and its maturity. Go through and understand the workloads and the necessary GPU, CPU, and storage, to ensure there’s a cost-conscious component before you migrate everything over just because it sounds good and everyone else is doing it.

Not to mention, some folks are in on-prem environments using SAS, a platform that allows you to do the entire data engineering and data modelling lifecycle in a single environment. If all these models are prebuilt, no one’s taken into consideration all the automated pipelines that have already been built. You have to regenerate those in the new environment, rerun the models there, and then do control checks to ensure they’re producing the same output. No one’s considering the migration cost of doing that, assuming you have hundreds of models. Imagine if you have thousands because you’re a large organization. So, for me, going Cloud isn’t as easy as “everyone else is doing it, let’s do it.” There are so many functions within an organization for which even migrating is cost-prohibitive, and the immaturity of the organization can make the Cloud cost-prohibitive for you as well.

AIM: In a scenario where talent is deemed the primary consideration, creating a bit of a chicken-and-egg dilemma, what would you prioritize first: ensuring a pool of skilled talent for adaptable cloud architecture to facilitate scalable AI, or tailoring your architecture based on the existing talent pool?

Sol Rashidi: You can design the best data ecosystem and the best data architecture, but if you don’t have people who can capitalize on it, learn it, or know how to leverage it, it doesn’t matter. You may not have the business talent that knows how to run optimized queries and do delta loads, but then you need tremendous IT talent to put controls in place that, say, block queries from running past two hours. You can’t have no talent on either end. So you either have to have a very mature infrastructure and IT team that has already gone through the gotchas and the lessons learnt, so they can systematically put the controls in place and teach and train the data and analytics groups that are going to leverage the cloud environment, or you need a very sophisticated business user group that knows query optimization, knows when things will time out, and knows how to do delta loads, so that they’re not putting such a heavy burden on the cloud environment.
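The kind of IT-side control mentioned here, blocking queries that run past a time budget, might be sketched as follows. A real warehouse would enforce such limits server-side; this Python wrapper, with timings shortened from the two-hour example to fractions of a second, only illustrates the idea:

```python
import time

class QueryTimeBudgetExceeded(Exception):
    pass

def run_with_budget(query_fn, budget_seconds):
    """Run query_fn, which yields partial results, and abort if it
    exceeds its time budget instead of burning compute overnight."""
    start = time.monotonic()
    results = []
    for chunk in query_fn():
        if time.monotonic() - start > budget_seconds:
            raise QueryTimeBudgetExceeded("query exceeded its time budget")
        results.append(chunk)
    return results

def slow_query():
    # Hypothetical query that produces five chunks of results slowly.
    for i in range(5):
        time.sleep(0.05)  # simulate per-chunk work
        yield i

# A generous budget lets the query finish; a tight one cuts it off.
print(run_with_budget(slow_query, budget_seconds=1.0))
try:
    run_with_budget(slow_query, budget_seconds=0.08)
except QueryTimeBudgetExceeded as e:
    print("blocked:", e)
```

The design point is that the control lives in infrastructure, not in user discipline: an inexperienced analyst's runaway query is stopped by the platform rather than discovered on next month's bill.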

So it’s not a chicken-and-egg. I always say you can build whatever you want; it doesn’t make it successful or popular. You have to start with assessing maturity on the IT infrastructure side, to ensure they know how to put the process controls in place and can upskill or train the D&A team, or you need a sophisticated business team that knows how to run this function.

AIM: How have you observed the challenges around data security and individual privacy evolving, especially with the increasing reliance on AI and cloud technologies? Considering your experience at this intersection, what notable challenges are prevalent in 2023 during the implementation of GenAI compared to previous years?

Sol Rashidi: Data governance is a very broad kitchen-sink term, and my running joke is that if you ask ten people to define data governance, you’ll probably get a hundred different answers. So, instead of tackling data governance as a whole, I’ll focus the conversation on one massive issue I see that often gets overlooked, and again, no one wants to talk about the non-glamorous, non-sexy stuff. Everyone wants to talk about the application itself, the business value, and all the things you can do. But when we migrated into a cloud-dominant environment, we leveraged things like the data lake, an S3 bucket where we just dumped all the data. We got into this data-hoarding mode, which is fine because storage is now cheap. If you use the native components within any of the big three or big four Cloud environments, you can, for the most part, separate compute and storage, and storage is very cheap. My running joke is that you can buy a four-terabyte drive for 129 bucks on Amazon now, while it cost IBM millions to run four terabytes of data on a mainframe in a room in the early 2000s. So storage is super cheap, and we end up hoarding a lot of information. More data is better, because everyone always says, “I wish I had more access to data.” But when governance kicks in, the biggest issue I’ve seen in the teams I’ve led, the companies I’ve joined, and the companies I’ve helped is that we have an amazing ability to dump data, but we don’t have the patience and the tolerance to categorize, catalogue, and track the lineage of that data.

So what ends up happening, whether you’re building an LLM model, providing a data set in a data domain to a data science team running a bespoke analysis or prediction model, or importing that information into your data warehouse environment so your BAs and DAs can run their reports and enhance them with new data sets, is that the biggest challenge is fundamental: Where can I get access? How can I get access? Or they say everything’s in here, but no one knows what’s in here. You have this massive S3 bucket. And this is why there’s that running joke about the data swamp and the data ocean: we always end up short-circuiting, taking shortcuts, when it comes to cataloguing and lineage. So often you don’t know what data domains are in there, and you have absolutely no visibility into where the data came from or whether it’s the right source of truth. So the data’s validity gets questioned, and that’s a big no if you’re in the D&A space. If you talk about governance, the two steps we always end up skipping or bypassing, or for which funding never gets approved, are cataloguing and lineage, which give validity to the information you access to begin with. So we end up in this data-hoarding mode, but we never really organize the data or make it available; we just end up paying for storage. Another way of looking at it: you buy a bunch of clothes and throw them all into a closet. You haven’t divided them into sneakers and jackets and pants and shirts, so you can’t grab and go. You’re sifting through a big pile of stuff you bought and threw into a closet. How is that usable? So overall, what the Cloud has created is lower barriers to entry for collecting and aggregating more data, but it has introduced more concerns around governance protocols, like cataloguing the data and having proper lineage, so you know the validity of where it came from.
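As a rough illustration of what cataloguing and lineage add on top of "dump everything in the bucket," here is a minimal Python sketch. The catalog structure, field names, and source-system names are all hypothetical:

```python
# Minimal in-memory catalog: record what each dataset is and where it
# came from at write time, so validity questions can be answered later.
catalog = {}

def register_dataset(name, domain, source_system, derived_from=()):
    """Catalogue a dataset: its domain, its source, and its parents."""
    catalog[name] = {
        "domain": domain,
        "source": source_system,
        "derived_from": list(derived_from),
    }

def lineage(name):
    """Walk derived_from links back to the original source systems."""
    entry = catalog[name]
    if not entry["derived_from"]:
        return [entry["source"]]
    sources = []
    for parent in entry["derived_from"]:
        sources.extend(lineage(parent))
    return sources

register_dataset("raw_orders", domain="sales", source_system="erp_export")
register_dataset("raw_web_events", domain="marketing",
                 source_system="clickstream")
register_dataset("weekly_revenue", domain="sales",
                 source_system="warehouse_job",
                 derived_from=["raw_orders", "raw_web_events"])

# The validity question answered: where did this report's data come from?
print(lineage("weekly_revenue"))  # ['erp_export', 'clickstream']
```

Real catalog tools carry far more metadata than this, but the principle is the same: the few extra fields captured at ingestion time are what turn a data swamp back into something queryable and trustworthy.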

AIM: How will the cloud-AI relationship evolve with real-time data usage? What changes do you anticipate in this relationship over the next one or two years, five years, and a decade?

Sol Rashidi: I don’t even know what will happen in two or three months. Everyone was building products and wrappers around an API, whether it’s coming from Anthropic or OpenAI or whatever it may be, and then during DevDay it was announced that everyone can now create a GPT; you don’t need an Enterprise license to do XYZ or to access the API. So, at this point, my goal is to keep pace week after week without predicting or anticipating what will happen in one to two years, because Moore’s Law is in full effect. We are exponentially growing, and the slowest it’s ever going to be is today, which means tomorrow is faster, and the day after that is faster. So even those of us in this space can’t really call ourselves experts, because we’re barely keeping up with the pace of change. My honest and humble answer is: I don’t know.

AIM Research


AIM Research is the world's leading media and analyst firm dedicated to advancements and innovations in Artificial Intelligence. Reach out to us at info@aimresearch.co
