Jesse Anglen Predicts a Future Where Humans Become Increasingly Unnecessary With Agentic AI

I have the really unpopular view that humans are going to become increasingly more unnecessary.

Agentic AI represents a groundbreaking advancement in artificial intelligence, where multiple specialized agents collaborate to achieve complex goals. Unlike traditional AI systems that perform single tasks, agentic AI mimics human-like problem-solving by integrating various functions such as research, analysis, and decision-making. This innovative approach enables AI to undertake sophisticated projects, streamline processes, and enhance productivity across diverse industries.

This week, we delved into the potential and challenges of agentic AI with Jesse Anglen, the CEO of Rapid Innovation, a development company that helps entrepreneurs build blockchain and AI applications. With over a decade of experience in the real estate sector, Jesse has been leading Rapid Innovation since January 2019. Before stepping into this role, he served as Project Team Lead at PEG Network from February 2018 to January 2019, overseeing the development of the PEG Protocol, a tool that enabled users to create stable versions of ERC20 tokens. Jesse also held a project management position at SmartLaw.IO from January 2018 to June 2019.

Key Highlights:

Evolution of Technology: From data engineering and MLOps to the rise of agentic AI, reflecting the rapid advancements in technology.

Practical Applications: Real-world examples of agentic AI in action, from automating sales floors to revolutionizing content creation.

Cost and Efficiency Gains: Immediate benefits such as significant cost savings and enhanced operational efficiency.

Human-AI Collaboration: The balance between leveraging AI for repetitive tasks while maintaining human oversight for context and reasoning.

Future Implications: The potential for AI to reduce the necessity for human involvement in various tasks, raising questions about the future role of human agency.


Kashyap: Hello and welcome everyone to the next episode of the AIM Media House podcast, Simulated Reality. Today we have with us Jesse Anglen, the founder and CEO of Rapid Innovation. Jesse, how are you doing today?

Jesse:  I am doing phenomenally today. I think it’s been productive at least.

Kashyap: Election day in the US. Are you following it closely?

Jesse: I follow it just enough to fulfill my responsibility to vote, but beyond that, I try not to get too caught up in the political whirlwind. I’m fortunate to live a good life and focus on things that truly matter to me. My work is largely apolitical—it doesn’t matter what side of the aisle you’re on; there are inspiring entrepreneurs doing amazing things everywhere. While I do keep an eye on the developments—Twitter’s open in another tab, after all—it’s more about observing the drama unfold than diving deep into the chaos.

Kashyap: We’re going to talk about agentic AI. I think it’s funny how 2025 is going to be all about agentic AI. The last one or two years before that were about data engineering, and before that, it was MLOps. I think every year I could host the same podcast with the same set of leaders and have more and more perspectives. But yeah, tell me a little bit about your journey. What is it that made you want to talk about agentic AI? What cool stuff have you been doing around it, and how have the different advancements in generative AI kind of taken you on this journey?

Jesse: Background on me is I’ve been an entrepreneur for a long time. I started working with startups who wanted to use cool technology to build things probably 15 years ago. Back then, the technological advancements were not nearly as cool as they are today. Then blockchain came around, and that was super fascinating because that was a new technology. I drank the Kool-Aid 100%. It was bad. I completely dove into the blockchain world head first. I actually even started Rapid Innovation in a lot of ways thinking that I’m going to fundamentally change the way the world works by helping entrepreneurs build blockchain applications. That was what, five years ago now. 

And about three years ago, I started playing around with AI. As generative AI became better and better, I actually started seeing how you could build basic agentic systems with it. It was a little bit clunky. Even two years ago, we started working on some agentic systems that worked and were really cool, but for me, generative AI has always been a stepping stone towards the really, really cool things that you can do with agentic systems. I have no idea when, but at some point, we’re going to get really, really powerful reasoning machines that can almost mimic human intelligence and decision-making. I think at that point, agentic AI is going to blow up. You can see some of it now, and in my mind, it’s one of the most practical applications of technology for business that exists today, beyond obvious things like email. It’s more efficient to email someone than to drive to their office or whatever; there are obvious advancements in technology like that, but agentic systems are wildly practical.

What’s interesting is when I talk to most people, they have absolutely no idea what I’m talking about. They think of generative AI as something that writes poems or helps rewrite emails or whatever people are using it for. In my mind, there are just amazingly powerful things that you can do with generative AI systems that actually all work together, reason together, accomplish tasks together. And I find it super fascinating.

Kashyap: One of the things you mentioned is that there are a lot of business applications for agentic AI. Personally, I believe that agentic AI will go beyond business and won’t be limited to large enterprises. Here’s my thing: I want to draw a distinction, and with your expertise, we’ll be able to do it even better. Let’s try to define what constitutes agentic AI. When I ask my Alexa to turn on the light, it’s doing a task by itself. If I ask my ChatGPT to write or refine my email, it’s doing another task. At what threshold or level do a series of tasks become complicated enough to be called agentic AI, and how should that threshold be defined?

Jesse: I think that you could define an architecture to be agentic when it has multiple specialized agents working together to accomplish a common goal. Whether that’s two agents or 30 agents working together, I don’t know. And how complex it has to be, I don’t know. But the first project that came into my mind when you said that is we have a project that we built for writing blogs. You can take that all the way from generative AI, meaning you can hop into ChatGPT and say, “Write me a blog about X,” and ChatGPT will spit out a blog. It’s not going to be a very good blog. The content’s going to suck. It’s going to sound very AI-written. It’s not going to be SEO optimized. It might not be tailored to your business or your audience. 

But you can take an agentic system and do the same thing and accomplish something incredibly different. So, for instance, a lot of the content that we create, the written content, is actually created by an agentic system with 38 different AI agents that all fulfill different roles way beyond content writing. Because if you think about this, if I said, “Hey man, I need you to go write me a blog article about agentic AI,” you’re not going to open up a Google Doc and start writing. That would be stupid. No one does that. You’re going to, number one, create a project brief and understand: what’s the purpose of this content? Who’s my audience? Who am I reaching? Then you’re going to think about search engine optimization. How am I going to make this content applicable? Then you’re going to go and do a bunch of research on different companies that are really playing around with agentic AI. You’re going to compile that research and create a piece of content. You can fulfill all of those different roles as a single intelligence in a one-shot prompting style, where you first sit down and think about the audience, then move on to the next task, and so forth.

But if you had a team of people, and there were five of you or 10 of you or 12 of you, how would you divide that work up? And who would you put in charge of that work? Ultimately, when we think about agentic systems that we build, that’s how we think about those tasks. And so, on this blog writing agentic system, when a human being interacts with it and says, “I need you to write me a blog article about agentic AI and I want it targeted at healthcare,” the very first thing that will happen is an agent will spin up that’s actually just a manager. It’s not going to do any writing. It’s not going to do anything. It’s going to go through and create the job descriptions for all the people that it needs. It needs an agent that can do research and has access to the internet. It needs agents that can write copy. It needs agents that can look at SEO optimization. It needs agents that are just critics that can read it and provide feedback. And it needs the ability to create those agents and fire those agents if they’re not doing a good job. And then each of those agents needs the ability to communicate with each other.

So, a true agentic system, in my mind, at least when I’m looking at this system running, if I go give it a task, it hires a bunch of other agents. Those agents start communicating with each other. They’ll lay out tasks. One of them’s going to hop on Google, start researching, and bring topics back. It’s going to give it to the manager. The manager is going to pick from the best of those topics and say, “Okay, this makes sense.” It’s going to create a brief. That brief is going to go to a copywriter agent. The copywriter agent is going to write a first draft of the copy. He’s going to give it back to the review agent and the SEO agent. The feedback from the two of them will be combined, given back to the copywriting agent, who will recreate the copy, submit it for approval, and then that approval will happen, etc. It’ll get posted on the blog. It’ll go to the system that works with WordPress, post it, and send an email to whichever human counterpart requested it. Then you’ve got a new blog posted. In my mind, that’s an agentic system. It’s a workforce of AI intelligences that all have specific jobs working together to meet a common goal.
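For readers who want a concrete picture of the manager-and-specialists pattern Jesse describes, here is a minimal Python sketch. It is not Rapid Innovation's implementation; the `call_llm` helper, the agent roles, and the fixed two-round review loop are hypothetical stand-ins for whatever model API and orchestration approach you actually use.

```python
# Minimal sketch of a manager-orchestrated agentic pipeline (illustrative only).
# `call_llm` stands in for a real model API call; here it just echoes the prompt.

from dataclasses import dataclass

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Hypothetical LLM call. Replace with your model provider's API."""
    return f"[{system_prompt}] response to: {user_prompt[:60]}..."

@dataclass
class Agent:
    role: str           # e.g. "researcher", "copywriter", "seo", "critic"
    instructions: str   # the job description the manager writes for this agent

    def run(self, task: str) -> str:
        return call_llm(self.instructions, task)

def manager(request: str) -> str:
    """The manager does no writing itself: it hires specialists,
    routes work between them, and decides when the draft is done."""
    researcher = Agent("researcher", "Research the topic and return key findings.")
    copywriter = Agent("copywriter", "Write a blog draft from the brief and research.")
    seo = Agent("seo", "Suggest SEO improvements for the draft.")
    critic = Agent("critic", "Critique the draft for accuracy and tone.")

    brief = call_llm("Manager: turn the request into a project brief.", request)
    research = researcher.run(brief)
    draft = copywriter.run(f"Brief: {brief}\nResearch: {research}")

    # Review loop: combine feedback and send it back to the copywriter.
    for _ in range(2):  # a fixed number of rounds keeps the sketch simple
        feedback = seo.run(draft) + "\n" + critic.run(draft)
        draft = copywriter.run(f"Revise using this feedback:\n{feedback}\nDraft:\n{draft}")

    return draft  # a real system would now hand this to a publishing agent (e.g. WordPress)

if __name__ == "__main__":
    print(manager("Write a blog article about agentic AI, targeted at healthcare."))
```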

Kashyap: However, a lot of these systems were probably designed with the intention of reducing the effort needed from humans, while at the same time human agency was considered equally important when it comes to agentic AI. Do you envision a future where technology mainly reduces that effort while humans remain in the loop, or do you believe that human agency will be taken away? How do you see that balance evolving over time?

Jesse: I have the really unpopular view that humans are going to become increasingly more unnecessary. This is mostly just because of a couple of things. If you go back three years inside my company, I used to have over 300 people working for me. We started building agentic systems internally to make our processes more efficient. Let’s pretend that three years ago, we could get 10 units of work with 370 people. Today, I have 100 people working for me, and we can get 14 units of work. I’ve gotten rid of 270 people and can get more work done because I have created a bunch of agentic systems that do the work people used to do. Those people are no longer necessary. To me, the writing’s on the wall.

The people I’ve kept are the ones who really understand how to interact with these agentic systems. Generally, it’s problem-solving skills because AI is still pretty stupid. Problem-solving skills, organizational skills, architecting skills, creativity skills—those kinds of things are where I still need humans. But in the actual doing of the work, strategy would be another one because AI sucks at strategy. In the actual doing of the work, though, I don’t need a bunch of people to do that work for me anymore. The AI does it. As AI gets smarter and better at strategizing, creativity, problem-solving, and reasoning, I think you’re going to find that you need less and less human involvement. So, I am personally of the unpopular opinion that human agency is going to become less and less relevant as we move forward. Now, is that 10 – 20 years from now? I don’t know exactly, but I’m seeing it happening.

Kashyap: My main problem is, are we ready for this conversation? Today is election day, and we are talking about gun rights, abortion rights, and other issues that are more immediately concerning to people, especially in the US context, including how Republicans and Democrats will stay divided over time, or what the left and right will look like. At the same time, what does this mean for enterprises and humans during the transition? For example, you said you had around 370 employees three years ago, and since then, 270 roles have become redundant. How does that transition occur? It can’t be a one-off; it needs to be a gradual process. What is your opinion on all of this?

Jesse: Can you imagine if some company like Anthropic or OpenAI came up with an AI tomorrow that could just replace human workers altogether? The amount of chaos that creates on planet Earth would be staggering. It couldn’t happen. If it did happen, it would throw the entire planet into a kind of chaos that we haven’t seen probably since the Dark Ages. It would be horrible. And so I think that any technology that has the potential for a massive amount of disruption can only move as fast as the people who are adopting it will let it move. For instance, I get to have conversations with large enterprises where agentic systems could replace very large workforces. I had a conversation with someone the other day, and a system that we could build in maybe three months would replace 10,000 employees. They’re not going to do it. Because at the end of the day, it harms their reputation. It’s just harmful.

Kashyap: I think especially in terms of opportunities for agentic AI, the conversations are not as black and white as with any other topic. Let me give you an example. When I was having a conversation with a vegan, they said we should all move to a plant-based diet and shouldn’t have animal products. I asked, are you saying that should happen in one day? Even if you want to transition, and I agree that red meat consumption is probably bad for greenhouse gases, should it happen in one day? Can you imagine the number of livelihoods it would disrupt if you eliminated meat in one day? You have to transition out of it. I’m not taking sides on whether you should be vegan or not; I have grown up eating meat, but I also believe there is a case, which vegans make, that you should probably move more towards a plant-based diet. But it’s not either/or. The same way with AI, it’s not like tomorrow there’s going to be agentic AI and suddenly, boom, there are no jobs anymore for anybody. So, my question is, how do you see the transition to agentic AI affecting job markets and livelihoods?

Jesse: It would be terrible. If you look at the impact that the Industrial Revolution had on the world and the jobs that were lost and the poverty that was created from that, especially on the East Coast and in the farming communities and manufacturing areas, it was devastating. That took 15 years for the Industrial Revolution to really come into full swing. The technology existed to accomplish everything they accomplished in those 15 years on day one. It just took them 15 years to implement it. If they had been able to implement it on day one, it would have been devastating. So I do think it’s going to roll out slowly. It’s going to have to roll out slowly. I don’t think there’s a way for it not to. There’s something else too that I think is important. 

Human beings bring something unique to the table. We’re not just data processors. If you look at an LLM, they’re able to mimic some human emotion and solve problems, and let’s say they get way better. Let’s say they become significantly more intelligent than the most intelligent human beings that have ever existed, creating a superintelligence. Let’s say they can reason better than most humans and mimic creativity, emotion, and all the other things we think of as human. The truth is that they can’t be human. An AI will never have the drive to build a big business the way a person does, because that drive to succeed comes from inside us. They’re never going to want to create a piece of art that changes the world or create a system that changes an industry. They’re going to be following directions. They’re going to be order-taking machines. That’s a calculator at the end of the day, maybe a really complex, smart calculator that can do a lot of things.

But in my opinion, that needs to be driven by some form of humanity. What I’ve noticed inside my company, as someone who has probably done more than most in implementing agentic systems and AI into the day-to-day operations of a company, is that today I could flip a switch, get rid of every single person who works for me, and just have computers do everything. I wouldn’t. Personally, I wouldn’t, because this morning I had an awesome conversation where I laughed and we made jokes with each other, talked about the different clients we’re working with, and had a good time. I can’t do that with an AI. There is no replacement for that human interaction. It actually makes me more creative, inspires me to solve more problems, and encourages me to do the things I want to do. I want to surround myself with people who understand that that is what they bring to the table as human beings, not necessarily the amount of work they get done. Because this is the fundamental shift that I think needs to happen in people’s thinking.

Right now, human beings tend to value others or themselves based on the amount of work they can get done. You go back five years, and I had a guy that worked for me who was amazing at writing procedure manuals. He could sit down in a day and do something that would take me a week because he just loved to write procedure manuals. To me, he was very valuable as an employee because he was really good at writing procedure manuals. The truth is that was what he had learned to do that made him valuable. Everyone focused on that as his KPI for success. Right now, you can go on ChatGPT. There are platforms that exist where you can create process manuals. That’s easy to do. It now costs nothing and takes two minutes instead of a whole day. But the thing that made him valuable wasn’t his ability to write process manuals. This is where I needed to start thinking about the difference in how people actually work. It was his ability to look at an entire process and understand how all the parts and pieces work together. He was an architect. At the end of the day, that’s what he was really good at. He was an architect. If I give him the ability to work with an AI as an architect, all of a sudden, he’s able to 50x his productivity because he’s relying on his humanity and the thing he’s good at. He has outsourced the doing. 

I believe that that’s what the transition will actually look like. Human beings outsource the doing, and they maintain control, though that’s not quite the right word, I’m thinking of a different word and can’t find it, but they’re going to maintain the humanity part of what is happening, the thing that makes us human, the reason we’re not still in the Stone Age, that we’ve innovated beyond that, the reason we’re not still in the Industrial Age, that we’ve innovated beyond that. Life is better today than it ever has been. The world will just become a better place because we won’t have to do everything anymore. The robots will do it. We’ll just think.

Kashyap: I think this is a very interesting conversation, but at the same time, I would also like to bring us back to what agentic AI will look like in another two or three months, in 2025. What are some of the cool use cases you’re looking at? Whether it will replace jobs or completely revolutionize how human societies exist is a very fun conversation to have, but I want to focus on the more immediate opportunities that we are seeing. What are some of the cool applications, immediate cost savings, and immediate revenue streams that you’re seeing from an opportunity perspective?

Jesse: Let me rattle off the first three that popped into my head when you said that. I worked with a guy in the Midwest who had a manufacturing plant. He had already automated his entire manufacturing plant. It was all robots doing the assembly, cutting, building, and shipping. But his sales floor was very human-heavy. He had about 400 people working on the sales floor. They would read emails and input that into the automated floor robotic system that was actually doing the work. So we built an agentic system that could basically take in all the emails and all the phone calls, understand what the client wanted, and interface directly with the automated production floor. Basically, he went from having 400 people on his sales floor to having two people on his sales floor who just watched the agentic system. It took one guy three weeks to build that for him. Practically speaking, it was probably one of the best cost savings I have ever seen across any of my clients. He had a 400-person company, and then a month later he had a five-person company, and fewer mistakes were made. Customers were happier because they got exactly what they needed instantly instead of having to wait for a human. That was an agentic system that went through and basically acted like all of those people, read the emails, and did all the stuff that it was supposed to do.
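To illustrate the intake step Jesse describes, here is a minimal Python sketch of turning a customer email into a structured job for an automated production line. It is illustrative only: the `call_llm` stub, the JSON fields, and the `submit_to_production_floor` client are hypothetical stand-ins, not the client's actual system.

```python
# Illustrative sketch of the email-intake step: parse what the customer wants
# and hand a structured order to the automated production floor.
# The LLM call and the production-floor client are hypothetical stubs.

import json

def call_llm(prompt: str) -> str:
    """Stand-in for a real model call that returns JSON describing the order."""
    return json.dumps({"part": "bracket-A12", "quantity": 250, "due_date": "2025-02-01"})

def extract_order(email_body: str) -> dict:
    prompt = (
        "Read this customer email and return JSON with fields "
        "'part', 'quantity', and 'due_date':\n" + email_body
    )
    return json.loads(call_llm(prompt))

def submit_to_production_floor(order: dict) -> None:
    """Hypothetical interface to the plant's existing automation system."""
    print(f"Scheduling {order['quantity']} x {order['part']} for {order['due_date']}")

if __name__ == "__main__":
    email = "Hi, we need 250 of the A12 brackets by Feb 1. Thanks!"
    order = extract_order(email)
    # In a real deployment, a human or a validation agent would review edge cases
    # before anything reaches the floor.
    submit_to_production_floor(order)
```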

So that’s the first one that popped into my head. I’m building another one that I’ll actually launch here probably in the next week. I’ve got an agentic system that replaces our entire discovery system. So let’s say you had an idea for an application you wanted to build. The standard process would be you’d call a development company. You’d explain your idea to them. They would then ask you a bunch of questions. You would answer those questions. They would take those answers, architect a solution, figure out how long it was going to take to build it, and then give you a timeline and a price to build the thing that you wanted to build. That current process to estimate a project takes anywhere from one week to four weeks, let’s say, depending on how complex the project is. We’re going to launch an agentic system next week where you hop on the website or you call and have that conversation with an AI. The AI asks all the questions, puts everything together, and then the agentic system goes in, figures out how long it’s going to take to build, what the best tech stack is, figures out what questions to ask you, runs through that entire process that usually would take five or six people three weeks to do, and it’ll get it all done in two days. That’s another interesting application of this technology.

What’s kind of a silly one that’s been fun is our YouTube channel is completely an agentic system. There’s an avatar of me that goes on and hosts a YouTube video. There are agents that research topics that would be interesting to our clients, put together scripts, and feed all of that stuff to a PlayHT or ElevenLabs cloned voice of mine that then creates me talking and having a conversation. That gets fed into another agentic system that edits a video, grabs all the B-roll, puts it all together, and posts it to YouTube. That’s another agentic system that has been really interesting client-wise.

I had another client that was really interesting where they wanted to explore the idea of solopreneurship, where you have a single person in charge of a company who becomes the bottleneck to their company because all of the value of the company is wrapped up in however many hours they stay awake and can be productive. Think of someone who’s like a business coach, but there are a lot of different examples of people who are doing that kind of job. They built a system where you take the entire body of work that person has ever produced, feed that into a vector database, and then give access to it via a RAG system. You give access to all of that data to an LLM and then create a personality for it through a bunch of interesting prompt engineering and some clever architecting. You create basically a synthetic clone of that person that is available 24 hours a day, seven days a week. How do we get that information out to everybody? Through agentic systems that basically replace you and give value to people based on whatever special knowledge you have.
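The "synthetic clone" pattern described above is essentially retrieval-augmented generation: embed the person's body of work, retrieve the passages most relevant to a question, and prepend them, along with a persona prompt, to an LLM call. Below is a minimal, self-contained Python sketch of that idea; the `embed` and `call_llm` functions and the in-memory index are toy stand-ins for a real embedding model, vector database, and LLM, not any specific vendor's API.

```python
# Minimal RAG sketch of the "synthetic clone" pattern (illustrative only).

import math

def embed(text: str) -> list[float]:
    """Stand-in for a real embedding model; hashes characters into a tiny vector."""
    vec = [0.0] * 8
    for i, ch in enumerate(text.lower()):
        vec[i % 8] += ord(ch) / 1000.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)) + 1e-9)

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call."""
    return f"(answer grounded in retrieved passages)\n{prompt[:80]}..."

# 1. "Vector database": in a real system this would be a proper vector store.
corpus = [
    "Chapter 3: how I structure a 90-day coaching engagement.",
    "Podcast transcript: pricing advice for first-time founders.",
    "Newsletter: my framework for hiring a first salesperson.",
]
index = [(doc, embed(doc)) for doc in corpus]

# 2. Retrieval + persona prompt: the "clone" answers in the person's voice,
#    grounded in their own material.
def answer_as_clone(question: str, persona: str, k: int = 2) -> str:
    q_vec = embed(question)
    top = sorted(index, key=lambda pair: cosine(q_vec, pair[1]), reverse=True)[:k]
    context = "\n".join(doc for doc, _ in top)
    prompt = f"{persona}\nUse only this material:\n{context}\nQuestion: {question}"
    return call_llm(prompt)

if __name__ == "__main__":
    persona = "You answer in the voice of the coach, citing their own material."
    print(answer_as_clone("How should I price my coaching program?", persona))
```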

Kashyap: One of the consistent themes around all of these is that while human agency in terms of repetitive tasks is reduced, the specific understanding of context behind each one is something that remains relevant and will likely continue to be relevant. My final question to you is regarding some of the challenges when we are trying to scale this technology beyond a certain point. Where do you see some of the limitations that might prevent agentic AI from scaling further? Heat is one issue. What are some of the other challenges, for example around human context, or the human element of bringing in reasoning and the other characteristics that humans provide?

Jesse: The number one thing that pops into my head is the human-machine interface problem. Currently, no one likes to talk to machines. There are a few people like me who will wake up in the morning and have a conversation with ChatGPT, but for the most part, most people aren’t interested in that. If they want to get something done, they want to talk to a person. So there’s a technological barrier to entry that’s going to have to be passed, where human beings are as comfortable talking to a machine as they are talking to a person. The technology itself is a limitation there. We just haven’t gotten there yet.

AI is crazy smart; it knows more than anybody, but it can’t figure out how many Rs are in the word strawberry. In that sense, it’s also dumber than the dumbest person. My little tiny children, who can barely talk, can count the Rs in strawberry. So there’s a certain level of native reasoning we have to get to: we have to move beyond predicting the next token in a sequence and actually have the ability to reason. You’re seeing some of that with OpenAI’s newest reasoning models, and others are doing research in those areas, but even then, if you look at how those machines reason, they are so much more inefficient than a human being. If I asked you to count the Rs in the word strawberry, you’d be like, “Oh, one, two, three, three.” It’d be almost instantaneous. You might even be able to tell me instantaneously. Whereas the AI is going to take 15 seconds to figure out a very simple reasoning problem. That is a massive limitation because right now, human beings are the best reasoning machines on planet Earth. As far as I can tell, AI can’t touch that. Reasoning is something that we use regularly. It’s what makes people valuable, generally speaking. So that’s a huge limitation.
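As a toy illustration of the gap Jesse is pointing at: counting letters is a trivial character-level operation for ordinary code, while a language model works over subword tokens and has to predict the answer rather than inspect the letters. A minimal Python sketch follows; the token split shown is illustrative, not any particular model's tokenizer.

```python
# Counting letters is a trivial character-level operation for ordinary code.
word = "strawberry"
print(word.count("r"))  # 3

# A language model, by contrast, never sees individual characters.
# It sees subword tokens, e.g. something like ["str", "aw", "berry"]
# (illustrative only; real tokenizers differ), and must predict the answer
# from patterns in training data rather than inspect the letters directly.
illustrative_tokens = ["str", "aw", "berry"]
print(sum(piece.count("r") for piece in illustrative_tokens))  # still 3 when code can see the characters
```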

The other one is just going to be the amount of power needed to go to the next level. We’re going to have a hardware problem, a physical one. I was reading an article the other day that said we might run out of sand, because we need sand to make silicon, and we need silicon to create bigger, better, smarter models. There was a prediction that sand is becoming a scarce resource and will become much more scarce as we move into the future because of all the silicon people are trying to make to keep up with the blockchain space and all the GPUs to keep up with the AI space. There’s going to be way more demand than we’ve ever had before. So the third challenge is how we build hardware that is efficient enough and good enough to use. In my mind, if you can solve those three problems, which we’re smart enough to do, then we’re going to see just crazy growth in the AI space.

Kashyap: One final question for you. This year, 2025, will probably be about agentic AI. What will 2026 be about?

Jesse: I would say reasoning AI. AI reasoning is going to be the buzzword of 2026 if I had to guess. I’m excited for machines that can reason. Some of that stuff is a little scary, maybe, but I’m excited for machines that can reason. I see the writing on the wall, man. You start reading some of the papers that are coming out where people are thinking about how to make these machines reason like a human. We’re talking about a time in the future where we may not be the dominant race on planet Earth. We may have literally just created the dominant race on planet Earth. So it’ll be fascinating to see that happen in slow motion.

Kashyap: On that dark note, no, I’m just kidding. These are our predictions. Sure, I’m excited for the future. Thank you so much, Jesse, for making the time. I will let you go back to switching on the TV and finding out who the next president of the US is going to be. So let’s get back to our TVs and probably talk very soon. Thank you so much for your time.

