NSFW AI Chatbots Keep Getting Funded Despite Their Dark Side

What’s even more troubling is how these platforms, often marketed to young users, lack sufficient safety measures

The digital world often feels like an extension of one's own life. People share their thoughts, their lives, and sometimes even their deepest secrets online. But with every step into the virtual space, there is an unseen shadow lurking just behind the glow of our screens. Cyberstalking is one such shadow: a creeping form of harassment that hides in the anonymity of the internet, silently making its way into people's lives.

Over the years, this form of abuse has taken many shapes, from repeated unwanted messages and threats to the use of personal information for manipulation or intimidation. As technology continues to evolve, cyberstalking has become an increasingly complex issue, requiring a combination of legal, social, and technological responses to protect victims and hold perpetrators accountable.

Tools for Connection or Manipulation?

In the U.S., several high-profile cases have underscored the severity of the issue. The case of eBay employees charged with conspiracy to commit cyberstalking and witness tampering, for example, shocked the public. Members of the company's security team turned their attention to a couple who had criticized eBay in an online newsletter, and their harassment escalated over time to sending the couple disturbing packages and threats. The case highlighted not only the danger of cyberstalking but also its potential to be carried out by people in positions of power.

There are also heartbreaking stories of sexual exploitation and psychological manipulation, such as instances where victims were coerced into compromising situations through persistent online abuse. And as technology advances, so does the sophistication of the tactics employed by cyberstalkers—making it even harder for authorities to track down perpetrators and for victims to escape the harassment.

AI—typically seen as a tool for efficiency, convenience, and innovation—has unfortunately found itself on the wrong side of this equation. While AI’s benefits are undeniable, its potential for misuse has rapidly expanded, making it a double-edged sword. Platforms like CrushOn.AI and JanitorAI allow users to create customizable chatbots that can generate sexually explicit or suggestive content. Initially marketed for companionship or entertainment, these platforms are now being hijacked by cyberstalkers. The ease with which AI tools can be used to craft harmful content speaks volumes about the dark undercurrents running through some corners of the internet. And the consequences are devastating.

Take the case of James Florence from Massachusetts, who used AI platforms to create targeted chatbots aimed at harassing a specific victim. Florence generated sexually suggestive and threatening content, using the bot to manipulate his victim psychologically. This story is a grim illustration of how AI—when misused—becomes a tool for emotional and psychological abuse. The danger lies not just in the harmful content but in the ease with which it can be created, tailored to exploit individual vulnerabilities.

What's even more troubling is how these platforms, often marketed to young users, lack sufficient safety measures. By 2025, more than 100 AI companion apps are available, many of them free and unregulated, creating fertile ground for abuse. The fact that users, especially minors, can access these platforms without adequate safeguards is a recipe for disaster. Without proper monitoring, the ability to create AI companions that are emotionally resonant and highly personalized can be weaponized to manipulate and harass.

The FBI has already issued a warning about the rise of AI-enabled cybercrimes, including phishing, social engineering attacks, and voice and video cloning scams. As AI continues to evolve, so too will the threats posed by its malicious use. The real danger isn’t just the technology itself—it’s the human element. Cyberstalkers exploit our behaviors, vulnerabilities, and interactions to cause harm, making it all the more crucial for us to consider both the technological and ethical dimensions of AI’s power.

Many AI startups have emerged to develop chatbots that help businesses improve their customer engagement. Platforms like BotPenguin and Botsify provide businesses with the tools to create intuitive, automated chatbots without requiring any coding expertise. These AI-powered chatbots are a great example of how technology can drive efficiency and enhance user experience in professional settings. But there’s a darker side to this tech. Platforms like CrushOn.AI and Character.AI are designed to provide emotionally personalized experiences, from casual companionship to intimate conversations. While some users might seek harmless fun or entertainment, the personalized nature of these platforms makes them ripe for exploitation.

AI’s Emotional Grip

AI chatbots are creating deeply personal and sometimes dangerous emotional attachments. A poignant example comes from the tragic case of Sewell Setzer III, a 14-year-old who reportedly sent his last message to a chatbot before taking his own life. The young boy engaged in highly sexualized conversations with the AI, and his death prompted a wrongful death lawsuit. This story speaks to the potential harm of AI chatbots—especially when they’re used inappropriately or without adequate safeguards.

Despite all the controversies surrounding these platforms, the reality is that some of these NSFW AI chatbot companies continue to attract significant funding. Character.AI, which raised $150 million in Series A funding in 2022, is just one example of a company gaining recognition and financial backing despite its controversial content. Similarly, AI Dungeon, a text-based AI game powered by Latitude, has raised millions in funding, including an $8.5 million Series A round in 2021. Both platforms cater to users who are drawn to the personalized and immersive experiences these bots offer, but without sufficient regulation, they also provide a breeding ground for abuse.

In response to the growing concern over AI’s role in cyberstalking, experts are raising alarms. James Steyer, founder and CEO of Common Sense Media, emphasized the need for increased awareness and regulations to ensure that vulnerable individuals—especially children—are protected from the harms of generative AI. His organization’s report, “The Dawn of the AI Era,” reinforces the idea that while generative AI is powerful, we must tread carefully, ensuring that its deployment doesn’t expose users to exploitation.

As one regular user of NSFW AI chatbots commented on Reddit, "Janitor LLM is awful, the replies feel robotic, are way too long and 'Shakespearian' and it constantly tries to end the scene."
