Virtue AI Raises $30 Million To Guard AI Safety Like A Hawk

Recently, the company used its advanced red-teaming evaluation to test OpenAI’s GPT-4.5 and Anthropic’s Claude 3.7 on key security and safety dimensions

“Virtue AI is shaping the future of GenAI security.” – Lip-Bu Tan, CEO, Intel

In Silicon Valley offices, AI executives often face a common crisis: their company’s AI chatbots or models malfunction. These systems may generate harmful content, fabricate product information, or even disclose private customer data. The situation can quickly escalate into a legal and public relations disaster.

Meanwhile, in a nearby lab, a different AI system monitors this situation. This AI isn’t designed to create content, but to safeguard. This is the work of Virtue AI, an AI cybersecurity startup founded on the principle that the most significant AI threats aren’t just external; they can come from the AI models themselves.

Founded by renowned AI researchers Bo Li, Dawn Song, Carlos Guestrin, and Sanmi Koyejo, Virtue AI combines academic depth with startup execution. Their platform spans real-time safety enforcement, red teaming and evaluation, and secure agent deployment for high-stakes industries like healthcare and finance.

Bo Li, co-founder and CEO of Virtue AI, said, “We saw companies struggling with the same challenges repeatedly—subpar evaluation methods, inefficient guardrails, and manual processes that created bottlenecks in AI deployment pipelines.” 

Guarding AI from Itself

Many current AI security solutions are narrowly focused: they offer only a handful of specialized features, such as red-teaming or guardrails, or they cover only specific large language models. Virtue AI, by contrast, is an all-encompassing platform that monitors, assesses, and safeguards AI models, applications, and agents across all modalities, including text, image, video, audio, and code.

Recently, the company used its latest red-teaming evaluation to test OpenAI’s GPT-4.5 and Anthropic’s Claude 3.7 on key security and safety dimensions.

Virtue AI offers a three-part platform designed to safeguard AI systems across all modalities.

The first pillar, VirtueGuard, is a real-time moderation and enforcement engine. It monitors AI model outputs for harmful content, hallucinations, policy violations, and sensitive data leakage. By intercepting unsafe or non-compliant responses before they reach users, it protects companies from PR disasters, regulatory blowback, and ethical breaches, especially in high-stakes sectors like healthcare, finance, and education.

The second, VirtueRed, is a toolkit that stress-tests AI models with adversarial prompts and bias detection, simulating real-world attacks. It helps organizations find vulnerabilities and benchmark model safety, which is crucial for regulatory compliance and risk management.

Lastly, VirtueAgent provides enterprises with ready-to-deploy AI agents designed with safety at their core. Built for secure use in sensitive environments, these agents are reinforced with guardrails to prevent unpredictable or unauthorized behavior. Whether used in customer service, financial advising, or healthcare assistance, VirtueAgent gives organizations a trusted way to scale AI without the risk of autonomy going wrong.
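To make the guardrail idea concrete, here is a minimal, purely illustrative sketch of the output-interception pattern described above. Virtue AI has not published this API; every name here (`check_output`, `GuardrailResult`, `guarded_reply`) is hypothetical, and the regex blocklist stands in for the ML-based safety classifiers a production guardrail would actually use.

```python
# Illustrative sketch only -- not Virtue AI's actual API.
# Shows the general pattern: screen a model's output before
# it ever reaches the user, and substitute a safe fallback.
import re
from dataclasses import dataclass

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str = ""

# Hypothetical blocklist standing in for a real safety classifier.
SENSITIVE_PATTERNS = {
    "pii_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
}

def check_output(text: str) -> GuardrailResult:
    """Screen a model response for sensitive-data leakage."""
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(text):
            return GuardrailResult(False, f"blocked: {label}")
    return GuardrailResult(True)

def guarded_reply(model_output: str,
                  fallback: str = "[response withheld]") -> str:
    """Return the model output only if the guardrail allows it."""
    result = check_output(model_output)
    return model_output if result.allowed else fallback
```

The key design point is that the guardrail sits between the model and the user, so an unsafe generation is replaced before delivery rather than retracted after the fact.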

Funding to Build AI’s Safety Net

This all-in-one approach has attracted the attention of investors. Virtue AI has successfully raised $30 million across its seed and Series A rounds, led by Walden Catalyst Ventures and Lightspeed Venture Partners, with participation from TenEleven Ventures, Conviction, and Nucleate. The funding underscores growing demand for AI security infrastructure as enterprises rush to adopt large language models and autonomous agents.

Axios reports that Virtue AI will use its new funding to hire 30 additional business development and engineering employees by the end of the year. The company also plans to launch new features over the next 12 months that will enable its tools to “protect most AI product layers.”

The company boasts an impressive roster of star clientele, including Uber, Glean, and Intel, as well as Microsoft Research. As Glean founder and CEO Arvind Jain said, “Our collaboration with Virtue AI helps us stay ahead of emerging threats and deliver on our promise to keep users in control and their data protected.”


Upasana Banerjee