The artificial intelligence race is no longer just about building the most powerful models. It is also about keeping them safe. In September 2025, Irregular, a rising AI security startup, announced that it had raised $80 million in funding, sending a clear signal: AI security is not a niche concern anymore; it is a central pillar of the industry’s future.
Artificial intelligence has reached a stage where the risks are as impressive as the opportunities. Models like Anthropic’s Claude 3.7 Sonnet or OpenAI’s o3 series are pushing the boundaries of what machines can do. But with these capabilities come new dangers such as data breaches, unintended emergent behaviors, or even models being manipulated in ways their creators never foresaw.
This is where Irregular steps in. The company does not just patch known issues. It anticipates the unknown. By simulating real-world attack environments and running AI-versus-AI stress tests, Irregular is positioning itself as a kind of immune system for the next generation of AI.
The $80 million round, led by Sequoia Capital and Redpoint Ventures, reflects growing awareness among investors that AI safety is no longer optional. As one analyst put it, “Just as cybersecurity became a must-have for the internet age, AI security is becoming a must-have for the AI age.”
In this article, we will explore who Irregular is, why this funding matters, and how their work could shape the future of frontier AI security.
2. Who Is Irregular? The Company Behind the Hype
Irregular is not a completely new face in the AI ecosystem. Originally known as Pattern Labs, the company rebranded as Irregular to better reflect its mission: tackling the unpredictable and irregular behaviors that advanced AI models can exhibit.
The company was founded by a team of AI researchers and security experts who saw the writing on the wall. They realized that while companies like OpenAI, Anthropic, and Google DeepMind were racing to build ever more capable models, very few players were working on systematically testing and securing these models.
At its core, Irregular is about anticipation, not reaction. Traditional cybersecurity often deals with threats after they have been discovered, patching vulnerabilities, responding to breaches. Irregular’s philosophy is different: test AI systems under extreme conditions before they ever reach the public. That way, potential failures are caught early, and models can be released more responsibly.
The leadership team is backed by top-tier investors. Heavyweights like Sequoia Capital and Redpoint Ventures bring financial strength, while individuals like Assaf Rappaport (CEO of Wiz) bring deep cybersecurity expertise. Together, they are giving Irregular not just capital but also credibility in an industry where trust is everything.
The company’s vision is ambitious: become the standard bearer for AI security, much like how companies such as FireEye and CrowdStrike became household names in cybersecurity. If AI is the new electricity, as some like to say, Irregular wants to be the circuit breaker that prevents blackouts.
3. The $80 Million Funding Round: Breaking Down the Details
The $80 million round is significant not just for Irregular, but for the AI security industry as a whole. Here is a closer look at what makes it noteworthy.
Lead Investors: The round was co-led by Sequoia Capital and Redpoint Ventures, both known for spotting future-defining startups.
Strategic Angels: One standout investor is Assaf Rappaport, CEO of Wiz, one of the fastest-growing cloud security companies. His participation signals strong alignment between cybersecurity and AI safety.
Valuation: The funding round values Irregular at about $450 million, a remarkable figure for a company still in its early stages. This valuation reflects both investor confidence and the projected size of the AI security market.
What will Irregular do with the money? According to company statements, the funding will go toward expanding technical infrastructure, hiring top-tier talent, and scaling its AI security simulations to keep pace with increasingly large models.
The message here is clear. Investors believe that AI safety is not an afterthought but a foundational requirement. Much like how cybersecurity spending exploded after the rise of the internet, AI security spending is likely to become one of the fastest-growing verticals in the AI ecosystem.
4. Why Securing Frontier AI Models Matters
To understand why Irregular’s work is important, you need to grasp what frontier AI models are. These are not your average chatbots or basic recommendation engines. Frontier models represent the cutting edge of AI capabilities, systems that can reason, generate, and interact at levels approaching general-purpose intelligence.
But here is the catch. The more powerful a model becomes, the harder it is to predict. Researchers call this emergent behavior, when a system begins to show abilities or vulnerabilities that were not explicitly designed. Think of it like giving a student all the tools to write essays, only to find out they can also write malicious code you never taught them.
The risks are huge.
- Misinformation: A model could generate highly convincing fake news.
- Cyber threats: AI might unintentionally help attackers find vulnerabilities in code.
- Bias amplification: Instead of reducing prejudice, an unsecured model could spread harmful stereotypes.
- Autonomy gone wrong: As models gain the ability to act on behalf of users, small misalignments could lead to big consequences.
Irregular’s mission is to get ahead of these issues. By simulating both human-on-AI and AI-on-AI interactions, they stress test models in ways traditional security cannot. The goal is not just to detect vulnerabilities but to anticipate what could go wrong before it happens in the real world.
In short, if the last decade was about making AI useful, the next one will be about making it safe.
5. Inside Irregular’s Technology: The SOLVE Framework
At the heart of Irregular’s work is something they call the SOLVE framework, a structured approach to evaluating AI vulnerabilities. While details are still emerging, here is what we know.
Systematic Testing: SOLVE does not just test models against known risks. It sets up simulated environments where models are pushed into unusual scenarios, often ones developers never considered.
AI-on-AI Red Teaming: Unlike traditional red teaming, where human experts try to “break” a system, Irregular uses AI systems as both attackers and defenders. This allows them to scale testing far beyond what human teams alone could achieve.
Continuous Monitoring: Security is not a one-time event. Models evolve through updates and retraining. SOLVE allows for ongoing evaluation, making sure that new versions do not reintroduce old vulnerabilities.
Compared to traditional security, which might focus on patching a server vulnerability or encrypting data, AI security is about behavior. Models are not static. They learn, adapt, and sometimes act unpredictably. That makes frameworks like SOLVE not just useful, but essential.
By providing a measurable way to evaluate model safety, Irregular is helping AI labs answer the big question: “Is this model ready for real-world deployment?”