OpenAI Acquires AI Security Startup Promptfoo

Promptfoo develops tools that help companies test AI systems for security vulnerabilities during development.

Share

OpenAI has announced plans to acquire Promptfoo, a startup focused on securing large language models, as the company moves to strengthen safety protections around enterprise AI agents.

The San Francisco-based AI company said Promptfoo’s technology will be integrated into OpenAI Frontier, its enterprise platform designed to build and manage AI-powered assistants, often referred to as AI coworkers.

The deal reflects growing pressure on AI developers to demonstrate that autonomous systems can operate safely as businesses increasingly deploy them to handle real-world workflows.

Promptfoo, founded in 2024 by Ian Webster and Michael D’Angelo, develops tools that help companies test AI systems for security vulnerabilities during development. Its platform allows organisations to simulate attacks and evaluate how models behave under potentially harmful prompts or adversarial conditions.

According to the company, its tools are already used by more than 25% of Fortune 500 companies.

Financial terms of the acquisition were not disclosed. Data from PitchBook shows Promptfoo had raised around $23 million since its founding and was valued at approximately $86 million following its most recent funding round in July 2025.

OpenAI said Promptfoo’s capabilities will enable Frontier to perform automated red-teaming of AI agents, assess agent-driven workflows for potential vulnerabilities, and monitor activity for security and compliance risks.

“As enterprises deploy AI coworkers into real workflows, evaluation, security and compliance become foundational requirements,” OpenAI said in a blog post announcing the acquisition.

“Enterprises need systematic ways to test agent behaviour, detect risks before deployment, and maintain clear records to support oversight, governance and accountability over time.”

With Promptfoo’s tools integrated into Frontier, organisations will be able to run automated security tests directly within the platform. These tests can identify vulnerabilities such as prompt injections, jailbreak attempts, data leakage, misuse of external tools, and other forms of out-of-policy behaviour.

ALSO READ: OpenAI Launches GPT-5.4, Claims Higher Results on Multiple Fronts

Staff Writer
Staff Writer
The AI & Data Insider team works with a staff of in-house writers and industry experts.

Related

Unpack More