Interview

Promptfoo raises $18.4M Series A from Insight Partners and a16z to secure AI applications at the enterprise layer

Jul 29, 2025 with Ian Webster

Key Points

  • Promptfoo raises $18.4M Series A from Insight Partners and a16z to automate security testing for AI applications, targeting a gap where enterprises bear full responsibility for model behavior in production.
  • The startup deploys adversarial agents trained on uncensored models to probe applications across 70-plus risk categories, stress-testing guard rails at scale manual QA cannot match.
  • Fortune 50 companies are sitting on hundreds of internal AI prototypes blocked from production by reputational and liability risk, creating demand for systematic application validation before deployment.
Promptfoo raises $18.4M Series A from Insight Partners and a16z to secure AI applications at the enterprise layer

Promptfoo has closed an $18.4 million Series A led by Insight Partners with participation from a16z, targeting what founder Ian Webster describes as a critical but underserved layer of enterprise AI deployment: application-level security.

The company's core argument is that foundation model providers like OpenAI and Anthropic have financial incentive to prevent their models from producing harmful or offensive outputs, but bear no responsibility for how developers integrate those models into production systems. When a model is connected to a PII database, an external API, or an agentic workflow, the security obligations shift entirely to the enterprise builder, and most are unprepared.

The company that we're building is called Promptfoo. We are building security tools for Genai applications... The big news today is that we have raised a Series A — $18.4M from Insight Partners, with a16z also participating. The thing that scares a lot of large companies working with LLMs are things like PII leaks, tool misuse through agents, and softer issues like the LLM recommending competitors.

Promptfoo's approach is adversarial by design. The company trains uncensored models to behave as malicious or misbehaving users, then deploys agents built on those models to probe applications across 70-plus risk categories, including PII leakage, tool misuse through agents, competitor recommendations, and out-of-scope advice. The system generates thousands of attack conversations to stress-test guard rails at a scale that manual QA cannot match.

Webster brings direct operator experience to the problem. Before founding Promptfoo, he led generative AI engineering and product at Discord, shipping AI features to hundreds of millions of users. He frames manual testing as the industry's current default, noting it covers only a fraction of the actual attack surface given that, as he puts it, the attack surface of an AI application is "all of human language and then some."

The commercial opportunity sits inside a specific bottleneck. Webster claims Fortune 50 companies are sitting on hundreds of internal AI prototypes they have not pushed to production, citing reputational and liability risk as the primary blocker. High-profile incidents reinforce the concern, including Air Canada's chatbot committing to an out-of-policy refund and Microsoft's early Bing persona, Sydney, exhibiting erratic behaviour at scale. The argument is that enterprise AI adoption will stall unless companies can systematically validate application behaviour before going public.

On the closed versus open-source security question, Webster says large enterprise customers currently prefer closed-source models, but performance per dollar is the driver rather than any perceived security advantage. The security gap, he argues, exists equally across both paradigms at the application integration layer, which is precisely where Promptfoo operates.

Every deal, every interview. 5 minutes.

TBPN Digest delivers summaries of the latest fundraises, interviews and tech news from TBPN, every weekday.