Swiss Startup Lakera Secures $20 Million Series A to Protect Generative AI from Malicious Threats

Swiss Startup Lakera Secures $20 Million Series A to Protect Generative AI from Malicious Threats

Lakera, a Zurich-based startup specializing in safeguarding generative AI applications from malicious prompts and other threats, has raised $20 million in a Series A funding round. The round was led by European venture capital firm Atomico, with additional participation from Dropbox's VC arm, Citi Ventures, and Redalpine.

Generative AI, exemplified by popular applications like ChatGPT, has revolutionized the AI landscape but also raised significant concerns in enterprise settings, particularly around security and data privacy. Large language models (LLMs), the backbone of generative AI, can be vulnerable to "prompt injections"—maliciously crafted instructions that trick the AI into performing unintended actions, such as leaking confidential data or granting unauthorized access.

Lakera, founded in Zurich in 2021, officially launched last October with $10 million in initial funding. The startup aims to protect organizations from such security weaknesses. Its flagship product, Lakera Guard, functions as a "low-latency AI application firewall," securing traffic to and from generative AI applications, including those using models like OpenAI’s GPT-X, Google’s Bard, Meta’s LLaMA, and Anthropic’s Claude.

Lakera Guard is built on a comprehensive database incorporating insights from publicly available datasets, in-house machine learning research, and interactive user experiences. One such experience is a game called Gandalf, designed to improve the system's defenses by inviting users to attempt to hack it. These interactions help Lakera develop a "prompt injection taxonomy" to categorize and better defend against various types of attacks.

Co-founder and CEO David Haber emphasized Lakera’s proactive approach: “We are AI-first, building our own models to detect malicious attacks such as prompt injections in real time. Our models continuously learn from large amounts of generative AI interactions what malicious interactions look like. As a result, our detector models continuously improve and evolve with the emerging threat landscape.”

In addition to protecting against prompt injections, Lakera Guard includes content moderation features. These specialized models scan prompts and outputs for toxic content, including hate speech, sexual content, violence, and profanities, making them particularly useful for public-facing applications like chatbots.

With the new funding, Lakera plans to expand its global presence, particularly in the U.S., where it already serves high-profile clients like AI startup Respell and Canadian unicorn Cohere. The company aims to cater to a growing demand from large enterprises, SaaS companies, and AI model providers seeking secure AI applications.

“Large enterprises, SaaS companies and AI model providers are all racing to roll out secure AI applications,” Haber noted. “Financial services organizations understand the security and compliance risks and are early adopters, but we are seeing interest across industries. Most companies know they need to incorporate GenAI into their core business processes to stay competitive.”

Lakera’s Series A funding marks a significant step in enhancing AI application security, addressing a critical need in the rapidly evolving field of generative AI.