Ghostboard pixel

Subscribe to Our Newsletter

Success! Now Check Your Email

To complete Subscribe, click the confirmation link in your inbox. If it doesn't arrive within 3 minutes, check your spam folder.

Ok, Thanks

Lakera Guard protects LLMs from prompt injection and other risks

Lakera launched with $10M in funding that it plans to invest in protecting enterprises from some of the best-known LLM exploits, such as prompt injection and hallucinations. Furthermore, as part of their work on AI security, the startup co-founders served as advisors for the EU AI Act.

Ellie Ramirez-Camara profile image
by Ellie Ramirez-Camara
Lakera Guard protects LLMs from prompt injection and other risks

Prompts are so essential in the LLM ecosystem that the field has now professionalized. Ideally, prompt engineering exists so chat-based interactions with LLMs are as accurate and effective as possible. Unfortunately, it turns out that just as prompts can be engineered to obtain the best possible outcome, they can just as well be crafted for more questionable purposes, such as revealing sensitive data, producing harmful content, and even helping produce malware or finding exploits in code. Even more worrisome is that although it is common knowledge that all systems have weaknesses that can be exploited, generative AI is so recent that an accurate understanding of all the possible user interactions with the system does not yet exist.

Enter Lakera, a Swiss startup on a mission to protect enterprises from exploits such as prompt injection and data leakage. Its flagship product is Lakera Guard, a security platform compatible with all the biggest names in the LLM field and deployable with just a single line of code. Guard is powered by a database comprising open-source data sets, the products of their in-house research, insights from LLM apps, and their AI security game, Gandalf, where users are challenged to obtain a secret password from Gandalf, who levels up each time the user coaxes the password, making Gandalf progressively harder to trick. The game is accessible to anyone who wants to play, and user inputs are collected and incorporated into the company database.

Lakera is focused on researching prompt injection, but its security services do not stop there. Among its offerings, one can also find toxic language detection, another frequent vulnerability for LLMs trained on a wide range of content that usually includes language picked up from the less secure corners of the internet. Lakera CEO and co-founder David Haber recently revealed in an interview with TechCrunch that the company is working with a developer of generative AI applications for kids to ensure that the apps do not accidentally serve content unsuitable for children.

It has become increasingly evident that AI development is not waiting for anybody and will not slow down soon. Fortunately, the attempts to understand and regulate AI development are also starting to catch up, with the EU AI Act being the first set of regulations made into law. Lakera co-founders served as advisors for the technical foundations of the Act, providing the much-needed perspective of a developer team. This is imperative, considering that when the legislation is introduced, enacting the regulations will fall upon developers currently putting LLMs into production.

On the other hand, Lakera is launching with $10 million in funding from the private sector and has confirmed that its customer roster includes AI company Cohere, along with a leading enterprise cloud platform and one of the world’s largest cloud storage services. With this, it looks like Lakera is off to a great start, and hopefully, we will keep hearing about their crucial work in the foreseeable future.

Ellie Ramirez-Camara profile image
by Ellie Ramirez-Camara
Updated

Data Phoenix Digest

Subscribe to the weekly digest with a summary of the top research papers, articles, news, and our community events, to keep track of trends and grow in the Data & AI world!

Success! Now Check Your Email

To complete Subscribe, click the confirmation link in your inbox. If it doesn’t arrive within 3 minutes, check your spam folder.

Ok, Thanks

Read More