Grant / October 2023

LLM Canary Open-Source Security Benchmark Tool

Generative AI is rapidly expanding and poised to revolutionize multiple industries. The surge in adoption has led to an increased use of pre-trained Large Language Models (LLMs), but with it comes the challenge of understanding their security implications. The combination of early-stage projects and overwhelming interest have resulted in poor security posture. While many developers are still new to LLMs, speed to deployment has taken precedence, often overshadowing safety.

The LLM Canary Project is an open-source initiative to address the security and privacy challenges within the LLM ecosystem. We are building a user-friendly benchmarking tool to detect the OWASP Top Ten for LLM vulnerabilities, and evaluate the security of customized and fine-tuned LLMs.

LLM Canary holds substantial potential impact; streamlining vulnerability detection and reporting, and highlighting security fundamentals. It can augment performance benchmarks and aligns with major research and policy organizations’ objectives; influencing R&D focus areas, and providing an impact assessment for deployment readiness.

This project will enhance AI security dialogue and offer objective metrics for discussion. It can serve as a valuable resource for organizations like the CLTC Citizen Clinic, to help ensure that AI tools are securely constructed.

The LLM Canary Project seeks to elevate the standards of AI development and contribute to a safer, more responsible AI future.

  • Empower developers to produce secure AI products
  • Enable actionable insights to respond to evolving AI security requirements
  • Explore transparent and reliable benchmarks for LLMs

In a world where AI is becoming increasingly prevalent, prioritizing security is not just a choice; it’s a necessity. LLM Canary fills a crucial gap by providing efficient, benchmarked LLM testing capabilities to accelerate our collective journey towards trustworthy AI systems and responsible innovation.