DynamoGuard: A Platform for Customizing Powerful AI Guardrails

In 2023, Dynamo AI had the opportunity to enable some of the largest global enterprises to embed privacy, safety, and security into their GenAI stacks. These partnerships laid the foundation for those enterprises to launch GenAI solutions with greater speed and efficiency.
As we move into 2024, we are witnessing a wave of enterprises seeking to productionize GenAI beyond preliminary experimentation and to seriously address the security and safety challenges that come with it.
Enterprises are also weighing the impact of emerging regulations like the EU AI Act and the US Executive Order, which define a new framework of compliance requirements for GenAI. Meeting these requirements is difficult because LLMs can be prompted in nearly infinitely diverse ways, creating a virtually unbounded risk of non-compliant outputs. Efforts to enforce safety guardrails around models have so far proven difficult and unsatisfactory for enterprise risk management, making it challenging to adhere to guidelines in NIST’s Risk Management Framework or the EU’s risk-tiering paradigm.
The enterprise’s challenge with GenAI safety and security today
In our work enabling production-grade GenAI for enterprises, we have found that most LLM safety solutions (i.e., guardrail products) fall far short of emerging regulatory requirements and general safety standards. We highlight three reasons for this failure:
- Today’s guardrail offerings lack the customizability enterprises need: each enterprise has its own unique set of AI governance requirements tied to its specific LLM use-cases. Out-of-the-box LLMs are designed to adhere to a limited set of safety principles that are often too broad to tackle the edge-cases that make up the bulk of LLM compliance violations.
- Poor detection of non-compliant usage: because LLMs are aligned only to broad safety guidelines, they fail when encountering nuanced non-compliance edge-cases. For example, we found that LlamaGuard (based on the Llama2-7B architecture) failed to correctly flag 86% of prompt injection attacks from two popular prompt injection classification datasets.
- Limited ability to incorporate safety requirements from compliance and risk teams: compliance and risk teams commonly create a list of bespoke AI governance requirements (i.e., an “AI Constitution”) to promote safe LLM usage, but enterprises lack a meaningful workflow beyond limited prompt-engineering to enforce these requirements.
DynamoGuard is a significant leap forward in GenAI safety and security
DynamoGuard takes a critical step forward in addressing these gaps in AI safety. In contrast to previous guardrailing approaches, DynamoGuard meaningfully advances LLM safety by integrating the following key capabilities:
- Unprecedented customization of guardrails: Compliance teams can simply copy-and-paste their AI governance requirements into DynamoFL to enable truly customizable guardrails.
- A major leap in non-compliance detection accuracy: DynamoGuard achieves a 2-3X improvement in compliance violation detection (e.g., in the detection of prompt-injection violations) compared to leading LLMs by leveraging DynamoFL’s “Automatic Policy Optimization” (APO) technique to teach guardrails to address non-compliance edge-cases.
- A human-centric workflow for building robust guardrails: While edge-case reasoning is a powerful technique to bolster guardrail efficacy, compliance teams (human beings) still need to be closely involved in tuning and monitoring guardrails. DynamoGuard provides compliance teams with an end-to-end workflow to fine-tune AI guardrails and monitor their performance in real-time to close the gap in meeting compliance requirements.
How it works: The DynamoGuard user journey
- Compliance teams describe their AI governance policies to DynamoGuard in natural language (or simply copy and paste their existing AI governance policies into DynamoGuard).
- DynamoGuard leverages Automatic Policy Optimization (APO) to generate a series of example user-interaction edge-cases that violate AI governance policies.
- Compliance teams edit or reject these edge-case examples to refine DynamoGuard’s understanding of nuanced edge-case violations.
- DynamoGuard fine-tunes a lightweight guard model to classify the generated edge-cases.
- DynamoGuard is integrated into the enterprise's production LLM system and uses its fine-tuned lightweight guard model to flag compliance violations in LLM inputs and outputs (a minimal integration sketch follows this list).
- Compliance teams can monitor guardrail efficacy in real-time through DynamoGuard’s LLM monitoring dashboard and continue fine-tuning to strengthen their guardrails.
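To make the integration step more concrete, below is a minimal sketch of what wrapping an LLM call with a lightweight guard model could look like. This is an illustrative assumption, not DynamoGuard’s actual SDK: the `GuardClient`-style endpoint, the policy identifiers, and the response schema are all hypothetical placeholders.

```python
# Hypothetical sketch of wiring a lightweight guard model around an LLM call.
# The endpoint URL, policy IDs, and response fields below are illustrative
# placeholders, not DynamoGuard's actual API.
import requests

GUARD_ENDPOINT = "https://example.internal/guard/v1/classify"  # placeholder URL
POLICIES = ["no-prompt-injection", "no-pii-disclosure"]        # placeholder policy IDs


def is_compliant(text: str, stage: str) -> bool:
    """Ask the guard model whether `text` violates any configured policy."""
    resp = requests.post(
        GUARD_ENDPOINT,
        json={"text": text, "stage": stage, "policies": POLICIES},
        timeout=5,
    )
    resp.raise_for_status()
    return not resp.json().get("violation", False)


def guarded_completion(prompt: str, llm_call) -> str:
    """Screen the user prompt, call the LLM, then screen the model output."""
    if not is_compliant(prompt, stage="input"):
        return "This request was blocked by an AI governance policy."
    output = llm_call(prompt)
    if not is_compliant(output, stage="output"):
        return "The model's response was withheld by an AI governance policy."
    return output
```

In a production deployment, flagged interactions would also be logged so that compliance teams can review them in the monitoring dashboard described in the final step above.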
Expanding the reach of DynamoGuard with Dynamo 8B, our multilingual foundation model to democratize access to safe GenAI
Even with the constant stream of exciting updates in the LLM space, there is a relative lack of investment in non-English languages, resulting in a performance gap between English and other languages for open-source language models. We built Dynamo 8B to address this gap in multilingual LLM offerings.
We are excited about the downstream applications that Dynamo 8B will support. AI teams are struggling to address unsafe user queries and LLM outputs, which creates major compliance challenges for enterprises deploying the technology. Just as lightweight models like LlamaGuard and Phi-2 have been used as guardrail models to regulate LLM inputs and outputs, we are excited for Dynamo 8B to enable safe and compliant usage of LLMs globally across a diverse set of languages.
DynamoGuard completes Dynamo AI’s comprehensive GenAI safety and security offerings
Dynamo AI’s complete product offering seeks to provide our customers with the appropriate tools and techniques to enable blue teaming and red teaming, while also ensuring end-to-end auditability of the entire LLMOps lifecycle.
Our products to-date include:
- DynamoEval: Evaluate an unlimited number of existing closed or open-source LLMs for privacy, security, and reliability risks with 20+ different adversarial testing approaches.
- Regulation-compliant copilots: A pre-trained catalog of copilots for banking, healthcare, and life sciences that are embedded with differential privacy and optimized for cost. This ensures that AI systems are regulation-compliant without driving up your expenses.
- DynamoEnhance: Enable private and efficient federated machine learning when training AI models across distributed datasets.
Our two new product additions are:
- DynamoGuard: Enable real-time moderation of both internally hosted and third-party hosted LLMs, based on natural-language processing of your internal compliance policies. DynamoGuard then creates your AI guardrails to prevent and monitor non-compliant inputs and outputs.
- Dynamo 8B foundation model: DynamoGuard is made possible by our multilingual foundation models, which deliver unparalleled performance compared with similarly sized models. Read more about our Dynamo 8B release here!
With DynamoEval, we are introducing a unique, comprehensive, and technical approach to red teaming. Now, DynamoGuard is the first fully-customizable guardrail development and deployment platform enabling blue teaming. Leveraging both DynamoEval and DynamoGuard, AI teams can ensure they have a fully auditable pipeline from pre-deployment through post-deployment.
Learn more about our platform with a free demo.