05. AI Guardrails Overview

5.1 What is a Guardrail?
What is a Guardrail in the context of this Playbook? Guardrails are lightweight control mechanisms that translate organizational policies into actionable controls governing the inputs and outputs of an AI system. Guardrails can address a variety of legal, regulatory, and organizational risks and are applicable to a broad set of use cases. By moderating, blocking, or flagging interactions, guardrails align AI system behavior with policy and mitigate those risks.
Guardrail implementations range from heuristic rules to sophisticated machine learning models; regardless of implementation, their goal is the same: ensuring AI system compliance. It is critical that guardrail development occurs in parallel with AI system development rather than as an afterthought. Effective guardrail controls also require substantial evaluation of guardrail performance and continual monitoring of guardrail effectiveness.
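To make the moderate/block/flag distinction concrete, the sketch below shows a minimal heuristic guardrail. All names here (`GuardrailAction`, `Rule`, `HeuristicGuardrail`) are illustrative assumptions, not part of any product API; a production system would typically replace the substring matching with regex rules or a trained classifier.

```python
from dataclasses import dataclass
from enum import Enum


class GuardrailAction(Enum):
    """Possible dispositions for an input or output."""
    ALLOW = "allow"
    FLAG = "flag"    # let the interaction through, but record it for review
    BLOCK = "block"  # stop the interaction entirely


@dataclass
class Rule:
    pattern: str             # substring to match (stand-in for a regex or ML model)
    action: GuardrailAction  # what to do on a match
    reason: str              # why the policy triggers


class HeuristicGuardrail:
    """Applies ordered policy rules to text; the first matching rule wins."""

    def __init__(self, rules: list[Rule]):
        self.rules = rules

    def evaluate(self, text: str) -> tuple[GuardrailAction, str]:
        lowered = text.lower()
        for rule in self.rules:
            if rule.pattern in lowered:
                return rule.action, rule.reason
        return GuardrailAction.ALLOW, "no policy rule matched"


# Hypothetical policy: block credential-seeking inputs, flag possible PII mentions.
rules = [
    Rule("password", GuardrailAction.BLOCK, "credential-seeking input"),
    Rule("ssn", GuardrailAction.FLAG, "possible PII reference"),
]
guard = HeuristicGuardrail(rules)

action, reason = guard.evaluate("What is the admin password?")
# action is GuardrailAction.BLOCK; reason explains which policy fired
```

The same `evaluate` call can be run on model outputs as well as user inputs, which is how a single set of policy rules governs both sides of an AI interaction.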
DynamoGuard enables enterprises to define guardrails based on natural-language rules or requirements. We refer to these as policy-based guardrails. Each guardrail is implemented using proprietary Small Language Models (SLMs) that ensure high performance and minimal overhead.
