Guardrails for UDAAP

As enterprises shift toward consumer-facing GenAI solutions, a common question we receive at Dynamo AI, especially from consumer-facing financial services organizations, is: what controls should we test to mitigate UDAAP risk?
These enterprises, including their operational units (such as business and product teams, technology, risk management spanning model, technology, and data risk, compliance, and legal departments), are developing internal-facing GenAI tools to enhance employee productivity. But it's clear that the GenAI governance components they're establishing, including risk management, controls, and technology protocols, are laying a foundation for future consumer-facing deployments.
UDAAP, or Unfair, Deceptive, or Abusive Acts or Practices, is aimed at mitigating consumer harm by prohibiting misconduct by financial services organizations. UDAAP is a legal standard, referenced in the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) of 2010¹, which grants rule-making authority to the Consumer Financial Protection Bureau (CFPB); the Federal Trade Commission (FTC) holds related authority under separate acts of Congress. The CFPB has a broad mandate to define UDAAP and to exercise its powers through enforcement. Over the years, published guidance and the precedent set by CFPB enforcement actions have shaped how financial institutions understand and apply this standard.
"When a person’s financial life is at risk, the consequences of being wrong can be grave."
– Consumer Financial Protection Bureau
Now, with the use of GenAI pivoting toward consumer-facing scenarios, UDAAP standards are front and center for regulators and financial institutions. As the CFPB has noted, "When a person’s financial life is at risk, the consequences of being wrong can be grave." This is particularly true when a customer engages with GenAI to access information about their financial health or has time-sensitive questions about financial products or services. There is heightened sensitivity from past examples of customer-facing technology deployments that left consumers unable to reach a human customer service representative or receive timely answers to their questions, the so-called ‘doom loop’ scenarios.
The risks of UDAAP infringement when deploying GenAI are broad, but a few key concerns stand out:
- Risks surrounding the ability to receive factual answers about financial products or services for all requests in a timely manner
  - GenAI systems are known to ‘hallucinate’ and may deliver incorrect information to a consumer about a product or their accounts
- Risks surrounding bias and discrimination
  - Regulators are concerned that consumers will receive different treatment or advice based upon identified or derived personal information, such as their location or protected class information
- Risks of deceptive marketing and pressure tactics related to a financial product or service
  - GenAI systems may use language in a harmful way, pressuring consumers to take action
- Risks of proceeding with a financial product or service without consent or understanding
  - Misinterpretation of a consumer's actions may lead to a product decision being executed without explicit consent
- Risks around user experience that impede timely access to critical information
  - If a human, or the correct information, is not accessible, the consumer may experience harm
Dynamo AI is at the forefront of building, testing, and deploying controls that mitigate the types of risks identified under UDAAP. And while a holistic AI governance and risk management control ecosystem spanning people, process, and technology is required, targeted guardrails deployed within DynamoGuard play a critical, front-line control role in mitigating many of the UDAAP risks regulators and consumers have highlighted.

We've observed the growing need for four core categories of guardrails on consumer-facing GenAI to mitigate UDAAP risks (a simplified sketch of how such policies might be applied follows the list):
- Guardrails to promote clear, understandable, and helpful language
  - Enforcing clear and simple communication, ensuring models do not use slang, overly complex language structures, or other hard-to-understand responses
  - Ensuring that promises are not made to consumers through the course of an interaction unless approved by the financial institution
- Guardrails that facilitate product transparency
  - Limiting marketing of a product or service, or the use of language that may appear to market a product or service
  - Enforcing transparency when providing information on financial products, such as ensuring all critical product or service details and access criteria are shared when describing a product or service
  - Prohibiting coercive language to avoid applying pressure tactics to consumers
- Guardrails that deter discrimination
  - Mitigating discriminatory language in a prompt or response
  - Prohibiting the discussion of protected class terms or other sensitive consumer information
- Guardrails that block or redirect sensitive topics
  - Blocking the discussion of complaints or disputes and rerouting these topics (and others of similar concern) to appropriate customer service agents
  - Deploying controls that ensure an FAQ page is not repeatedly sent to a customer
  - Prohibiting questions (or responses) that ask for or provide advice or recommendations
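
To make the categories above concrete, here is a minimal sketch of how a response-screening layer could apply such policies before text reaches a consumer. Everything in it is illustrative: the `GuardrailPolicy` structure, the policy names, and the keyword heuristics are hypothetical placeholders rather than DynamoGuard's actual interface, and a production guardrail would rely on trained natural-language policy models instead of keyword matching.

```python
# Illustrative sketch only. All names and heuristics here are hypothetical
# placeholders, not DynamoGuard's API; real guardrails would be backed by
# trained policy models rather than keyword checks.
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List


class Action(Enum):
    ALLOW = "allow"        # pass the text through unchanged
    BLOCK = "block"        # suppress the text entirely
    REDIRECT = "redirect"  # route the consumer to a human agent


@dataclass
class GuardrailPolicy:
    name: str
    action: Action
    # Stand-in detector: returns True when the policy is violated.
    violates: Callable[[str], bool]


def _mentions_any(terms: List[str]) -> Callable[[str], bool]:
    """Toy detector; real deployments would use fine-tuned classifiers."""
    return lambda text: any(t in text.lower() for t in terms)


POLICIES = [
    # 1. Clear, understandable, and helpful language
    GuardrailPolicy("no_unapproved_promises", Action.BLOCK,
                    _mentions_any(["we guarantee", "i promise"])),
    # 2. Product transparency / no pressure tactics
    GuardrailPolicy("no_pressure_tactics", Action.BLOCK,
                    _mentions_any(["act now", "limited time offer"])),
    # 3. Deter discrimination
    GuardrailPolicy("no_protected_class_discussion", Action.BLOCK,
                    _mentions_any(["race", "religion", "national origin"])),
    # 4. Block or redirect sensitive topics
    GuardrailPolicy("route_complaints_to_human", Action.REDIRECT,
                    _mentions_any(["complaint", "dispute"])),
]


def apply_guardrails(model_response: str) -> str:
    """Screen a candidate model response before it reaches the consumer."""
    for policy in POLICIES:
        if policy.violates(model_response):
            if policy.action is Action.REDIRECT:
                return ("It looks like this is best handled by a customer "
                        "service representative. Connecting you now.")
            if policy.action is Action.BLOCK:
                return ("I'm not able to help with that here, but a "
                        "representative can assist you.")
    return model_response


if __name__ == "__main__":
    print(apply_guardrails("Act now! This limited time offer won't last."))
    print(apply_guardrails("Your checking account has no monthly fee."))
```

In practice, the same screening would typically run on inbound user prompts as well as model responses, and a redirect action would hand the conversation off to a live agent queue rather than returning canned text.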
Applying guardrails is one critical part of a comprehensive strategy to mitigate the full breadth of UDAAP risk. Use cases should be vetted through a GenAI risk assessment process, with the appropriate process-risk-control framework established, tested, and deployed.
While each guardrail development journey will be different, tailored to the size and complexity of the use case and enterprise, there are clear thematic guardrail requirements emerging.
It's an exciting time to expand the depth and breadth of financial services access and support to consumers everywhere, and to do so in a responsible way, with the appropriate guardrails to mitigate the risks we, as consumers, are concerned about.
Learn more about how Dynamo AI can help you deploy responsible GenAI across financial services. Schedule a product demo.
- Dodd-Frank Act, Title X, Subtitle C, Sec. 1036; PL 111-203 (July 21, 2010)