03. The AI Risk Management Imperative

3.1 The AI Risk Management Imperative
Enterprises witnessed a significant surge in global AI-related guidance and regulation in 2024. In the US alone, state policymakers introduced close to 700 pieces of AI legislation,[1] and some states (Colorado and Utah) passed targeted, use-case-specific AI laws. While 2025 promises to continue this trend, early regulatory movers (e.g., the EU AI Act) will transition into testing mode as key provisions become enforceable.[2] Compliance expectations will continue to arise from a broad swath of sources, from AI-focused regulatory requirements to existing model, privacy, or consumer-protection rules that apply to targeted use cases, wherever an enterprise must demonstrate compliance.

A Regulatory Compliance Journey
Dynamo AI’s series of Playbooks contains ‘best practice’ operating requirements to establish AI controls and facilitate compliance with existing and emerging regulatory guidance. Global enterprises are currently navigating the Regulatory Operating Model phase of the AI regulatory compliance journey, in which the foundation of regulatory expectations (and the risk tolerances that follow from them) is being defined. This phase will give way to two more: Standards Development and Best Practice Evolution. In the Standards Development phase, industry helps shape best-practice standards that in turn influence subsequent policy. The Best Practice Evolution phase is critical for AI use case advancement, particularly for regulated entities: here, continued control implementation is refined and strengthened through execution in order to satisfy those standards. The detail in Dynamo AI’s Playbooks is most valuable in these last two phases, as organizations look to implement and strengthen AI oversight.
Playbook Alignment with Model Risk Management Guidance
Another critical source of AI control expectations is model risk management guidance, particularly from the financial services industry. This guidance details how a model should be evaluated, implemented, and monitored. Dynamo AI’s series of Playbooks incorporates key tenets from model risk management guidance (as seen in the following table) and delivers best practices that satisfy, or mitigate the risks arising from, many of the documented AI model requirements.
SR11-7 | Federal Reserve, USA
Section IV. Model Development, Implementation, and Use
- Model testing includes checking the model’s accuracy, demonstrating that the model is robust and stable, assessing potential limitations, and evaluating the model’s behavior over a range of input values.
- It should also assess the impact of assumptions and identify situations where the model performs poorly or becomes unreliable.
- Testing should be applied to actual circumstances under a variety of market conditions, including scenarios that are outside the range of ordinary expectations, and should encompass the variety of products or applications for which the model is intended.
- Extreme values for inputs should be evaluated to identify any boundaries of model effectiveness.
- An understanding of model uncertainty and inaccuracy and a demonstration that the bank is accounting for them appropriately are important outcomes of effective model development, implementation, and use.
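The SR 11-7 expectations above — testing over a range of input values, including extremes outside ordinary conditions, to find the boundaries of model effectiveness — can be sketched in code. This is a minimal illustration, not a prescribed method: the `score` model, the `probe_effectiveness_boundaries` helper, and the 0–1 output bound are all hypothetical stand-ins for a bank's actual model and validity criteria.

```python
import math

# Hypothetical scoring model, used only to illustrate the testing idea;
# in practice this would be the institution's production model.
def score(x: float) -> float:
    return 1 / (1 + math.exp(-x / 10))

def probe_effectiveness_boundaries(model, inputs):
    """Evaluate the model over ordinary and extreme input values, recording
    any input where the output is undefined or outside the expected range."""
    failures = []
    for x in inputs:
        try:
            y = model(x)
        except (OverflowError, ValueError, ZeroDivisionError):
            failures.append((x, "exception"))
            continue
        # Assumed validity criterion: outputs must be finite and in [0, 1].
        if math.isnan(y) or math.isinf(y) or not 0.0 <= y <= 1.0:
            failures.append((x, y))
    return failures

# Ordinary range plus extreme values outside ordinary expectations.
test_inputs = [-5.0, 0.0, 5.0, -1e6, 1e6, float("inf"), float("-inf")]
boundaries = probe_effectiveness_boundaries(score, test_inputs)
```

Inputs that surface in `boundaries` mark where the model stops behaving reliably, which is the kind of evidence SR 11-7 expects model testing to produce.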
SS1/23 | Bank of England, Prudential Regulatory Authority (PRA), UK
Principle 3.2: The Use of Data
- The model development process should ensure there is no inappropriate bias in the data used to develop the model, and that usage of the data is compliant with data privacy and other relevant data regulations.
Principle 3.3: Model Development Testing
- “...Performance tests should also include comparisons of the model output with the output of available challenger models, which are alternative implementations of the same theory, or implementations of alternative theories and assumptions.”
Principle 3.4: Model Adjustments and Expert Judgement
- “… demonstrate that risks relating to model limitations and model uncertainties are adequately understood, monitored, and managed…”
Principle 3.5: Model Development Documentation
- Model development documentation should be sufficiently detailed so that an independent third party with the relevant expertise would be able to understand how the model operates.
Information Paper: Artificial Intelligence Model Risk Management | Monetary Authority of Singapore
Section 6: Development
- Datasets chosen for training and testing or evaluation of AI models were expected to be representative of the full range of input values and environments under which the AI model was intended to be used. Training and testing datasets were also checked to ensure that their distributions or characteristics were similar.
- “…testing datasets that allowed predictions or outputs from AI models to be tested or evaluated in the bank’s context as far as possible.”
- Applying explainability methods to identify the key input features or attributes that are important for the AI model predictions or outputs and assessing that they are intuitive from a business and/or user perspective.
- “… required developers to apply global and/or local explainability methods to identify the key features or attributes used as inputs to AI models and their relative importance… “
- “… Model selection details of how the performance of the AI model was evaluated and how the final model was selected.”
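The MAS expectation that training and testing datasets have similar distributions can be operationalized with a standard statistic such as the two-sample Kolmogorov–Smirnov distance. The following pure-Python sketch is illustrative only (function name, threshold, and data are assumptions, not from the MAS paper):

```python
def ks_statistic(sample_a, sample_b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum gap between the
    empirical CDFs of the two samples (0 = identical, 1 = fully disjoint)."""
    points = sorted(set(sample_a) | set(sample_b))

    def ecdf(sample, x):
        # Fraction of the sample at or below x.
        return sum(1 for v in sample if v <= x) / len(sample)

    return max(abs(ecdf(sample_a, x) - ecdf(sample_b, x)) for x in points)

# Illustrative feature values from hypothetical training and test splits.
train = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
test_similar = [0.15, 0.25, 0.35, 0.45, 0.55, 0.65, 0.75, 0.85]
test_shifted = [5.1, 5.2, 5.3, 5.4, 5.5, 5.6, 5.7, 5.8]

similar_gap = ks_statistic(train, test_similar)   # small gap
shifted_gap = ks_statistic(train, test_shifted)   # large gap flags drift
```

A large statistic signals that the test set does not resemble the data the model will see, prompting the dataset review the MAS paper describes; in practice a library routine such as SciPy's `ks_2samp` would typically be used instead.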
1. Goldmacher, Shane. “Here’s a running list of all the state-level AI legislation in 2024.” StateScoop, January 2, 2024. https://statescoop.com/ai-legislation-state-regulation-2024/.
2. Helleputte, Charles-Albert, Claire Murphy, and Andrea Otaola. “The EU AI Act Enters Into Effect – What You Should Know and What Should You Do?” Squire Patton Boggs, September 2024. https://www.squirepattonboggs.com/en/insights/publications/2024/09/the-eu-ai-act-enters-into-effect-what-you-should-know-and-what-should-you-do.