News
October 16, 2025

In Congressional Testimony, Dynamo AI Co-Founder Charts a Path Forward for Secure, Compliant AI in a Competitive Financial Services Marketplace

Dynamo AI Team

On September 18, 2025, Dynamo AI Co-Founder and President Dr. Christian Lau testified before the Digital Assets, Financial Technology, and Artificial Intelligence Subcommittee of the U.S. House of Representatives Financial Services Committee in a hearing titled “Unlocking the Next Generation of AI in the U.S. Financial System for Consumers, Businesses, and Competitiveness.”

In the testimony, Dr. Lau highlighted that we are at a critical moment for the financial services industry: while AI promises unprecedented opportunities for innovation and efficiency, institutions struggle to navigate unique risks in heavily regulated environments. He articulated a key message—AI itself can be a critical protective control to mitigate many key AI and security risks in the sector. Alongside other core principles and practices for AI risk management, emerging technologies like advanced guardrails, red-teaming, and observability technologies stand to enable secure AI development, but only if policymakers and industry leaders work together to establish the right frameworks to facilitate innovation.

Key Takeaways:

  • AI deployments in industry often fail to reach production not due to technical limitations, but because financial institutions struggle to manage AI risks in highly-regulated, high-impact environments
  • Key AI risks impacting AI adoption across the financial services marketplace, such as explainability, model sovereignty, alignment of AI systems to institutional ground truth, and vendor concentrations, can be mitigated and addressed by emerging risk management technologies like AI-powered security evaluations, real-time AI guardrails, and observability platforms.
  • The path forward requires American leadership not just in building powerful AI systems, but in developing the protective infrastructure and policy frameworks that incentivize secure AI deployment at scale. Congress should continue its work to:
    • (a) Create regulatory sandboxes to explore AI use cases and best practices for risk management;
    • (b) Incentivize and support a growing independent AI evaluation ecosystem;
    • (c) Support discussions on the future of model risk management; and
    • (d) Call on financial and national security agencies to develop comprehensive plans to defend against adversarial AI-powered attacks, especially those brought on by advancements in Agentic AI.

For more, read the written testimony here or watch the hearing here.

AI's Transformative Potential and Key Challenges

Dr. Lau highlighted a critical paradox: every day, Dynamo's team witnesses exciting AI proof of concepts that promise to transform customer experiences and enable efficient compliance. Yet for nearly every successful AI project, another fails to reach production—not because the technology can't deliver, but because institutions struggle to manage AI risk in heavily regulated, high-impact environments.

Despite these challenges, the potential is immense. Financial Institutions of all sizes are exploring AI to deepen customer relationships, prevent fraud, and improve efficiency. By personalizing services at scale, community and regional banks can compete more effectively, expand access to financial solutions, and enhance overall financial wellbeing across the marketplace. With proper risk mitigation, AI can deliver benefits ranging from scalable customer service to broader credit access in underserved communities.

Understanding the AI Risk Landscape

Dr. Lau outlined several critical risks requiring attention from policymakers and industry leaders. While generative AI introduces unique risks that organizations often struggle to mitigate and monitor—which can delay effective use and integration of the technology—financial services faces additional heightened challenges due to the sector's regulatory complexity and high-stakes environment. Understanding these risks is essential for developing effective governance frameworks that enable innovation while protecting consumers and market stability.

Cross-Sector AI Risks:

  • Hallucinations where AI generates false or misleading information while presenting it as factual
  • Adversarial security attacks including prompt injections and jailbreaking techniques that bypass guardrails
  • Data vulnerabilities where third-party AI providers may consume enterprise data, risking potential leakage of sensitive information
  • Misuse of AI systems for unauthorized, high-impact use cases without proper checks

Heightened Risks for Financial Services:

The financial services sector faces unique challenges requiring special attention from both industry and policymakers. The worthy challenge of introducing AI in the financial services ecosystem is to balance risk management expectations, while also promoting innovation for consumers, vendors, and our marketplace.

  1. The explainability challenge marks a shift from traditional models, as generative AI’s untraceable decisions may warrant revision of existing risk management guidance and controls.
  2. Model sovereignty has emerged as a critical concern, referring to an organization’s control over AI models, including data, infrastructure, and capabilities needed to remain independent from external providers. Risks heighten when U.S. financial institutions use models developed or trained in countries with adversarial relationships, such as China.
  3. Financial institutions need the ability and controls to align AI with their own definitions, policies, and risk tolerance – maintaining autonomy over their “ground truth.” This builds upon a strength of the financial services regulatory system in which institutions can interpret principles-based rules and turn them into practical compliance processes, even deciding when to accept certain risks. 
  4. Market or organizational dependence on a single AI provider increases vulnerability to operational risk or systemic failure as AI becomes embedded in existing technologies, resulting in increased vendor concentration, and third-and-fourth-party risks. Encouraging diversity in infrastructure can improve market resilience, financial stability, and reduce consumer harm.
  5. The newest frontier—Agentic AI—presents both the greatest opportunity and most significant risk. The more powerful the AI agent, the more risk there is in its deployment, as this requires organizations to transfer more decision-making authority from humans to AI systems. Even simple agents require access to sensitive information and influence impactful decisions. Therefore, for AI agents to truly provide value and minimize the amplification of all aforementioned risks, organizations will need to mobilize novel technologies to rigorously test, sandbox, and embed safeguards into these tools. 

AI as an Enabler for Risk Management and Compliance

Despite the risks presented by AI, one of the technology's most noteworthy benefits is that AI also serves as a critical protective function to mitigate many key AI and security risks. Dynamo AI has been a leader in this space, developing technologies that help product and risk management teams more comprehensively evaluate and guardrail both AI-specific threats and broader compliance and security challenges.

  1. AI-Powered Security Testing: Using AI to attack existing models proactively identifies threats for human review. Since it's virtually impossible to staff departments with sufficient personnel to predict all possible adversarial threats, AI-powered evaluations provide a scalable solution. Dynamo's guardrails service checks over 1 million user interactions daily for security vulnerabilities and noncompliance with bank policies.
  2. Real-Time AI Guardrails: These technologies moderate how models behave, control what data can enter or leave AI systems, and ensure alignment with each institution's specific policies. In a sector where compliance must be embedded into every process, AI guardrails enable financial institutions to scale their compliance interpretations and productionize high-value use cases. Once guardrails expand across the ecosystem, organizations can fully realize the value and returns of these technologies, alongside a more sound and secure banking ecosystem.
  3. AI Observability: Comprehensive human-in-the-loop monitoring gives humans the ability to oversee models and demonstrate compliance with internal policy and regulations. At Dynamo, we’ve built a full observability suite to allow organizations to continually observe all internal and customer-facing AI interactions, so they can strengthen controls as AI is in use. While a complex technical solution in its own right, AI observability provides organizations with the necessary reporting and alerts for internal monitoring of powerful systems, and it will prove to be an essential ingredient for effective AI oversight in the sector in the near future. 

These AI-powered approaches represent a necessary evolution in risk management, suggesting that sustainable AI adoption will depend on institutions' ability to implement new technology-based controls that operate at the speed and scale of the systems they oversee.

Policy Recommendations for a Competitive, Secure Ecosystem

As the Subcommittee weighs policy and technical considerations for the continued promotion of AI innovation and a vibrant financial services ecosystem, Dr. Lau emphasized several critical actions. These recommendations draw on both Dynamo's hands-on experience working with financial institutions and successful global models of AI governance. 

  1. Sandboxes are a Vital Tool for Innovative Oversight and Cross-Sectoral Information-Sharing: Regulators should continue establishing AI sandbox environments, as referenced in the administration's AI Action Plan. These environments allow both regulators and financial institutions to explore AI use cases, risks, and acceptable controls. Dr. Lau highlighted Dynamo’s involvement in sandbox programs in Singapore and applauded H.R. 4801, the Unleashing AI Innovation in Financial Services Act, noting it strikes a strong balance between supporting innovation, fostering governance, and educating regulators. 
  2. Growing the AI Evaluation Ecosystem: Supporting a vibrant ecosystem of independent AI evaluation and guardrail providers is essential to incentivize proper oversight, improve transparency, and build trust across AI applications and use cases in industry and government. This is particularly important as federal agencies deploy AI and the General Services Administration creates an "AI procurement toolbox" for federal agencies.
  3. Considerations for the Future of Model Risk Management: While mechanisms for AI model evaluation may differ from historical model risk management, ongoing dialogue between standard-setting institutions, financial regulators, and the financial services industry is essential to arrive at best practices for mitigating model risks in common use cases.
  4. Call on Financial and National Security Agencies to Plan for the Future of Adversarial AI: In addition to the benefits and risks of AI use in financial services, Dr. Lau highlighted the importance that key federal agencies develop strategies to defend against adversarial uses of AI to propagate fraud, disrupt financial markets, and spread disinformation, especially as AI agents advance. H.R. 2151, the AI PLAN Act, makes vital progress in calling on leading agencies to develop comprehensive plans in this area.

A Pivotal Moment for Financial Services

The financial services industry stands at a critical juncture in risk management. As institutions integrate AI into existing functions and frameworks, their practices across implementation and risk management will likely set the standard for other industries. Policymakers and regulators must clearly signal their priorities—the innovation they want to encourage and the risks requiring the most attention—as this will shape AI governance frameworks for years to come.

The path forward requires American leadership not just in building powerful AI systems, but in developing the protective infrastructure that makes them safe to deploy at scale. Dominance in Agentic AI will be essential for market-wide competitiveness, but only if paired with equally sophisticated red-teaming and guardrail capabilities. The challenge for financial services is to move quickly without moving irresponsibly — to capture AI's transformative potential while building upon the trust and security that underpin our financial system.

As comprehensive AI risk management continues to take shape across financial institutions and governing bodies, Dynamo AI remains committed to working with policymakers to create a competitive, secure, and compliant AI ecosystem within financial services.

Recent posts

Product
October 16, 2025

In Congressional Testimony, Dynamo AI Co-Founder Charts a Path Forward for Secure, Compliant AI in a Competitive Financial Services Marketplace

Product
June 16, 2025

Dynamo AI Co-Founders Named to Forbes 30 Under 30 Asia 2025: A Milestone Year for Innovation and Impact

Product
April 29, 2025

Itochu Techno-Solutions joins forces with Dynamo AI to Strengthen Generative AI Compliance and Reliability for Financial Institutions

Have a use case in mind?
Get in touch.

Contact Us