AI Governance
December 8, 2024

Generative AI Risk Management and Worker Well-Being

Daniel Ross
Head of AI Compliance Strategy

There has been much discussion of emerging generative AI regulation from governments, alongside guidance being developed by financial regulators overseeing systemically important markets. Much of this focuses on national policy, on individuals as citizens or consumers, or even on the impacts AI will have on our planet. What has been publicized less is the impact of AI from an employee perspective, and the guidance from labor departments globally on the controls that organizations (and leaders within those organizations) should be considering to safeguard employee safety, well-being, and ultimately their rights.

On October 16, 2024, the U.S. Department of Labor published new guidance, "Artificial Intelligence and Worker Well-Being: Principles and Best Practices for Developers and Employers."1 While the guidance is non-binding, it offers organizations a different perspective on AI safety and use, one that is employee- and worker-focused, and it should be top of mind alongside the industry-specific regulations and guidance that organizations are racing to satisfy in the midst of their AI deployments. And with the incoming US administration making public statements about worker rights and well-being, these considerations will most likely stay at the forefront of the AI discussion.

The guidance covers eight principles: Centering Worker Empowerment (a North Star), Ethically Developing AI, Establishing AI Governance and Human Oversight, Ensuring Transparency in AI Use, Protecting Labor and Employment Rights, Using AI to Enable Workers, Supporting Workers Impacted by AI, and Ensuring Responsible Use of Worker Data. Within each principle, a clear set of recommendations is outlined, many of which can be validated, evidenced, and tested for effective compliance.

In partnering with organizations across industries and of varying size and complexity, Dynamo AI has worked to ensure its AI risk evaluations and ongoing post-deployment guardrail controls are used not only to mitigate regulatory or consumer risk, but also to:

  • Ensure there is 'meaningful human oversight' in the application of pre- and post-production AI guardrails;
  • Protect employees from leakage of personal information that may be used in harmful ways;
  • Facilitate upskilling of workers so they can take on the types of jobs needed to safeguard AI and enable its use cases; and
  • Implement controls that protect users and employees from discrimination and safeguard employee rights.

Dynamo AI product and research teams, in partnership with our clients, continually embed processes, features, guardrail controls, and monitoring capabilities that promote each principle. Below we have mapped a number of these core product features to the principles outlined by the Department of Labor, with more to come as we continue to partner with employers across the marketplace.

  • Principle: Ethically Developing AI & Establishing AI Governance and Human Oversight
    • Dynamo AI Advantage:
      • DynamoEval reports and DynamoGuard workflows are built with a 'human-in-the-loop' element, and because the technology is used by non-technical stakeholders, it also advances training and upskilling in AI risk management. This empowerment of workers is critical both to managing AI risk and to involving a broader workforce with this technology.
      • Organizations can customize guardrails to ensure an AI system does not make 'Significant Employment Decisions,' or appear to, in its interactions with an employee or candidate. Employers can align guardrails to their internal policy standards and create controls so that certain topics, such as compensation, performance standards, or other conditions of employment, are not discussed, or are referenced only for very specific uses (for example, the ability to look up a particular company policy).
      • Dynamo has invested heavily in advanced guardrail monitoring capabilities, which enable organizations to perform impact assessments and independent audits on the use of AI and how it is impacting workers. This is one element of a comprehensive assessment of how AI is enabled across use cases.
    • Dynamo AI Products: DynamoEval and DynamoGuard
  • Principle: Ensuring Transparency in AI Use
    • Dynamo AI Advantage:
      • Our comprehensive AI evaluations provide detailed, granular reporting for each of our assessment areas (Privacy, Hallucination, RAG Hallucination, Compliance, which includes cybersecurity and toxicity tests, and Performance). For example, our RAG Hallucination tests pinpoint which data elements are most at risk of hallucination with automated, detailed reporting, allowing organizations to make effective risk decisions about where AI is deployed.
    • Dynamo AI Products: DynamoEval
  • Principle: Protecting Labor and Employment Rights
    • Dynamo AI Advantage:
      • Guardrails can be deployed to mitigate risks of disparate or adverse impacts on the basis of race, color, national origin, religion, sex, disability, genetic information, or another protected basis. This extends to controls that do not impede other worker rights, such as protected discussions, or that channel those discussions to the correct venue. Federal and local governments, as well as employers, also require supporting evidence of these guardrails, which DynamoGuard delivers through automated reporting.
    • Dynamo AI Products: DynamoGuard
  • Principle: Using AI to Enable Workers & Supporting Workers Impacted by AI
    • Dynamo AI Advantage:
      • New roles and responsibilities are being crafted as organizations design new operating models to deploy and manage AI. Dynamo's training modules and intuitive workflows facilitate career advancement through the use of (and upskilling on) our platforms, promoting opportunity, productivity gains, and the ability to maintain employment in this highly technical space.
      • Each product has a number of core elements, from workflow to reporting, that are designed for non-technical stakeholders to comprehend and incorporate into their roles. This, in turn, has empowered workers to defend AI risk management controls and operational decisions.
    • Dynamo AI Products: DynamoEval and DynamoGuard
  • Principle: Ensuring Responsible Use of Worker Data
    • Dynamo AI Advantage:
      • Our DynamoEval product includes a suite of evaluations focused on PII leakage, while our DynamoEnhance controls strengthen protections in the areas of concern to the Department of Labor, adding advanced safeguards to evaluate the risk of PII leakage for each worker and candidate.
    • Dynamo AI Products: DynamoEval and DynamoEnhance
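To make the guardrail ideas above concrete, here is a minimal, hypothetical sketch of the kind of control an employer might configure: flag messages that touch restricted employment topics (such as compensation or performance) and redact PII before anything reaches a monitoring log. This is an illustration only, not Dynamo AI's actual API; the topic keywords, the `apply_guardrail` function, and the email pattern are invented placeholders an organization would replace with its own policy.

```python
import re

# Illustrative policy: topics an employer may restrict in AI interactions.
# These keyword lists are placeholders, not a real product configuration.
RESTRICTED_TOPICS = {
    "compensation": ["salary", "compensation", "raise", "bonus"],
    "performance": ["performance review", "performance rating"],
}

# Simple PII pattern (emails only, for brevity); production systems would
# use far more robust detection.
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def apply_guardrail(message: str) -> dict:
    """Flag restricted employment topics and redact PII before logging."""
    lowered = message.lower()
    flagged = [
        topic
        for topic, keywords in RESTRICTED_TOPICS.items()
        if any(kw in lowered for kw in keywords)
    ]
    redacted = EMAIL_PATTERN.sub("[REDACTED_EMAIL]", message)
    return {
        "allowed": not flagged,     # block, or route to the correct venue
        "flagged_topics": flagged,  # evidence for audits and impact reviews
        "log_safe_text": redacted,  # what a monitoring log would store
    }
```

In practice, a decision like `allowed: False` would not simply refuse the user; it could redirect the employee to HR or the appropriate policy channel, while the flagged-topic record supplies the kind of audit evidence the guidance calls for.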

Dynamo AI receives regular feedback from regulators, government officials, and clients on actionable processes, controls, and functionality that can help promote a fair and equitable workplace as AI is deployed. Our upcoming releases focus on advanced controls to enable and safeguard AI while facilitating a positive impact on the employee experience.

Let us know how we can empower workers and make AI safe and compliant for your deployment.

1. https://www.dol.gov/general/ai-principles
