As AI adoption accelerates across workplaces, labor organizations around the world are beginning to take notice—and action. The current regulatory focus in the US centers on state-specific laws like those in California, Illinois, Colorado and New York City, but the labor implications of AI are quickly becoming a front-line issue for unions, potentially signaling a new wave of collective bargaining considerations. Similarly, in Europe the deployment of certain AI tools within an organization may trigger information, consultation, and—in some European countries—negotiation obligations. In those cases, AI tools may be introduced only once that process is complete.

This marks an important inflection point for employers: engaging with employee representatives on AI strategy early can help anticipate employee concerns and reduce friction as new technologies are adopted. Here, we explore how AI is emerging as a key topic in labor relations in the US and Europe and offer practical guidance for employers navigating the evolving intersection of AI, employment law, and collective engagement.


Efforts in the US to regulate AI's impact on workers

There is no US federal law that specifically regulates AI in the workplace. An emerging patchwork of state and local legislation (e.g., in Colorado, Illinois and New York City) addresses the potential for bias and discrimination in AI-based tools—but does not focus on preventing the displacement of employees. In March, New York became the first state to require businesses to disclose AI-related mass layoffs, signaling a growing expectation that employers be transparent about AI's impact on workers.1

Some unions have begun negotiating their own safeguards to address growing concerns about the impact that AI may have on union jobs. For example, in 2023, the Las Vegas Culinary Workers negotiated a collective bargaining agreement with major casinos requiring that the union be provided advance notice, and the opportunity to bargain over, AI implementation. The CBA also provides workers displaced by AI with severance pay, continued benefits, and recall rights.

Similarly, in 2023 both the Writers Guild of America (WGA) and Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) negotiated agreements with the Alliance of Motion Picture and Television Producers (AMPTP) that include safeguards against AI reducing or replacing writers and actors. WGA's contract requires studios to meet semi-annually with the union to discuss current and future uses of generative AI—giving writers a formal channel to influence how AI is deployed in their industry. The SAG-AFTRA contract requires consent and compensation for use of digital replicas powered by AI.

The International Longshoremen's Association (ILA) has taken a more aggressive approach. In October 2024, the ILA launched a three-day strike that shut down all major East and Gulf Coast ports, demanding, among other things, a complete ban on the automation of gates, cranes, and container-moving trucks. The ILA and the US Maritime Alliance eventually reached agreement on the terms of a new CBA in early 2025, which includes a provision prohibiting the introduction of "fully automated" technology—equipment that operates without any human interaction. Any new technology implementation must be agreed upon by the union and employers; if they cannot reach consensus, the matter goes to arbitration.

Unions are also challenging the use of AI before the National Labor Relations Board (NLRB). Recently, SAG-AFTRA filed an unfair labor practice charge with the NLRB against a video game maker, alleging that the employer used AI-generated voices to replace bargaining unit work without giving the union notice or the opportunity to bargain. The case is pending, and we are monitoring developments.

Across the pond, trade unions have been quick to react to the disruptive power of AI.


In Europe, AI is emerging as a key topic with trade unions and works councils

In the EU, AI in the workplace is a particularly sensitive issue—especially when it comes to its impact on jobs. The landmark EU AI Act is currently in its phased implementation stage, with key provisions, such as the ban on prohibited AI practices and AI literacy obligations, taking effect in February 2025, and rules for general-purpose AI models and governance structures set to take effect by August 2025. While the EU AI Act does not ban job displacement by AI outright, it does contain several employee protections, which operate alongside national labor law: employers must consult with works councils before implementing AI, and in some jurisdictions, obtain their agreement. Individual employees also have the right to be informed when AI is used in decisions that affect them, to request explanations of how AI influenced those decisions, and to challenge outcomes.

In France, a court recently underscored the importance of treading carefully with employee representation rights with respect to AI in the workplace, even during testing and experimentation phases. In an interim order from the Nanterre Court of Justice in February, the court ruled that a company's early deployment of AI tools in a "pilot phase" occurred before the works council (CSE) consultation process had been completed. It therefore suspended the implementation until the consultation was completed and ordered the employer to pay damages to the CSE for the harm suffered.

In the UK, the conversation around AI and employment is gaining legislative traction. In 2024, the Trades Union Congress (TUC) proposed an AI and Employment Rights Bill aimed at regulating how high-risk AI is deployed in the workplace. The bill would have required employers to consult workers before implementing such systems, ensure transparency, and provide personalized explanations for AI-driven decisions. Notably, the bill would have classified dismissals based on unfair reliance on high-risk AI as "automatically unfair." Though the bill did not advance, it signals growing momentum in the UK toward incorporating worker safeguards into the AI adoption process. The independent AI Opportunities Action Plan commissioned by the UK government, published in January 2025, recognizes the change that AI will bring to the labor market. The report acknowledges the importance of developing life skills and educational opportunities, as well as of diversity in the talent pool working in AI and data science.

In Germany, the deployment of AI in the workplace is closely tied to works council co-determination. While there is currently no AI-specific co-determination right, political discussions are ongoing about expanding works councils' authority in this area. In the meantime, existing IT co-determination standards apply. Under established case law, the works council has co-determination rights whenever an IT system is capable of monitoring employee behavior or performance—criteria met by most AI systems used in the workplace. Given this legal backdrop, employers are strongly advised to engage proactively with works councils and negotiate a framework agreement on AI, which can help streamline co-determination procedures and provide legal certainty for future implementations.


Proactive strategies for multinational employers

In both the US and in Europe, partnering early with unions and employee representative bodies on AI can help employers avoid costly disputes and disruptions, including strikes. Proactive employers looking to reduce reputational risk and promote constructive labor relations can keep these best practices in mind:

  • Have a clear understanding of the company's obligations under any applicable CBA and with respect to employee representative bodies. For US employers with unionized labor, implementation of technology (AI or otherwise) may be addressed in the CBA (whether in a management rights clause or elsewhere). Even if the CBA is unclear or does not explicitly address AI, work closely with counsel to assess the company's obligations, as there may be no obligation to bargain at all.
  • Engage with labor early and anticipate concerns. Employers need not wait for contract negotiations. By way of example, in 2023, a global tech company formed a first-of-its-kind partnership with a union to address the impact of AI on workers. The initiative involved training union members on AI fundamentals and gathering their feedback to inform AI development, as well as both parties advocating for policies supporting AI-related workforce training amid growing concerns about job displacement and AI-driven inequality. Getting out ahead can eliminate the fear of the unknown and go a long way in building trust on issues related to job security, retraining and perceived fairness.
  • Promote transparency. Proactively involve unions and employee representative bodies in discussions about AI adoption, including its purpose, scope, and potential impact on jobs. Be prepared to articulate the opportunities at stake clearly, including how AI tools can optimize work and working conditions.
  • Collaborate on guardrails. Work with unions and employee representative bodies to establish boundaries on AI use that the employer can accept—such as limits on surveillance, algorithmic management, and the automation of core job functions—while also exploring how AI can enhance, rather than replace, human roles.
  • Conduct AI impact assessments and ensure compliance with applicable law. Before deploying AI tools, obtain legal advice on the application of emerging AI laws. Consider the tools' potential impact on job functions, employee rights and workplace dynamics. This can help identify areas where labor engagement is recommended.
  • Reskill and upskill. Consider offering training programs and career transition support to help workers adapt to AI-driven changes. Jointly developing and investing in these initiatives with unions and employee representative bodies can ensure alignment with workers' needs and alleviate fears of AI-related job displacement.
  • Be prepared to bargain. Where AI tools may materially impact working conditions, plan ahead, work with experienced counsel, and solidify communication strategies so you are ready if it becomes necessary to bargain with unions or consult with works councils.


For support developing your AI adoption strategies, including anticipating labor's response, please contact your Baker McKenzie employment lawyer.


--

1 New York's Worker Adjustment and Retraining Notification (WARN) online portal—used by employers with 50+ employees to submit the required 90-day notice of a mass layoff or plant closure—now includes a checkbox asking whether "technological innovation or automation" contributed to the job losses. If selected, employers are also asked to specify the type of technology involved, such as artificial intelligence or robotic machinery.
