- Artificial intelligence becomes an established, board-level imperative
- AI legislation and regulation could turn a corner
- Cybersecurity, data privacy and geopolitics will be intrinsically linked
- Expect more turbulence in global data and cyber regulations
- E-discovery will be the latest frontier transformed by AI
- Looking ahead in 2026
In 2026, organizations face a cyber and data landscape that is increasingly fragmented, with AI continuing to dominate debate.
In the US, California, Colorado and other states advanced numerous AI regulations, but President Trump has taken a deregulatory approach to AI at the federal level. In December 2025, he took the unprecedented step of discouraging states from regulating AI via an executive order. The California governor vetoed a broad AI safety bill in 2024 but signed a narrowed version of the bill in October 2025. Colorado delayed implementation of its AI Act, and many expect further changes to the law.
Until recently, the EU was pressing ahead with legislative solutions to digital challenges. Alongside guidance on the existing provisions of the EU AI Act, it was consulting on new measures, including the Digital Fairness Act, in a bid to root out unethical commercial practices online. Then in November 2025, the EU published AI deregulation proposals designed to simplify the implementation of the AI Act.
Other countries have introduced bills and guidance but are holding off on finalizing new regulations and statutes as they weigh how to protect the public without stifling innovation. Against this backdrop, companies will wrestle with divergent responses to AI while trying to embrace its proliferation.
Artificial intelligence becomes an established, board-level imperative
The multifaceted challenge of digital safety is pushing organizations to treat AI as a board-level priority.
“I expect there will be heightened regulatory scrutiny on AI transparency and accountability,” says Adam Aft, Baker McKenzie partner, Chicago. “This scrutiny will need to evolve with the technology, for example, liability in autonomous decision-making — an especially prominent issue for agentic AI. I also expect to see continued development and enforcement of data privacy laws intersecting with AI, such as California’s new automated decision-making regulations.”
Aft also predicts that the resolution of intellectual property disputes will continue to shape where the dollars are flowing and which models for commercializing AI tools emerge as viable. “From a practice of law perspective, the adoption of AI by legal teams will track along an exponential curve,” Aft says.
AI legislation and regulation could turn a corner
Lothar Determann,* Baker McKenzie partner, Palo Alto, expects a turning of the tide on AI legislation and regulation in 2026.
“The EU and some US states have been rushing to be first, safest, and strictest on AI regulation in 2024 and 2025, but all are now reconsidering the harmful and counterproductive effects of regulating too early and too broadly. Overly broad definitions of ‘artificial intelligence systems’ in the EU, Colorado, and California capture literally all software and most hardware products, overwhelming businesses and confusing users,” says Determann.
“The sheer power of the new systems is catching businesses, governments, and consumers off guard,” Determann continues. “The machines need more power — new nuclear power stations are being built. The machines need more data centers — approvals are granted around the world in record-setting time. The machines need deregulation — governments deregulate.”
The continued rise of AI will also drive more AI mishaps in 2026.
“Faulty citations in briefs and court decisions have been identified relatively easily, but oversights in complex contracts, due diligence reports, and regulatory surveys will prove harder to catch,” says Determann. “But the biggest risk for attorneys remains insufficient AI adaptation, impairing efficiency, timeliness, and affordability of legal services.”
Cybersecurity, data privacy and geopolitics will be intrinsically linked
Geopolitics will continue to influence cyber law and risk in 2026.
“In the US, federal deregulation of consumer protection will cause state and foreign regulators and litigants to be more active on issues like ‘reasonable security’,” says Justine Phillips, Baker McKenzie partner, Los Angeles. “Globally, cybersecurity laws will go into effect in 2026 that require businesses to implement stringent technical and organizational controls, manage supply chain risk, localize data, conduct third-party security assessments, and report incidents to newly established cyber regulators.”
Phillips cautions that legal teams must partner with information security teams to effectively manage cyber risk, as part of a multidisciplinary strategy. “Organizations must adopt defensible strategies that integrate people, processes, and technology to meet evolving risks and legal demands.”
Expect more turbulence in global data and cyber regulations
“Global companies have historically been able to rely on some degree of certainty from multilateral norms on national security and most-favored-nation treatment under frameworks such as the World Trade Organization (WTO),” explains Brian Hengesbaugh, Baker McKenzie partner, Chicago. “However, given the diminished effectiveness of the WTO and other multilateral frameworks, and the preference for bilateral arrangements, global companies will face more significant cross-border restrictions motivated by privacy, national security, AI/competition, and other drivers.”
Hengesbaugh cites as a key example the tension between the US and China that is playing out in the regulatory context, including the US Department of Justice’s Bulk Data Transfer Rule (restricting transfers to China and other “countries of concern”) and China’s implementation of its Cybersecurity Law, Data Security Law, and Personal Information Protection Law (restricting transfers to the US and other jurisdictions).
“These geopolitical risk trends are not going away. If anything, they are likely to deepen this year,” says Hengesbaugh.
E-discovery will be the latest frontier transformed by AI
Rising data volumes and pressure to reduce costs have driven innovation in the eDiscovery space.
“eDiscovery has been a ‘petri dish’ for legal tech innovation and development,” says Bryant Isbell, managing director for Baker McKenzie’s Global eDiscovery & Data Advisory in London. “As mobile and short-form messaging platforms multiply, evidence analysis must evolve to keep pace.”
Mobile phones remain central, but fragmented data sources present growing challenges.
“AI is already transforming first-level document review, enabling faster, smarter processing,” says Isbell. “As adoption increases, legal process outsourcers and staffing-based review providers must shift toward tech-enabled value delivery. Those slow to adapt risk being left behind as technology reshapes the landscape.”
Looking ahead in 2026
AI proliferation will continue, but global regulation of cyber and data security will remain divergent.
- Prepare to adapt policies in response to shifting geopolitical winds and changes in regulation.
- Manage legal and reputational risk with strong internal governance policies, which apply across all jurisdictions.
- Take a multipronged approach to managing digital risk, integrating people, processes and technology.
*Determann is the author of Determann's Field Guide to Artificial Intelligence Law.