In brief

Recent regulatory developments underscore the growing scrutiny of professional uses of generative AI. On 13 January 2026, the Spanish Data Protection Authority (“Spanish DPA”) issued a formal notice warning of the legal and privacy risks involved in uploading, transforming or generating images of individuals through AI tools. At the same time, the European Commission has published the first draft of its voluntary Code of Practice on Transparency of AI-Generated Content (“Code”). While adherence to the Code is optional, it is intended to support providers in meeting the mandatory transparency obligations set out in Article 50 of the AI Act, which will apply from August 2026 to providers and deployers of AI systems. These developments reinforce the need for robust safeguards, internal controls, and transparent labelling when deploying generative AI.

Key takeaways

What companies need to consider now:

  • Treat any upload or use of someone’s image in an AI tool as handling personal data and put basic safeguards in place.
  • Before creating or sharing AI-generated content, even internally, check whether it could trigger risks beyond data protection, such as reputational harm, copyright infringement or misuse of someone’s likeness.
  • Prepare for the AI Act’s transparency rules arriving in August 2026, including clear labelling of any content changed or created by AI.


In more detail

Spanish DPA guidance on AI and images

The notice issued by the Spanish DPA on 13 January 2026 provides its clearest position to date on the risks associated with using third-party images in generative AI tools. It confirms that uploading, transforming, or generating visual content based on a person’s image constitutes personal data processing, even where the output is not intended to be shared or appears innocuous. This represents an explicit acknowledgement that simply feeding an image into an AI system already triggers General Data Protection Regulation (GDPR) obligations.

The Spanish DPA identifies two main categories of risks:

  1. Visible risks which arise when the generated image or video is shared. These include:
    • Using images outside of their original context without a valid legal basis;
    • The ease of forwarding or distributing content;
    • The practical impossibility of removing replicated copies;
    • The creation of intimate or compromising deepfakes with potentially severe consequences; and
    • The risk of falsely attributing behaviors or actions to individuals.
  2. Less visible risks which arise even when the content is not shared. These include:
    • Loss of control when external providers process the images;
    • The potential existence of unremovable copies;
    • Additional or undisclosed processing by providers;
    • The generation of metadata enabling re-identification; and
    • The practical difficulty for data subjects to exercise their rights.

Overall, the notice establishes a clearer and more stringent framework: the use of images in AI systems must be treated as processing of personal data and must be accompanied by appropriate safeguards.

EU draft Code of Practice on transparency of AI-generated content

In parallel, the European Commission has issued the first draft of the voluntary Code of Practice on Transparency of AI-Generated Content, intended to help organizations anticipate compliance with the transparency obligations under Article 50 of the AI Act. The final version of the Code is expected in June 2026, with the mandatory transparency requirements applying to providers and deployers of AI systems from August 2026.

The Code introduces a two-tier classification system: (i) fully AI-generated content and (ii) AI-assisted content, where AI substantially influences the final output. Each category must be accompanied by clear labelling using a common icon. Until an official EU icon is adopted, an interim icon composed of a two-letter acronym referring to artificial intelligence (such as “AI”, “IA” or “KI”, reflecting the translations into the languages of the Member States) may be used to support consistent disclosure.

The Code also sets out sector- and format-specific rules, especially for deepfakes. For instance, real-time deepfake videos must display a continuous on-screen indicator and an initial notice, while non-real-time videos may use individual or combined options such as fixed icons, opening notices or credits-based disclosures, as detailed in the Code.

Deployers choosing to adhere to the Code must also implement robust internal mechanisms, including documentation of labelling practices, staff training on when and how to apply disclosures, continuous monitoring procedures, and a channel for reporting mislabelling. Any reported inaccuracies must be corrected promptly.

This structure is intended to support a consistent and transparent approach to AI-generated content before the AI Act’s obligations become enforceable.

Broader legal considerations

The Spanish DPA’s notice and the Code highlight that the implications of generative AI extend far beyond data protection. The manipulation or use of third-party images, voices or other content may also affect personality rights such as the rights to honor, privacy and one’s own image. In addition, generative AI can give rise to significant questions around copyright, design rights, trademarks and other intellectual property rights linked to the source materials or the generated outputs.

A holistic, cross-cutting legal assessment is therefore essential before implementing or using any generative AI tool. Organizations should ensure adequate employee training, adopt clear internal safeguards, and mitigate risks arising from both the use of third-party content and engagement with external AI providers. This broader legal lens is critical to ensuring responsible deployment of generative AI technologies.

For tailored guidance on these regulatory developments and to assess your organization's exposure and compliance needs, please contact our IPTech team.

Related content

Marta Expósito, Associate, has contributed to this legal update.
