
MediaWrites

By the Media, Entertainment & Sport group of Bird & Bird


A Clear Lens on AI: Transparency of AI in Film

Awards season is here once again, but there’s something different about this year’s entrants. Whereas last year’s films received public backlash for their use of AI, this year’s submissions include “AI-and-proud” entries: animated film Ahimsa reportedly detailed in its Academy Awards entry form its use of AI tools to create its visuals, and All Heart used a closed AI model trained only on the filmmakers’ artwork. 

What could have empowered this year’s contenders to be more forthcoming about their use of AI? Undoubtedly, filmmakers will have been reassured by the Academy Awards officially updating its rules in April 2025 to provide that use of generative AI tools will “neither help nor harm the chances of achieving a nomination”. More importantly, ensuring audiences know about the use of AI, and being open about how AI contributed to the creative process, helps to reduce some of the controversy currently surrounding AI in film production. The films subject to controversy in previous years had not proactively disclosed their use of AI. This year’s entrants, on the other hand, have been transparent from the get-go and take pride in embracing AI as a creative tool.

Still in pre-production: UK AI legislation 

Transparency regarding AI use is an area that legislators around the world are rapidly considering and acting on, with many regulations having come, or due to come, into effect this year. The UK does not currently have AI-specific legislation, and therefore has no specific rules on the labelling of AI in film.

However, stakeholders in the UK film industry have been calling for regulation. For example, the British Film Institute’s June 2025 report AI in the Screen Sector: Perspectives and Paths Forward (BFI Report) calls on regulators to explore standards around content provenance and authenticity to reduce misinformation and ‘slop’ and to support trust in screen content. 

In its 3 July 2025 response to the Culture, Media and Sport Committee’s report on British Film and High-End Television (the latter being known as the Select Committee Report), the UK government stated that it was considering regulation to ensure that AI outputs are labelled consistently. However, there have been no further announcements of specific plans or timelines for potential regulation, not least because the government has expressed concern that a new scheme could restrict innovation.

Global release: AI regulation in other jurisdictions

EU AI Act

The EU, on the other hand, does have AI-specific legislation: the EU AI Act. Article 50 EU AI Act, due to become fully applicable on 2 August 2026, is relevant to film production as it deals with transparent labelling. Article 50(2) requires providers of AI systems generating audio, image, video or text content to ensure that the outputs of the AI system are marked and detectable as artificially generated or manipulated. Article 50(4) deals with deep fake content. Deep fakes are defined as AI-generated or manipulated image, audio or video content that resembles existing persons, objects, places, entities or events and would falsely appear to a person to be authentic or truthful (Article 3(60)). Article 50(4) requires deployers (users, essentially, but excluding those who use AI in a personal, non-professional activity) of an AI system that generates or manipulates image, audio or video content constituting a deep fake to disclose that the content has been artificially generated or manipulated.

An initial indication of what Article 50 may require in practice can be found in the First Draft Code of Practice on Transparency of AI-Generated Content (the Draft CoP), published on 17 December 2025. Although it is a draft and likely to change before the Code of Practice is finalised, it provides a helpful insight into current thinking regarding the application of labelling requirements. 

For providers of AI systems under Article 50(2):

  • GenAI outputs must be marked with multiple layers of machine-readable marking.
  • Deployers must be given the functionality and option to directly include a visible marking by default upon generation of the output.
  • Recording of provenance information for fully human-authored content, or for fully human content-editing operations, is encouraged in order to increase trust and facilitate the authenticity and provenance of all content.

For deployers / users of AI systems that generate deep fake content under Article 50(4):

  • A common icon for deep fakes and AI-generated and manipulated text publications should be applied in a visible and consistent location appropriate to the context. The EU-wide icon is yet to be finalised, but the Draft CoP suggests the use of a two-letter acronym that refers to AI (or the equivalent acronym in another language) as an interim icon.
  • A clear distinction should be made between fully AI-generated content and AI-assisted content. Examples of AI-assisted content include face/voice replacement or modification, hybrid audio formats, AI rewriting or summarising human-created text. This is particularly relevant as instances of face and body replacement, voice and dialogue modification, and screenplays drafted with the assistance of AI have been at the heart of the AI in film debate in recent years.
  • The deep fake labelling process must not be based only on automation but must also be supported by appropriate human oversight. Recording each creation or modification step carried out by an AI system is also recommended.  
  • Interestingly, the EU AI Act recognises that in the context of artistic, creative and fictional work, such disclosure can be done in an appropriate manner that does not hamper the display or enjoyment of the work. The Draft CoP’s guidance suggests that the common icon be placed in non-intrusive positions, and makes clear that the icon should appear at first exposure, at the latest.

However, the Draft CoP still leaves many questions open: if a film contains a depiction of an actor who has been altered by AI to look older, will a disclosure in the end credits be sufficient, or will the opening credits need to carry the icon? Audiences may have been saved from distracting pop-ups saying “AI-Generated Content” during specific scenes, but may have to put up with this message in the opening credits: “No animals were harmed in the making of this film – all animals were AI-generated!”

Snapshot of other jurisdictions

Other jurisdictions are taking similar approaches to transparency of AI-generated content. To name a few: 

  • The US does not currently have AI transparency rules at a federal level, but states are beginning to introduce their own regulations. The California AI Transparency Act (SB-942) came into effect on 1 January 2026 and applies to companies that produce high-impact or genAI systems with more than 1 million monthly users. SB-942 requires the implementation of a publicly accessible AI detection tool, as well as both visible disclosures and embedded hidden disclosures of AI-generated content. By comparison, the EU AI Act does not explicitly mandate such an AI detection tool, but the Draft CoP does require providers to help deployers access and apply detection mechanisms and verification tools.
  • South Korea’s AI Basic Act, which came into effect in January 2026, similarly requires labelling of AI deep fake content but recognises that for works of artistic or creative expression, the manner of labelling should not impede the exhibition or enjoyment of the work.
  • China’s AI content labelling rules, published in September 2025, require AI users to declare AI-generated content and to use the labelling functions that service providers must offer. Note that whereas the EU AI Act excludes individuals acting in a personal and non-professional capacity, China’s AI rules apply even to users acting in their personal capacity. However, a user who does not want the explicit label can apply to the service provider to remove it.

Keeping humans in the director’s chair 

Human creative authorship remains a key emphasis. The updated Academy Awards rules state that the Academy will take into account “the degree to which a human was at the heart of the creative authorship”. The practicalities of assessing human authorship will undoubtedly be facilitated by the obligations and guidance emerging from the EU. The Draft CoP explicitly references the objective of promoting the uptake of “human-centric and trustworthy” AI. As such, it encourages the recording of content that has been authored or edited by humans to increase trust and facilitate authenticity, and requires the labelling process itself to be supported by appropriate human oversight.

However, in the other jurisdictions covered above, there are no explicit obligations or recommendations to require or encourage the recording of human authorship. That said, in the UK, the BFI Report proposes guidelines and agreements on the protection of human creativity and authorship, and on the verification and archiving of human creative output. It will be interesting to see whether this is implemented via legislation or via industry standards.

UK film industry writing its own script for now

In the meantime, the UK film industry continues to self-regulate its use of AI. For instance, BAFTA is considering a certification scheme to incentivise ethical practice in the use of AI tools across the industry. This could look similar to the BAFTA albert scheme, which is designed to help productions measure and actively reduce their carbon footprint. As stated in the UK government’s response to the Select Committee Report, the government will appoint an AI Sector Champion for the creative industries in due course, and proposes that the AI Sector Champion work with the industry to develop an AI certification scheme for the ethical use of genAI in film.

As there is currently no plan for specific AI legislation in the UK, for now, the UK film industry will have to write its own script. 
