Generative AI (Gen-AI) is already reshaping how news is written, edited and distributed – but what happens when it gets the story wrong, and who is held accountable?
In 2024, Apple suspended its Gen-AI-based news alert service after it incorrectly published that US murder suspect Luigi Mangione had killed himself, attributing the claim to the BBC, which had never made it. Unfortunately, this was not an isolated event: the alert service also declared Luke Littler the PDC World Darts Champion before the final had been played, and falsely reported on Rafael Nadal’s personal life and on political developments in Israel.
In a media landscape already strained by fake news, inaccuracy and distortion of facts can lead to real harm. This raises serious questions around the use, regulation and liability of Gen-AI.
When news isn’t human
Incidents such as Apple’s false news alerts raise questions about authorship, liability, intellectual property rights and public trust. Audiences often feel deceived if they discover AI was involved in content creation without their knowledge. Research carried out by YouGov[1] reported that 79% of Britons believe media organisations should be required to disclose on news articles any way in which AI assisted in their creation. A lack of transparency around the use of AI is emerging as a major theme in AI discourse. Recently, a radio show in Australia used an AI-generated avatar to host a regular four-hour slot without disclosing it, and two major US publishers had to apologise for publishing an AI-written summer reading list without realising it contained made-up books - in fact, 10 of the 15 books did not exist.
Moreover, investigations carried out by the BBC[2] found that, when Gen-AI chat models were asked questions about news topics, more than half of the answers contained significant issues, including false citations and altered quotes. Unlike human journalists, AI systems often lack mechanisms for correcting errors.
The National Union of Journalists (NUJ) is calling for urgent regulatory oversight to ensure ethical use of AI, arguing that Gen-AI should only be used as an assistive tool that is always overseen by human journalists. The NUJ believes the deployment of technology like Gen-AI must be subject to safeguards, transparency and meaningful regulation, so that public trust is not further eroded and the rights of creators are respected and protected. The NUJ has fed these demands into the UK government’s consultations on copyright and AI, opposing broad text- and data-mining exceptions and urging an opt-in licensing model with effective sanctions and redress for journalists whose work is misused.
Friend or foe?
Publishers face dual threats from AI: exploitative training on their content, and competition from algorithm-generated alternatives. Gen-AI models are trained on vast datasets that often include news articles and journalism. The UK’s creative industry, which contributes c.£124.8 billion to the economy annually, has voiced concerns over the threat of exploitation and breaches of IP rights, citing false attributions to journalists and creators discovering their likenesses used without their knowledge or consent. As such, we are beginning to see media organisations take action against AI companies. For example, the publishers of the Wall Street Journal and the New York Post have filed copyright and trademark infringement lawsuits against Perplexity, an AI upstart, accusing it of “massive freeriding” and seeking immense damages. In August 2025, Perplexity lost in full its bid to dismiss the case or transfer venue to the Northern District of California. A pretrial conference is set for July 2026.
Meanwhile, outlets like The Guardian, The Washington Post and the Financial Times have struck confidential licensing deals with OpenAI and Microsoft to monetise the use of their content. In particular, the New York Times (NYT) has entered into its first licensing agreement involving Gen-AI by partnering with Amazon. The multi-year deal allows Amazon to incorporate NYT’s editorial content into its AI products, such as Alexa. The agreement aligns with NYT’s policy advocating that quality journalism should be compensated. Other media outlets, including News Corp, have entered into similar agreements with AI companies.
However, not all stakeholders are satisfied with this shifting approach, which we are now seeing reflected in regulation. Critics argue that the UK’s proposed changes to copyright law, which would allow AI companies to use copyrighted works without prior permission unless creators explicitly opt out, place an unfair burden on rights holders. The creative industry, which demands transparency about the content used in AI development, has stressed that the opt-out mechanism is inadequate and has accordingly launched a bold campaign titled ‘Make it Fair’ to highlight how its content is at risk of being given away for free. The campaign has involved regional and national newspapers devoting their front pages to the cause and activists lobbying their local MPs.
Changing rules and frameworks
Over the next few years, it is very likely we will see an increase in the number of AI-related disputes going through the courts. In anticipation of this, and in an attempt to regulate an historically unregulated area, the Artificial Intelligence (Regulation) Private Members’ Bill[3] (the “Bill”) was re-introduced into the House of Lords in March 2025, following its failure to pass under the previous government. If enacted, the Bill would establish the ‘AI Authority’, a new regulatory body tasked with overseeing AI in line with the Bill’s proposed framework.
In parallel to this, the government is preparing its AI Bill, which aims to target “the most advanced AI models” and formalise existing voluntary commitments between the industry and government. Publication of the bill has been delayed beyond its summer 2025 target and is now expected no earlier than the next King’s Speech (likely May 2026).
From a regulatory standpoint, four key UK regulators are also considering their approach to, and implementation of, AI:
- The Competition and Markets Authority (CMA) began reviewing AI models in May 2023 and, in April 2024, published findings highlighting concerns over the growing influence of big tech firms - particularly in respect of their control over key inputs that could stifle competition. The Digital Markets, Competition and Consumers Act 2024 (DMCCA) has since equipped the CMA with enhanced powers, including direct consumer enforcement (fines of up to 10% of global turnover from April 2025) and Strategic Market Status (SMS) designations for major search and mobile platform providers (confirmed October 2025). At the same time, the CMA maintains a collaborative and proportionate approach, aiming to balance innovation with the competition safeguards relevant to publishers’ digital ecosystems.
- The Financial Conduct Authority (FCA), in April 2024, alongside the Bank of England, outlined its AI strategy, stressing that firms must be able to explain their use of AI to regulators. The update also signalled growing support for a potential central AI authority and highlighted the importance of international cooperation. In September 2025, the FCA launched its AI Live Testing Scheme to support firms - including publishers - testing Gen-AI tools such as content personalisation and subscription chatbots, under existing Consumer Duty rules.
- The Information Commissioner’s Office (ICO) published a four-part consultation series on Gen-AI and data protection, to which it responded in December 2024. It has since announced plans to publish a single set of rules for AI developers and users. Further, the ICO unveiled its AI and Biometrics Strategy in June 2025, setting out plans for a single, streamlined ruleset and a statutory Code of Practice - expected early this year - to guide developers and users on everything from training data compliance to individual rights. For publishers experimenting with Gen-AI tools like content generators or personalisation engines, the ICO’s core message is clear: check your lawful basis for using personal data in training sets now, or risk enforcement as the rules solidify.
- Ofcom has also announced it will enforce the Online Safety Act in relation to Gen-AI tools, citing concerns that AI-related risks disproportionately affect consumers and can cause serious harm to individuals, especially online. Ofcom also plans to accelerate AI adoption across its policy areas, prioritising a safety-first approach. In January 2026, it launched a high-profile investigation into AI chatbot services over the generation of illegal or harmful content. More information on this can be found in the Bird & Bird article here.
In May 2025, the Copyright Licensing Agency (CLA) - which administers collective copying rights for published works in education and commercial contexts - amended its Business Licence to authorise restricted Gen-AI applications within organisations. The amendment was developed in collaboration with industry bodies such as the Publishers’ Licensing Services and the Authors’ Licensing and Collecting Society. Under the revised licence, licensees may input up to 5% of eligible content (or one chapter/article) from approved publishers into enterprise AI systems, exclusively for internal analysis. The licence expressly prohibits use of that content for model training, fine-tuning or Retrieval-Augmented Generation applications. The framework strengthens rights holders’ case to regulators: it evidences workable remuneration channels, challenging the rationale for wide-ranging copyright exceptions in the midst of active UK AI policy deliberations.
Final thoughts
Looking ahead, there is certainly an uphill battle to be fought in how we, in both our professional and personal capacities, successfully engage with and roll out Gen-AI. The UK government must strike a balance between developing the UK as an innovative space that attracts new business and ensuring that rights holders, and ourselves as consumers, are adequately protected. This is a rapidly changing area, and it will no doubt require a strong regulatory and legislative framework with the ability to adapt and evolve to the new challenges that new technology inevitably brings.
