On 16 February 2023, press regulator Impress launched its new Standards Code. The key changes include guidance on AI and emerging technologies, a lower threshold for what amounts to ‘discrimination’, a tougher approach to tackling misinformation and stricter rules on accountability when the Code is breached.
After a two-year consultation period, Impress published its new Standards Code (the Code). The Code aims to support journalists and protect the public from unethical journalistic reporting and practices. The Chair of Impress, Richard Ayre, noted that the new Code “sets the highest ethical standards for Impress publishers… so the public can confidently engage with the news of today, and tomorrow”. The new Code intends to tackle the future of journalism head-on, accounting for emerging technologies and the new ways in which the public interacts with the media.
Key takeaways
AI and emerging technologies
When using AI to generate and publish news, publishers must now:
- prominently label content that has been recommended to people by automated systems based on their individual behaviour and data;
- provide individuals with easily accessible options to opt out of the automated systems; and
- disclose what data they hold about people and how that data is used to make targeted recommendations.
These transparency requirements go much further than the existing requirements on AI, which centre on the accuracy of content and, in particular, the importance of human editorial oversight over the use of AI to avoid publishing false content (e.g., deepfakes).
Lowering the ‘discrimination’ threshold
The new Code lowers the threshold for a breach where a publisher encourages hatred or abuse against any group based on their characteristics (e.g., age, race, religion, gender identity, sexuality). Previously, the Code stated: “Publishers must not incite hatred against any group… [on any] characteristic that makes that group vulnerable to discrimination.” The Code now states: “Publishers must not encourage hatred or abuse against any group based on their characteristics…”. The shift from ‘incite’ to ‘encourage’, together with the removal of the requirement that the group be vulnerable to discrimination, captures a wider range of content.
The guidance to the Code goes further still – discrimination may now also be indirect, such as where a publisher treats a specific group differently from another. However, publishers will be reassured to know that the new Code “does not concern content that merely hurts feelings; the disputed content must be more than provocative, offensive, hurtful or objectionable.” Instead, the content must be likely to (1) encourage others to target members of that group for abuse, (2) encourage others to commit acts of violence against that group, or (3) encourage others to discriminate against them, or it must (4) include dehumanising language against certain groups. Publishers should note that Impress will interpret its new discrimination provisions with a strong presumption in favour of freedom of expression.
In light of these stricter rules, it will be interesting to see how Impress walks the line between cracking down on discriminatory content and upholding freedom of expression.
Tackling misinformation
Under the rules on ‘accuracy’, publishers are required to take reasonable steps to ensure that their content is accurate through thorough verification of information. The Code applies higher standards of accuracy to more sensitive content – for example, content involving children, people with health conditions or disabilities, and at-risk adults requires publishers to take extra care in verifying the accuracy of information. The Code notes that the spread of misinformation on more sensitive topics could have a severe and long-lasting impact on those involved.
Given the spread of misinformation on the internet, the Code requires publishers to take reasonable steps to verify information obtained from third-party sources such as social media posts and YouTube videos. In particular, publishers should be alert to the use of AI on social media and should exercise editorial oversight to ensure the accuracy of any content produced by an AI system. More onerously, the Code requires publishers to take steps to curb the potential spread of false information, whether deliberate or accidental, by carrying out verification checks against reliable sources.
From more watchful oversight to increased transparency, the new Standards Code aims to paint a much more ethical and technologically progressive picture for the future of journalism – whether it succeeds remains to be seen. Watch this space.