Online Harms Bill set to make tech companies responsible for online content

Written by Jen McKeeman

21 July 2021

 

‘Our kids are growing up on a digital playground and no one is on recess duty.’

Kevin Honeycutt, Educational Speaker

There is no doubt that without the internet, life during the Coronavirus pandemic would have been much more difficult. Providing access to online shopping, the ability to work from home and a virtual lifeline to loved ones, the internet allowed society to function in novel and essential ways.

According to the UK’s Office for National Statistics (ONS), 46.6 million people in Great Britain used the internet daily in 2020, and 96% of UK households had internet access. However, these high levels of internet usage also provided increased opportunities for fraudsters, with Finder reporting that online shopping fraud in the UK rose by 37% in the first half of 2020.

As we look to use the internet more than ever, we need to be aware of the harms to which it can expose its users. In the wrong hands, it can be used to carry out criminal activity, spread harmful content or abuse other users. Two-thirds of adults in the UK are concerned about harmful content online, and almost half report having seen hateful content in the past year.

With this in mind, the UK government is looking to create a new regulatory framework for online safety to clearly define companies’ responsibilities in keeping UK users, particularly children and vulnerable users, safer online.

 

What is the Online Harms Bill?

In May 2021, a draft of the Online Harms Bill, called the Online Safety Bill, was published by the UK government. The Bill’s purpose is to introduce new laws to help keep internet users safe when accessing online content. The scope of the Bill includes the following:

  • Companies are required to tackle illegal activity that threatens children’s safety online and must have measures in place to prevent children from experiencing cyberbullying and accessing inappropriate material.
  • Companies must have clear and easy ways for users to report illegal material online and will have to act quickly to remove it. 

According to Caroline Dinenage, Minister for Digital and Culture:

“The new legislation will make tech companies accountable for protecting children and other vulnerable users on their platforms. They will be expected to enforce their terms and conditions, making it easier for users to report harmful content and get it taken down.”

Historically, platforms have been protected from liability for illegal user-generated content they host but are not aware of. They have only been liable if their technology has identified such content, or if users have made them aware of its existence, and they have then failed to remove it swiftly. This provides little incentive for companies to proactively police user-generated content and protect the safety of their users online.

The proposed regulatory framework outlined in the Bill increases the responsibilities of companies in scope operating online and takes a more thorough approach to keeping internet users safe.

 

Who needs to comply?

The Bill proposes that the new regulatory framework should apply to:

  • User-to-user services: Internet services that allow users to share user-generated content or interact with each other online - e.g. social media platforms, discussion forums, messaging services and file hosting sites
  • Search services: Services that allow users to search all or some parts of the internet

 

Does the company have to be UK-based?

No, the company does not have to be physically located in the UK to be affected; if it offers services to UK users but is located elsewhere, it will still need to comply.

 

When will the Bill be written into law?

The Bill is to be reviewed by a joint committee of MPs before a final version is formally introduced to Parliament for debate later this year.

 

How will the Bill be enforced?

Ofcom will be empowered to enforce the new online safety regulations. Ofcom currently regulates the radio, TV and video-on-demand sectors in the UK. It will require annual transparency reports from companies in scope, indicating the prevalence of harmful content on their platforms and the measures being taken to tackle it, and will publish these reports online for users and parents to view. Ofcom will also have the power to request additional information, including details of how algorithms operate and of the technological tools used to proactively find, flag, block or remove illegal or harmful content.

Safeguards will be in place to protect fundamental rights, making sure decisions to block content or delete accounts are warranted and providing users with an appropriate route of appeal. Creating a culture of trust, transparency and accountability will be central to the new framework.

 

What are the consequences for non-compliance?

Ofcom will have a range of sanctions available to it, as follows:

  • imposing fines of up to £18 million or 10% of a provider’s annual global revenue, whichever is higher
  • seeking a court order to disrupt, or prevent access to, the activities of non-compliant providers 
  • pursuing criminal action against named senior managers whose companies do not comply with Ofcom’s requests for information (this sanction, however, will not take effect immediately)

 

What content is considered harmful within the scope of the Bill?

The Bill considers degrees of harmful content, and Ofcom will regulate according to the principle of proportionality, meaning companies will have to take action in proportion to the severity of the harm in question. Ofcom will assess companies’ actions in relation to the age of their users and the size and resources of the company. More stringent requirements will therefore be imposed for clearly illegal harms than for content which may be legal but harmful, depending on context.

 

Harms with a clear definition:

  • Child sexual exploitation and abuse
  • Terrorist content and activity
  • Organised immigration crime
  • Modern slavery
  • Extreme pornography
  • Revenge pornography
  • Harassment and cyberstalking
  • Hate crime
  • Encouraging or assisting suicide
  • Incitement of violence
  • Sale of illegal goods/services, such as drugs and weapons (on the open internet)
  • Content illegally uploaded from prisons
  • Sexting of indecent images by under 18s (creating, possessing, copying or distributing indecent or sexual images of children and young people under the age of 18)

Harms with a less clear definition:

  • Cyberbullying and trolling
  • Extremist content and activity
  • Coercive behaviour
  • Intimidation
  • Disinformation
  • Violent content
  • Advocacy of self-harm
  • Promotion of Female Genital Mutilation (FGM)

Underage exposure to legal content:

  • Children accessing pornography
  • Children accessing inappropriate material (under 13s using social media and under 18s using dating apps; excessive screen time)

 

Regulated service providers will be subject to a range of statutory duties of care to protect users from illegal user-generated content. The duty of care refers to requirements on providers to implement specific processes and to moderate specific types of content.

There will be additional safeguarding requirements regarding harmful content for services ‘likely to be accessed’ by children, and additional transparency obligations for services designated as Category 1 (the largest and most popular services, determined by number of users, functionality and risk of harm).

 

User-generated fraud is also to be included

The Bill includes measures to address user-generated fraud, such as dating scams and fake investment opportunities posted by users in online groups (e.g. on Facebook) or sent via messaging apps (e.g. Snapchat).

According to the National Fraud Intelligence Bureau, romance fraud increased by 18% between 2019 and 2020, with losses totalling more than £60 million. Fraudsters seek money or personal information via online dating sites or apps by fooling victims into thinking they are building a relationship with them, only to later steal their money or identity.

Fraud via emails, cloned websites or advertising will not be in scope, since the Bill focuses specifically on harms arising from user-generated content. Later this year, the Home Office is due to publish a Fraud Action Plan considering additional legislative and non-legislative solutions to tackle these challenges.

 

PM urges tech bosses to ‘up their game’

Following the recent online racist abuse of black England footballers at the Euros, Boris Johnson met with the heads of social media companies to highlight the legislative measures that would tackle such abuse. He told Parliament:

"What we're doing is today taking practical steps to ensure that the Football Banning Order regime is changed so that if you were guilty ... of racist abuse online of footballers then you will not be going to the match, no ifs, no buts, no exemptions and no excuses,"

"Last night I met representatives of Facebook, of Twitter, of TikTok, of Snapchat, of Instagram and I made it absolutely clear to them that we will legislate to address this problem in the Online Harms Bill, and unless they get hate and racism off their platforms, they will face fines amounting to 10% of their global revenues."

Changes to football banning orders, introduced in 1989 to stop repeat offenders from causing trouble at and around football matches, are due to follow a 12-week consultation before the introduction of the Online Harms Bill later this year.

While the government’s ambition is to make ‘the UK the world’s safest place to go online’, it also acknowledges that working closely with international partners will be an important part of addressing this shared global challenge: making the internet safer for all users, especially children and vulnerable users, while protecting freedom of expression online.

Read more about our Content Moderation Solutions

Photo by Thomas Park