Online Platforms - A Look at the Regulatory Landscape

Written by Jen McKeeman

06 July 2021

“The price of greatness is responsibility.”

Winston Churchill.

Online platforms have enjoyed considerable success over the past few years, with growth accelerating globally as a result of the coronavirus pandemic. Mercado Libre, Latin America's online marketplace, for example, sold twice as many products per day in the second quarter of 2020 as in the same period the previous year.

The EU Commission estimates there are now more than 10,000 digital platforms operating in the EU, with the most successful scaling up rapidly. The draw of digital platforms is clear when you consider the relatively low marginal costs of expansion once the initial setup costs have been incurred. The ability to grow more quickly and cheaply than businesses selling physical products is enticing. And historically, unlike bricks-and-mortar stores, platforms have not been held to account for the products sold on their portals.

Over time, however, this has infuriated consumers who fall victim to fraudulent activity, and frustrated brands, which feel platforms offer them little control and take little accountability. The effects have been increasingly felt over the past year, with online shopping fraud and scams rising by 37% in the first half of 2020.

The tide, however, is now turning, with greater efforts being made to safeguard users online. The advent of the EU Digital Services Act (DSA) and the UK Online Safety Bill, combined with the proposed SHOP SAFE Act in the US, marks a new era in digital regulation.

What do we mean by an online platform? 

The term 'online platform' covers a wide range of digital services, including e-commerce marketplaces, service platforms, peer-to-peer platforms, and online communities and forums.

What do the new regulations mean for platforms?

Online platforms have been a target for abusive and fraudulent user behaviour for years. Historically, some platform owners may have been unaware of the extent of the problem; others were perhaps wise to it but overwhelmed by the challenges they faced.

The regulatory changes, in the form of the DSA and the Online Safety Bill, mean it is no longer possible, or acceptable, to ignore counterfeit products, unauthorised goods, fake reviews, hate speech and scams on your platform.

The most significant change relates to how platforms respond to illegal content and fraudulent activities. The DSA proposes to define illegal content as any information that is illegal under Union or Member State law. There is also growing pressure for the Act to cover harmful content, such as fake news.

Platforms will need to define clearly what constitutes harmful content in their acceptable use policies. Equally, while the DSA aims to crack down on illegal content online, it also strives to protect lawful content, safeguarding fundamental rights, including platform users' right to freedom of expression online.

In Germany, the Network Enforcement Act (2017), commonly known as NetzDG, allows fines of up to €50M where large social media networks persistently fail to take down illegal content flagged by individuals, including hate speech. Content considered to be 'manifestly unlawful' must be removed within 24 hours, while other 'unlawful content' must be taken down within seven days. When drafted, the legislation was only intended to regulate the largest social media platforms, such as Facebook and YouTube, but the Act's wording means it may also be applied to online review platforms that have high numbers of German users. The NetzDG states that the process to handle content complaints must work quickly, effectively and transparently. The introduction of the DSA is also likely to have an impact on how NetzDG operates.
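To make those timing rules concrete, here is a minimal sketch, in Python and with hypothetical names, of how a platform might compute the takedown deadline for a flagged item under NetzDG:

```python
from datetime import datetime, timedelta
from enum import Enum
from typing import Optional

class Classification(Enum):
    MANIFESTLY_UNLAWFUL = "manifestly unlawful"  # obvious at a glance, e.g. clear hate speech
    UNLAWFUL = "unlawful"                        # requires closer legal review
    LAWFUL = "lawful"

# NetzDG timing rules: 24 hours for manifestly unlawful content,
# seven days for other unlawful content.
REMOVAL_WINDOWS = {
    Classification.MANIFESTLY_UNLAWFUL: timedelta(hours=24),
    Classification.UNLAWFUL: timedelta(days=7),
}

def takedown_deadline(flagged_at: datetime,
                      classification: Classification) -> Optional[datetime]:
    """Latest time by which flagged content must be removed,
    or None if no removal is required."""
    window = REMOVAL_WINDOWS.get(classification)
    return flagged_at + window if window is not None else None
```

The deadline arithmetic is the easy part; the legal classification feeding it is where platforms bear the real burden, and the Act expects that judgement to be made at speed.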


Transparency is key

In the case of fake reviews in the EU and UK, platforms will need to provide a clear process for users to report fake content and respond within an appropriate timeframe. Platforms will also have to be transparent about the approach they take to removing such content.
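As an illustration of what such a process might look like internally, the following is a minimal sketch assuming a hypothetical ContentReport record and an invented 14-day response window (the rules speak only of an 'appropriate timeframe' and fix no number of days):

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

# Assumed response window for illustration only; not set by the regulations.
RESPONSE_WINDOW = timedelta(days=14)

@dataclass
class ContentReport:
    """A user report of suspected fake content, tracked end to end
    so the outcome can be published transparently."""
    content_id: str
    reason: str
    reported_at: datetime
    status: str = "received"          # received -> under_review -> actioned / rejected
    resolution: Optional[str] = None  # e.g. "removed" or "no action", kept for reporting

    @property
    def respond_by(self) -> datetime:
        """Deadline for the platform's response to the reporting user."""
        return self.reported_at + RESPONSE_WINDOW

    def resolve(self, outcome: str) -> None:
        """Record the final decision so it can be audited and published."""
        self.status = "actioned" if outcome == "removed" else "rejected"
        self.resolution = outcome
```

Recording the outcome of every report in this way is what makes it possible to publish the kind of transparency figures discussed next.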

Google was the first internet firm to publish a transparency report in 2010, followed in subsequent years by others including LinkedIn, Microsoft and Twitter. In 2013, Facebook launched its first transparency report and since then this has become a much more common practice, with many other platforms following suit, including TikTok in 2019.

Digital services companies have explained in consultations that the measures they need to take to tackle illegal or harmful content vary considerably depending on the services they provide, their business model and the type of illegal or harmful content in question. This forces a different relationship between platform owners and sellers. The DSA will require a more active approach to content, specifically content moderation and product listings, and to how interactions take place on platforms.


IPR infringements - renewed impetus to tackle counterfeit

With Intellectual Property Rights (IPR) infringements illegal in the EU, platforms will have to clamp down on counterfeits. The scale of this task can seem overwhelming - particularly as platforms often don't hold inventory for the third-party products they sell. According to a study by Ghost Data in 2019, 20% of all fashion product posts on Instagram were for counterfeit goods.


Platforms, large and small, will need to be prepared for the changes ahead

Although large platforms that enable online interactions will likely feel the effects of the regulations the most, smaller businesses will also feel the ripple effects, even though many mistakenly assume they will not be affected.

Key to being prepared is understanding what's expected and the scale of the challenge, and the sooner platforms start this process, the better.

Preparing for GDPR compliance took businesses close to two years. The DSA will require an even more active approach to content moderation and monitoring, yet platforms have less than a year to get ready. With reports that fines are still being levied on firms failing to comply with GDPR requirements, the new rules brought in by the DSA are likely to have further-reaching implications for businesses wishing to continue operating within the law.


What is the Online Safety Bill?

The draft Bill marks a milestone in the UK Government's efforts to make the internet safe. It aims to safeguard young people, clamp down on racist abuse online and tackle financial fraud, protecting internet users from fake investment scams and other fraudulent activity. It also seeks to bring more accountability to online platform owners: for the first time in the UK, online platforms will be responsible for tackling fraudulent user-generated content, such as romance scams and fake investment opportunities posted in social media groups.

The Bill looks set to introduce a statutory duty of care on online services to protect users from 'online harms', such as illegal or harmful conduct, with the duty enforced by Ofcom. In a similar vein to the requirements of the DSA, intermediaries, including platforms, will need to ensure harmful content is 'dealt with rapidly' or face fines for non-compliance. Platforms will need systems and processes in place to receive and respond to users' complaints within an appropriate timeframe, and to act quickly to remove illegal content. Businesses will also have to define publicly which 'harmful' content and behaviours are and are not acceptable on their platforms, and enforce these statements consistently and transparently.
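To illustrate the 'define publicly, enforce consistently' requirement, one possible approach, sketched here with invented categories, is to keep the acceptable-use policy as structured data so that the published policy page and the moderation pipeline share a single source of truth:

```python
# Hypothetical acceptable-use policy kept as structured data, so the same
# source drives both the public policy page and moderation decisions.
ACCEPTABLE_USE_POLICY = {
    "hate_speech":       {"allowed": False, "summary": "Attacks on protected groups"},
    "investment_scams":  {"allowed": False, "summary": "Fake or fraudulent investment offers"},
    "romance_scams":     {"allowed": False, "summary": "Deceptive relationship-based fraud"},
    "political_opinion": {"allowed": True,  "summary": "Lawful expression is protected"},
}

def is_permitted(category: str) -> bool:
    """Moderation consults the same policy users can read, which is
    what makes enforcement consistent and transparent."""
    entry = ACCEPTABLE_USE_POLICY.get(category)
    # Unlisted categories default to permitted: the duty is to remove
    # what the published policy forbids, not to over-remove lawful content.
    return entry is None or entry["allowed"]
```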

Home Secretary Priti Patel has said:

“Ruthless criminals who defraud millions of people, and sick individuals who exploit the most vulnerable in our society, cannot be allowed to operate unimpeded, and we are unapologetic in going after them. It’s time for tech companies to be held to account and to protect the British people from harm. If they fail to do so, they will face penalties.”

Like the Digital Services Act, the Bill also seeks to preserve freedom of expression: businesses will have to conduct and publish up-to-date assessments showing how they avoid overly restrictive measures or excessive content removal while meeting the new online safety standards.


What if my business doesn’t comply with the Online Safety Bill requirements?

Should a business be found to be failing in the new duty of care, Ofcom will have the power to fine it up to £18 million or 10% of its annual global turnover, whichever is higher. Ofcom will also have the power to block access to sites. It's clear that regulation is going further than ever to protect consumers and their data, given the accelerated move to digital we have experienced over the past 18 months.
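As a worked example of the 'whichever is higher' rule (the turnover figure below is invented purely for illustration):

```python
def max_osb_fine(annual_global_turnover_gbp: float) -> float:
    """Maximum Online Safety Bill penalty: the greater of GBP 18 million
    or 10% of annual global turnover."""
    return max(18_000_000, 0.10 * annual_global_turnover_gbp)

# A hypothetical platform turning over GBP 500M globally could face a fine
# of up to GBP 50M, since 10% of turnover exceeds the GBP 18M floor.
print(f"£{max_osb_fine(500_000_000):,.0f}")  # £50,000,000
```

For any platform with turnover above £180 million, the percentage test, not the fixed £18 million figure, sets the ceiling.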


Pasabi can help you prepare for the changes ahead

The good news is that at Pasabi we have proven experience of helping platforms and brands take on these challenges and mitigate future risk. We provide the technology to help online platforms tackle counterfeits, fake reviews, illegal content, scams and unauthorised sellers. Implementation is swift and designed to complement existing tools, processes and workflows. We provide the evidence needed for account or post removals, cease and desists, takedowns and offline legal action. Pasabi's AI solution complements your teams' efforts at scale, uncovering fraudulent behaviours on your platform and giving your team the ability to stop them. We also provide greater transparency and evidence around the action you're taking in the fight against fakes, helping you stay proactive and comply with the new regulations.

If you would like to look at the impact of the Digital Services Act in more detail, why not download our guide?
