The Online Harms Whitepaper: Will it have real teeth, or be a fudge to appease?

The Internet is no longer a distinct space, and notions of ‘online’ and ‘offline’ create a false dichotomy. The Internet is embedded in our lives, and should be understood as such. [1]

By Steven O’Sullivan, MBA, PgDip, CISSP, CRISC, SCCISP, CCSK


Many of you may have heard of the Government’s attempts at managing the harm that can be caused by online activities. Or, in simple terms, the UK government’s landmark plan for cracking down on the spread of illegal content on the internet. This short article attempts to provide you with some context, background and next steps.


In April 2019, the White Paper was released. This set out the Government’s intention to improve protections for users online by imposing a duty of care on online services to moderate a wide spectrum of harmful content and activity on their services, including child sexual abuse material, terrorist content, hate crimes and harassment. Following the release of the White Paper, a consultation ran from 8 April 2019 to 1 July 2019 and received over 2,400 responses from technology companies and large tech giants, think tanks, rights groups, governmental organisations and individuals.

On 21 February 2020, the UK Home Office and Department for Digital, Culture, Media & Sport (DCMS) published the Government’s Initial Consultation Response to the feedback received through the public consultation on its White Paper. The next stage is to build the team that will take this work forward and create an Online Harms Bill.

“The Internet is merely a facilitative tool for activities happening offline, but one that can amplify such activities and make it easier and faster to engage and offend.”[2]


In February, the Government produced a list of 11 online harms, which was designed to be illustrative rather than exhaustive. Criminal harms such as child sexual exploitation and terrorist content will each have their own code of practice, as they are criminal acts.

“The legislation will only apply to companies that provide services or use functionality on their websites which facilitate the sharing of user generated content or user interactions, for example through comments, forums or video sharing,” the consultation paper says. In other words, this is probably going to be about social networks, even if it could conceivably apply to other kinds of businesses such as newspapers’ comment sections or, more to the point, adult websites like PornHub.

Ofcom’s new powers are still undefined, but the regulator itself will not be elevated to the rank of internet super-police. It will be in charge of issuing codes of practice, overseeing companies’ compliance with those codes, and ensuring that companies deal effectively with users’ complaints. Ofcom’s auditing process is not yet clear, but each company will be required to file annual transparency reports on its “reporting processes and moderation practices”.

Age verification and transparency requirements

In-scope service providers will need to implement appropriate age verification technologies to prevent children from being exposed to inappropriate content. In addition, the regulator will be able to require companies to submit annual reports explaining the types of harmful content on their services, as well as information on the effectiveness of each company’s enforcement procedures.

Why is this important?

Companies within scope will be required to have appropriate processes and mechanisms in place, if they do not already. Terms and conditions will also need to be amended to comply with the duty of care, and codes of practice will need to be clear and accessible to all (including children). Ensuring compliance will be important, as Ofcom is likely to have the power to impose fines, disrupt business activities, block services and impose liability on individual members of senior management of non-compliant organisations.

Some concerns

Evidence is vital. The White Paper at times blurs notions of correlation and causation, with an assumption that the online milieu causes specific harms. This is not supported by evidence. In many instances, the Internet is merely a facilitative tool for activities happening offline.

In relation to the harms in scope provided by the White Paper, the range of alleged harms is ambitiously broad. In this context, the distinct focus on terrorist content and child sexual exploitation and abuse is appropriate, but the breadth of the list risks stretching the new regulator’s resources so thin that the most serious online harms cannot be given the regulatory attention they deserve.

While the listed harms in scope are broad, they do not include misogyny, a key omission given the emphasis throughout the White Paper on the specific targeting of women and girls online.

Key takeaways

  • An Online Harms Bill is on its way; its quality, scope and success remain to be seen.
  • Ofcom will be the new online harms regulator. Its responsibilities will include ensuring that online companies have processes and systems in place to fulfil their duty of care and keep the people using their platforms safe. Time will tell how wise a decision this is, and whether Ofcom has the time, resources and teeth to enforce it.
  • The Internet Watch Foundation’s (IWF) call for the government to speed up the timetable for legislation shows both the hope for, and the need for, real, impactful change.



[1] Chih-Ping Chen, ‘Playing with Digital Gender Identity and Cultural Value’, Gender, Place & Culture 23, no. 4 (2 April 2016)
