Three-D Issue 32: Online harms white paper – Problems, proposed solutions, and need for evidence

Emma Goodman
LSE

During 2017 and 2018, the LSE hosted an independent Commission of Inquiry into “Truth, Trust and Technology” (T3). The Commission brought together a group of commissioners who heard from more than 100 experts, as well as many members of the public, to examine the problems of mis- and disinformation and potential policy responses.

The Commission’s report concluded that the information crisis is systemic and called for a coordinated, long-term, institutional response. There is huge uncertainty about the scale of the problem, but misinformation and disinformation appear to be growing in volume and adapting to new controls, with both immediate and structural consequences. Other economic, social and political changes clearly contribute to the crisis, but systemic change in the media system as a whole, including the new digital technology companies, is a significant contributing factor.

We argued that public policy should approach the information crisis as a problem of system resilience. Western liberal democracies face many long-term challenges, which have triggered simple populist responses, in part because the new media system favours the simplicity and emotionality of those responses. Negotiating these challenges will test the UK model of deliberative government to the limit.

Principles for policy reform

We need a coordinated approach that aims to address systemic problems and to create conditions that will help sustain democratic processes of deliberation and consensus-building. Multiple competing actors need to collaborate. The T3 report outlines the principles for policy reform that we believe must be respected in any settlement, and which we reiterated in our response to the Online Harms White Paper:

  • Freedom of expression: The right to impart and receive ideas without interference should be preserved. Restrictions should be proportionate, legitimate and prescribed by law. 
  • Subsidiarity: Decisions about content standards should be taken as close as possible to those who are affected. This will often mean at a national level.
  • Transparency: Decisions about filtering, promotion and takedown of content can be censorship, and can undermine trust. They should be taken according to well-known principles and reported publicly.
  • Evidence: Access to improved data for regulators and the public is fundamental.
  • Civil society: Civil society should be involved in reforms of co-regulation and self-regulation. This may mean providing resources for organisations to take part.
  • Ongoing review: Reform will be an iterative process, and potential outcomes are not clear at the outset.
  • Independence: A new institution should be structurally independent from government, including in its appointments and finances. 


The approach of the White Paper: Problems and proposed solutions

Structural problems vs behavioural remedies

The largely reactive approach outlined by the Online Harms White Paper emphasizes behavioural remedies over structural change in the market, with choices about unacceptable content still residing with platforms, and a reliance on transparency reports. There is concern that the online harms legislation will become an engine driving content removals by creating league tables of takedown numbers.

As it stands, the White Paper over-emphasizes technology and under-emphasizes the market and business incentives, paying little attention to the market conditions that facilitate the empowerment of individuals and groups. The White Paper assumes that the regulator can substitute for the market, but that is unlikely to work.

Another way to look at it is that the core problem to address is the misalignment of tech companies’ business models with public values. Society is extremely uncomfortable with the framework of surveillance capitalism that these business models have built, and this is a core element of the problem that must be tackled.

The slippery slope to censorship

Elements of the White Paper proposals bring with them the risk of a significant chilling effect on freedom of expression, and of taking steps down the slippery slope to censorship. Examples include the idea of Home Secretary sign-off on the duty of care, and the concentration of so much discretion in a single regulator, which would be contrary to Council of Europe standards.

The threat of particularly harsh sanctions such as ISP blocking is problematic from a freedom of expression perspective: it is easy to imagine companies deciding simply not to provide these public spaces, and limiting or removing their services.

There is also regulatory uncertainty over who is and is not in scope, a lack of clarity about who owns the codes of conduct, and little detail on what safeguards will be put in place. It is difficult to see how journalistic products would escape being caught by these proposals, and there is an argument that this amounts to bringing in press regulation by the back door.

Another option could be a double duty of care, distinguishing between illegal content and content that is legal but harmful, as Damian Tambini explains in his FLJS policy briefing.

The need for evidence 

Our follow-up work since the publication of the T3 report has emphasized the need for more evidence on the online harms that the government hopes to legislate against, and on their impact on individuals and society.

“Adult Online Hate, Harassment and Abuse: A rapid evidence assessment” – the review conducted by UKCIS’s evidence group for DCMS – revealed a distinct lack of robust research in the area of online harms to adults, even though the review’s findings point to a societal problem in need of immediate attention. More research is urgently needed to ensure that responses to risks of harm are proportionate and appropriate.

While there is a significant volume of evidence on children’s experiences of online harm, there is so little research on adult cyberbullying and trolling, for example, that they are difficult even to define. In general, definitions are blurred both in the research base and in legislation: the legislative landscape is complex and opaque. Much of the research that does exist has been conducted by bodies such as Unison, which focus on bullying in the workplace.

What the evidence does show is that experiencing some form of hate is a normal aspect of online life for many adults, and that there is a link between the nature of harassment and the severity of its impact. Race and ethnicity attract the most hate, for example anti-Semitic and Islamophobic content, and trigger incidents such as attacks contribute to sharp increases in this kind of hate. The impact on victims such as disabled people can be severe, and many have changed their lifestyles in response.

Moving on from harms to the individual to harms to democracy and society, the evidence base for disinformation’s harmful impact is small, and little of it is actionable. The evidence that does exist on the effect of foreign interference, such as bots or political advertising, tends to be US-centric and often focused on Twitter simply because it has the most accessible API, which limits its wider relevance. Research into the impact of disinformation on voter intentions largely began around the 2016 US presidential election and has been fairly backward-looking. There is a good deal of debate over the impact of disinformation on that election and whether it might have altered the result.

What research does show is that, as always, it is very hard to change people’s minds and attitudes. People often have entrenched political viewpoints and to overcome these, online content would have to be spectacularly effective.

When thinking about evidence in this area and its role in policymaking, there are several points that we should keep in mind:

  • There is a tendency to think that we know ‘harm’ when we see it, but do we actually?
  • Current reporting tools are insufficient for detecting harm as they don’t take into account intention or impact.
  • Evidence of impact is much harder to find than evidence of prevalence of exposure. The current evidence around impact is scant, and tends to be solely qualitative. 
  • Some types of evidence are easier to find than others: evidence of physical and emotional harm will be easier to detect than evidence of harm caused by attacks on British values, for example.
  • Some harms will have effects that are only felt over the long term (e.g. increases in inequality).
  • We cannot regulate online harms without also tackling offline harm, and we should remember that the same people who are vulnerable online are vulnerable offline. 
  • We should not equate evidence of concern with evidence of harm. Equally, we shouldn’t let a lack of evidence become an excuse for tech companies to resist acting against serious risks.
  • There is inequality of access to the information needed to create evidence, as much of it is held by tech companies.

What do we need going forward?

  • Much more research into the impact of potential harms. To do this, researchers need more data from tech companies.
  • A more victim-centred approach, handled in a sensitive way.
  • More recognition of the relationship between the online and offline worlds.
  • Systematic evaluations of the impact of existing interventions.
  • Clarity in the legislative landscape.
