Dr Iulia Cioroianu is a Prize Fellow in the Institute for Policy Research (IPR) at the University of Bath.
As online electoral spending in the UK continues its upward trend, having risen from 0.3% of total campaign spending in 2011 to 42% in 2017, social media platforms have recently announced changes to their advertising policies, ranging from a complete ban on political ads to a decision not to fact-check ads by politicians. Is it time for a government regulatory framework for online electoral content?
The current debate around the political campaign policies implemented by major social media platforms can be structured along five dimensions, which often tend to get conflated, generating confusion about their potential consequences.
The dimensions are a) the kind of content which the platforms consider to be political; b) the type of actor that produces and distributes it (politicians or not); c) the paid versus organic public reach of social media posts; d) the ways in which content is promoted, and the extent to which it is targeted based on individual characteristics; and e) the extent to which it contains false or inaccurate statements and disinformation.
Banning political content
Through its decision to ban almost all paid political content irrespective of the identity of the source, Twitter sidesteps the discussion about false political information and avoids the challenge of taking a position in the debate about political micro-targeting.
The platform defines “political content” as “content that references a candidate, political party, elected or appointed government official, election, referendum, ballot measure, legislation, regulation, directive, or judicial outcome”. Ads that contain references to political issues are banned, but exceptions are in place for news sources which are still allowed to promote articles discussing political issues, as well as what Twitter calls “cause-based advertising” – civic engagement ads and ads educating or raising awareness about issues such as the environment and social equity.
Deciding what types of issues fall within this category is difficult from a conceptual standpoint, and even more difficult to implement. Researchers have expressed concerns that the decision is arbitrary and that correctly classifying potentially large volumes of ads along pre-set categories might pose methodological challenges and result in high error rates.
However, the market for political advertising on Twitter was insignificant in the UK: before the platform announced its decision, political parties and interest groups had spent very little on Twitter advertising in this campaign, and all parties combined spent only £56,500 on the platform during the 2017 elections, compared to a total of £1 million on Google. The decision therefore mainly has indirect political effects.
By restricting government and civic groups’ access to a tool used extensively for public awareness or civic initiative campaigns, and doing so on an issue-by-issue basis, Twitter can shape the political agenda and define the issue space in UK politics, but is not likely to sway this election.
A similar absolute ban imposed by either Facebook or Google would have major direct implications for electoral campaigns, since the two platforms are by far the main avenues of political spending. Campaign spending by major UK parties on Facebook and Google so far exceeds all other types of spending. Such a ban would, however, have highly asymmetric effects, disadvantaging challengers and lesser-known candidates.
Paid vs. organic reach
Online and social media platforms have lowered the costs of political advertising, giving a wider range of candidates the opportunity to run paid campaign ads and to develop and test campaign messages. Banning advertising would therefore advantage politicians who are already popular on a platform, or who are able to mobilise the support of a large network of followers and high-visibility accounts.
For example, it would provide an advantage to Jeremy Corbyn, and to a lesser extent Nigel Farage and Boris Johnson, who all have large numbers of followers and generate high levels of social media engagement. But it would disadvantage Jo Swinson, who has 10-15% of their followers and generates less engagement, especially on Facebook. No social media airtime regulations similar to those in place for radio and TV exist, and if they did, they would be impossible to enforce.
Smaller parties also lose the ability to conduct experiments and test the effects of different ads and messages - such as those for which the Liberal Democrats have received intense criticism. By design, these experiments require randomising the delivery of messages to different groups, which can only be done with paid posts. All parties run such tests, but smaller parties can only afford to do so on social media platforms, where it is much cheaper.
An outright ban on political advertising by major social media platforms would also provide incentives for political actors to boost their network engagement and visibility through the use of automation, bots and fake accounts, by enhancing their efforts to co-opt media and social media influencers and by choosing to promote and frame issues such that they catch the attention of media outlets.
Twitter has generally been considered a platform which politicians use not primarily for voter engagement, but as a tool to try and seize the attention of the media and influential journalists, who often repost political material or use it in their news coverage – a phenomenon called social media amplification. In the absence of other avenues to make their voices heard, lesser-known candidates may resort to other strategies to draw the media’s attention, such as emphasising contentious issues or negative campaigning, further increasing political polarisation.
Another function of online advertising is voter mobilisation. Research has found that online ads can increase turnout, especially among young voters in competitive districts. Restricting online political advertising could therefore make it harder for parties to reach young voters. In the UK context, where young voters are disproportionately more likely to support Labour, this would reduce the chances of a “youthquake” swaying the election.
Deciding what a political actor is
Facebook’s recent policy is not focused on identifying political content, but rather distinguishing between political and non-political actors. The company adopts a clear stand on false information (for which it has been heavily criticised from all sides of the political spectrum) and imposes ad restrictions mainly based on the identity of the source.
Just as Twitter faces ethical and methodological challenges in defining political content, Facebook faces similar challenges in identifying political actors, which it currently defines as: “candidates running for office, current office holders - and, by extension, many of their cabinet appointees - along with political parties and their leaders”. The platform correctly removed government housing investment ads which it deemed to be electoral campaigning in disguise, but has often either failed to identify ads by political organisations trying to pass as news outlets, or mislabelled news outlets and civic organisations as political actors.
On all platforms, campaigners can only run political ads after registering for this specific purpose, but critics have pointed out that, without extensive checks in place, organisations registered to run other types of ads could endorse political candidates and parties or campaign on specific issues. The pro-Remain campaign Best For Britain is not included in Google’s political ads library despite being one of the biggest spenders in the current campaign. The lack of transparency around the source and identity of the entities running both paid and organic-reach campaigns on major platforms is a broad and salient enough issue that it should not be left to social media platforms; it can only be addressed through clear electoral regulation.
Imposing targeting restrictions
In contrast to both Facebook and Twitter, Google uses its late-mover advantage and takes a clear stand on the issues that researchers have identified as most salient – false content and micro-targeting.
Google announced that it will remove political ads containing false information and has decided to limit campaigners’ ability to micro-target voters. Its policy contains elements across the other identified dimensions of the debate, but it is somewhat less transparent overall and provides fewer details about the actual enforcement mechanism.
While Facebook targeting is based mainly on interests and user characteristics, Google targeting is based primarily on search keywords. Google’s micro-targeting restrictions based on user attributes are clear – advertisers will only be allowed to target ads based on age, gender and postcode (which some say is granular enough) – but the rules for targeting users based on the content they consume are much less clearly formulated. Google does not lay out the details of its keyword targeting, nor does it disclose the rules and algorithms it applies to keyword searches in relation to political content.
Media fragmentation is not a new phenomenon, and it is not specific to social media, having started with the increased availability of local cable TV and radio channels and niche programmes three decades ago. It has, however, reached extreme levels in recent years through the development of micro-targeting and personalisation methods online and on social media platforms.
Research suggests that these micro-targeting functions are not currently being used extensively by political campaigners - targeting still relies mainly on broad user characteristics - but there is no guarantee that this won’t change as the share of spending by political parties on online campaigns increases and the technical abilities of campaign teams improve.
Defining and handling false content
Acknowledging challenges faced by all other platforms, Google admitted that “it can be extremely difficult (or even impossible) for humans or technology to determine the veracity of, or intent behind, a given piece of content” and that any solution needs to be compatible with automation.
Like Facebook, the company tries to “make quality count in the ranking systems, counteract malicious actors, and give users more context”. Twitter on the other hand declared that false information can be fought through organic methods – by being an open platform, other users can always call out disinformation attempts and state opposing points of view.
Facebook took a radical position with respect to false content, declaring that it will continue to host political ads across all its platforms and will not fact-check the statements of political actors - a decision based on stated freedom of speech considerations. Combined with Facebook’s current targeting provisions - which allow advertisers to match user information against lists uploaded by political parties or candidates, and to reach other users with similar characteristics - this policy could lead to false political ads being narrowly targeted at specific groups or individuals, such as undecided voters who can sway the results of an election, or older voters who are most susceptible to believing and sharing fake news and content.
Should companies be taking these decisions?
Motivated mainly by reputational concerns, social media platforms have in general shown some level of responsiveness to public criticism and have increased their efforts to develop content handling policies that can be endorsed by the majority.
Most of them have increased their efforts to identify and eliminate fake accounts and prevent unauthorised automation attempts, and some provide accounts and reports of their success. However, none of them is transparent about how often it fails to identify or remove disinformation or malicious automated accounts, and none provides the means through which its work and decisions could be independently evaluated by third parties.
This lack of transparency is the main concern for the current UK campaign, as well as the upcoming US elections. Facebook’s and Google’s efforts to formulate policies around political content and political influence in elections have intensified following the 2018 Cambridge Analytica scandal and the allegations of foreign involvement in electoral campaigns across the world. The companies were faced with demands to increase individual privacy while also increasing transparency. Transparency was addressed by providing access to ads libraries and releasing several transparency reports. Privacy was addressed mainly through newly introduced, extensive data access restrictions, some of which currently make it impossible for academic researchers to independently evaluate political activity on the platforms - ironically reducing overall transparency.
Most of the knowledge we currently have about bots, automation, fake news, echo chambers and filter bubbles comes from research on social media data which used to be publicly available. Both Facebook and Google provide access to political ad libraries, but the information available is limited and covers only one small part of the political content on the platforms – paid ads.
Access to data on other types of political content is increasingly restricted. Facebook has been taking some steps towards reopening some of its data to independent evaluation through the Social Science One initiative, which aims to provide secure access to a selected group of researchers, but has been criticised for not providing enough access and for moving too slowly for the new policies to be relevant in the current electoral campaign in the UK.
At the same time, social media companies claim that their efforts have not been matched by government efforts in developing coherent and comprehensive regulatory frameworks and have repeatedly called for clear government regulation of online electoral campaigns.
Faced with a dynamic online environment and increasingly complex communication technologies, governments seem to be stuck in an endless cycle of consultations and recommendations which have not yet resulted in any concrete legislation. A recent comprehensive analysis showed several fruitless initiatives in the UK over the past few years – leaving social media companies as the de facto decision makers when it comes to electoral campaigns.
The potential consequences outlined above, and their disproportionate magnitude for smaller parties, challengers and lesser-known candidates, suggest an urgent need to shift decision-making power from social media companies back to democratically elected representatives.
This blog is part of the IPR 'General Election 2019' blog series. Visit the IPR blog to read more.