Speaking another language: Agreeing and enforcing principles for global data governance

Posted in: Culture and policy, Data, politics and policy, Evidence and policymaking

Professor James Davenport is Hebron and Medlock Professor of Information Technology in the University of Bath's Department of Computer Science.

I spent the period February–June 2017 (and a few days either side) in New York as a Fulbright CyberSecurity Scholar. My primary aim was to investigate the security of credit card information, and particularly the difference between physical (card-holder present) and online transactions. However, a secondary aim, as described to Fulbright, was “…to understand both U.S. public opinion (inasmuch as this can be defined) and U.S. policy and attitudes towards cyber security issues”. It became clear early on that, as I had half-suspected, the differences of detail between the US and UK policy contexts were rooted in a fundamental difference of attitudes towards privacy and personal data. The New York University Law School has a Privacy Research Group, which I joined, and many of the inferences here (for which I alone am responsible) come from my experiences there.

“We really have everything in common with America nowadays except, of course, language” – so wrote Oscar Wilde in The Canterville Ghost (1887). This struck me forcibly from my first day in New York: when I said that I was a “CyberSecurity Scholar”, the general reaction was either “are you sure you’re allowed to say that?” or “shouldn’t you be at Fort Meade?” – Fort Meade being the headquarters of the NSA. Indeed, Americans seem to hear, or at least understand, “National Security” when we say “Security”: whereas the University of Bath, like many British Universities, has a “Security” Department to lock up the buildings, check identity and so on, New York University, like many urban American ones, has a “Public Safety” Department. I was particularly struck by the reception of a talk by a French engineer entitled “The Privacy/Security Tradeoff in the Era of Surveillance Capitalism and Targeting-and-convincing Infrastructures”, which essentially failed because the audience did not understand that the speaker was talking about personal security rather than national security.

Deeper differences

However, Oscar Wilde was wrong: there are deeper differences. This came sharply to light when I discussed with some of my New York colleagues the controversy over “smart toys”, in particular CloudPets. As came out shortly after these went on sale, there were (at least) three security issues:

  • The instructional video suggested setting the password to “qwe”, a password even weaker than “qwerty” (see the sketch after this list);
  • The Bluetooth transmission used was unencrypted, or at best weakly encrypted; and
  • The recordings made of the children’s voices were stored in an unencrypted database publicly readable on “the cloud” (the reporting engineer carefully did not disclose the location).
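
To give a rough sense of why the first point matters: a three-letter lowercase password like “qwe” can be recovered by exhaustive search in a fraction of a second. The Python sketch below is purely illustrative – CloudPets’ actual authentication code was never published – but the arithmetic applies to any such password:

    import itertools
    import string
    import time

    def brute_force(target, max_len=3):
        """Try every lowercase password of up to max_len characters."""
        for length in range(1, max_len + 1):
            for chars in itertools.product(string.ascii_lowercase, repeat=length):
                guess = "".join(chars)
                if guess == target:
                    return guess
        return None

    start = time.perf_counter()
    found = brute_force("qwe")
    # At most 26 + 26**2 + 26**3 = 18,278 guesses -- milliseconds on any laptop.
    print(f"Recovered {found!r} in {time.perf_counter() - start:.4f}s")

Rate-limiting can slow an online attacker, but once a password database leaks – as CloudPets’ reportedly did – passwords this short fall instantly to offline search.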

I shared with them the story that the Bundesnetzagentur had ordered the destruction of Cayla dolls (a very similar product) in Germany, as they were concealed recording devices. This caused complete incomprehension among the US scholars: how can a device be illegal? They could conceive that it might under certain circumstances be illegal (probably under the Children's Online Privacy Protection Act) to operate the device, but not that a device itself could be illegal.

I was amused to note that, some four months later, the Internet Crime Complaint Center of the FBI issued a Public Service Announcement warning that “The collection of a child’s personal information combined with a toy’s ability to connect to the Internet or other devices raises concerns for privacy and physical safety”. The announcement gives a list of ten things that purchasers should check for, but some, such as “Research the toy’s Internet and device connection security measures”, are impracticable for all but the expert – and even I would have no idea how to “Research where user data is stored”.

It turns out that the interception of communications is also treated very differently on the other side of the Atlantic. In many US states, it is legal to record a conversation as long as one party consents (without that party informing the others) – and since a telephone call or internet connection can be unnoticeably diverted to a different state, this means that it is essentially always possible for telephone calls to be legally recorded. This leads to a situation in which indiscriminate (as Europeans would see it) data collection by anyone involved in the communication is essentially assumed in the USA.

This had in fact shown up much earlier, in the (unsuccessful) confirmation hearings for Robert Bork as a Supreme Court Justice in 1987. During debate over his nomination, Bork’s video rental history was leaked to the press. The leaker justified accessing the list on the ground that Bork himself had stated that Americans only had such privacy rights as were afforded them by direct legislation. The incident led to the enactment of the 1988 Video Privacy Protection Act, which makes any ‘video tape service provider’ that discloses rental information outside the ordinary course of business liable for actual damages of at least $2,500. This Act has been held to apply to Netflix and other similar companies, and Netflix consequently asked US customers to lobby for a change in the law – which now permits sharing with permission (which may well be implicit, of the “by using this service you consent…” variety, in the US).

This was, of course, an extremely specific piece of privacy legislation. The Obama-era Federal Communications Commission (FCC) had imposed regulations to prevent Internet service providers (ISPs) from selling people’s browsing history – but an early act of the 2017 Congress was to repeal these regulations before they came into effect, and to introduce legislation to prevent the FCC from ever imposing such rules again. The press was full of stories about crowd-funded attempts to purchase the browsing history of congressmen (as well as cartoons such as this one), but a more substantial impact was the rise in the share prices of Virtual Private Network companies, which allow users to conceal their activities online.

GDPR

Meanwhile, the European Union has enacted the General Data Protection Regulation (GDPR), which comes into force in May 2018. This will make selling the browsing history of an EU citizen without explicit consent illegal (if it is not illegal already) – and will make it illegal worldwide. This is a problem for US ISPs, who – along with practically any other ISP – have no algorithmic way of telling whether a customer is an EU citizen or not. One might think that it would be acceptable to sell the data “anonymously”, but several studies have shown that it is practically impossible to publish “big data” anonymously – there are nearly always enough hints to identify individuals. See, for example, this rather interesting Gawker story on how anonymised data on the activity of taxi cabs in New York was eventually traced back to individual celebrities.
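
The technical point behind the taxi story is worth spelling out. In the released data, the taxis’ medallion numbers were reportedly replaced by their MD5 hashes – but because the space of valid medallion numbers is small and its format public, every possible hash can be precomputed and the “anonymisation” reversed. A minimal sketch follows; the medallion format below is simplified for illustration (the real New York formats are only somewhat larger):

    import hashlib
    import itertools
    import string

    # Simplified medallion format for illustration: digit, letter, digit, digit
    # (e.g. "7B82"). Only 10 * 26 * 10 * 10 = 26,000 possibilities.
    def all_medallions():
        for d1, letter, d2, d3 in itertools.product(
            string.digits, string.ascii_uppercase, string.digits, string.digits
        ):
            yield f"{d1}{letter}{d2}{d3}"

    # Precompute hash -> medallion for the whole space.
    lookup = {hashlib.md5(m.encode()).hexdigest(): m for m in all_medallions()}

    # An "anonymised" field as it might appear in a released record...
    leaked = hashlib.md5(b"7B82").hexdigest()
    print(lookup[leaked])  # -> 7B82: the anonymisation is undone

Even a real-world space of tens of millions of values takes only minutes to enumerate on commodity hardware; hashing only resembles anonymisation when the identifier is drawn from a genuinely large space, or is replaced by a random token with no relationship to the original.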

Is the browsing history of an EU citizen in the USA protected? Pragmatically, that would seem to depend on whether the ISP is afraid of GDPR or not. The EU has been able to enforce judgements against Google and others, but these companies are truly global, and are worried by EU enforcement; a purely US-based ISP probably would not be. Surveys suggest that fewer than 50% of US companies are aware of GDPR, and pragmatically many companies that are purely US-based will probably get away with ignoring the Regulation. Conversely, those US-based companies that are truly global – the Googles and Facebooks of this world – have little choice but to embed GDPR, as this NY Times article points out. However, I would question the legality of Facebook’s decision “not to roll out some products in Europe”. It is not clear that this is technically feasible (because of the VPNs mentioned above), and it is not clear that it is legal, given that an EU citizen could access those products while living outside the EU.

More generally, the GDPR states [Recital 50] that “The processing of personal data for purposes other than those for which the personal data were initially collected should be allowed only where the processing is compatible with the purposes for which the personal data were initially collected”. This connection between “data” and “purpose” is fundamentally alien, not just to American law, but to American thinking, whereas it has been present in European law, and indeed in European thinking, since the first Data Protection Directive in 1995 – though GDPR marks a distinct strengthening of the connection. Indeed, while much of the “hype” over GDPR has been about the fines payable for data breaches, much of the work involved in making IT systems GDPR-ready has been in tracking the purpose behind data and, where necessary, the consent.
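
Concretely, “tracking the purpose behind data” tends to mean attaching purpose and consent metadata to every record, and refusing any processing whose purpose does not match. The sketch below is a hypothetical illustration of that pattern – the names are mine, not anything prescribed by the Regulation:

    from dataclasses import dataclass, field

    class PurposeViolation(Exception):
        """Raised when data is used for a purpose not consented to."""

    @dataclass
    class PersonalData:
        subject: str
        value: str
        purposes: set = field(default_factory=set)  # consented-to purposes

    def process(record, purpose):
        # Recital 50: further processing only for compatible purposes.
        if purpose not in record.purposes:
            raise PurposeViolation(f"{purpose!r} not consented to by {record.subject}")
        return record.value

    email = PersonalData("alice", "alice@example.com", {"order-fulfilment"})
    process(email, "order-fulfilment")   # allowed
    process(email, "marketing")          # raises PurposeViolation

Real systems must also record when and how consent was given, and propagate these tags through every derived dataset – which is why retrofitting this onto existing systems accounts for so much of the GDPR work.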

The future

So what about a post-Brexit UK? Firstly, GDPR enters into force while the UK is still a full member of the EU. Secondly, and perhaps more importantly, GDPR (and its predecessors) has firmly entered the thinking of IT professionals, and the implementation of IT systems. Whether, and to what extent, EU law such as GDPR continues to be enforced by the UK courts after Brexit is a matter of current debate – but it is clear that EU courts will wish to enforce GDPR, and far fewer UK companies than US companies will be able to afford to ignore EU penalties.

Hence it seems extremely likely that UK society, whether or not in the EU, will continue to behave very differently from US society – and these cultural differences will continue to prevent effective regulation across borders. As the Anderson report (p. 33) noted, “[UK] surveys show that the government is trusted more than commercial companies”. In the US, almost all mistrust is directed at the government – but that is simply because the public’s expectations of companies are (by UK/EU standards) very low: whether or not Americans agree in principle with Bork that they “only have such privacy rights as afforded them by direct legislation”, in practice they accept that companies will behave in accordance with this principle.

In order to be effective, privacy rules must be enforceable across borders – if they are not adopted universally then, like the protection some US states afford against having your phone calls recorded, they are effectively not adopted at all. An interesting parallel is the [EU] “right to be forgotten”, which can lead to Google etc. returning different results in the EU than elsewhere – which in turn can be circumvented by using a VPN to appear to be browsing from outside the EU. It is too soon to say whether the difference in attitudes between US and EU/UK citizens will be enough to prevent GDPR from being realised in practice; but if it is, then the future of privacy and data use policy – which must ultimately involve reconciling the attitudes of citizens in states as diverse as China, Nigeria, Paraguay and the Philippines – will be challenging indeed.

This blog post is part of the Future Policy Challenges series, a new series of IPR Blogs with a focus on science, technology and innovation that highlights some of the crucial issues policymakers may face in the coming years. Subscribe to the IPR blog to get the latest blog posts, or, to keep up to date with our activities, connect with us on Twitter, Facebook or LinkedIn. You can also follow the hashtag #FuturePolicyChallenges for more on this series.

