IPR Blog

Expert analysis, debates and comments on topical policy-relevant issues

Topic: technology

The World in 2050 and Beyond: Part 3 - Science and Policy

📥  education, future, policymaking, research, technology

Lord Rees of Ludlow is Astronomer Royal at the University of Cambridge's Institute of Astronomy, and founder of the Centre for the Study of Existential Risk. This blog post, the third in a three-part series, is based on a lecture he gave at the IPR on 9 February. Read the first part here, and the second part here.

Even in the 'concertina-ed' timeline that astronomers envisage – extending billions of years into the future, as well as into the past – this century may be a defining era. The century when humans jump-start the transition to electronic (and potentially immortal) entities that eventually spread their influence far beyond the Earth, and far transcend our limitations. Or – to take a darker view – the century where our follies could foreclose this immense future potential.


One lesson I’d draw from these existential threats is this. We fret unduly about small risks – air crashes, carcinogens in food, low radiation doses, etc. But we’re in denial about some newly emergent threats, which may seem improbable but whose consequences could be globally devastating. Some of these are environmental, others are the potential downsides of novel technologies.

So how can scientists concerned about these issues – or indeed about the social impact of any scientific advances – gain traction with policy-makers?

Some scientists, of course, have a formal advisory role to government. Back in World War II, Winston Churchill valued scientists' advice, but famously kept them "on tap, not on top". It is indeed the elected politicians who should make decisions. But scientific advisers should be prepared to challenge decision-makers, and help them navigate the uncertainties.

President Obama recognised this. He opined that scientists' advice should be heeded "even when it is inconvenient – indeed, especially when it is inconvenient". He appointed John Holdren, from Harvard, as his science adviser, and a ‘dream team’ of others was given top posts, including the Nobel physicist Steve Chu. They had a predictably frustrating time, but John Holdren 'hung in there' for Obama’s full eight years. And of course we’re anxious about what will happen under the new regime!

Their British counterparts, from Solly Zuckerman to Mark Walport, have it slightly easier. The interface with government is smoother, the respect for evidence is stronger, and the rapport between scientists and legislators is certainly better.

For instance, dialogue with parliamentarians led, despite divergent ethical stances, to a generally-admired legal framework on embryos and stem cells – a contrast to what happened in the US. And the HFEA offers another fine precedent.

But we've had failures too: the GM crop debate was left too late – to a time when opinion was already polarised between eco-campaigners on the one side and commercial interests on the other.

There are habitual grumbles that it’s hard for advisers to gain sufficient traction. This isn’t surprising. For politicians, the focus is on the urgent and parochial – and getting re-elected. The issues that attract their attention are those that get headlined in the media, and fill their in-box.

So scientists might have more leverage on politicians indirectly – by campaigning, so that the public and the media amplify their voice, for example – rather than via more official and direct channels. They can engage by involvement with NGOs, via blogging and journalism, or through political activity. There’s scope for campaigners on all the issues I’ve mentioned, and indeed many others. For instance, the ‘genetic code’ pioneer John Sulston campaigns for affordable drugs for Africa.

And I think religious leaders have a role. I’m on the council of the Pontifical Academy of Sciences (which is itself an ecumenical body: its members represent all faiths or none). Max Perutz, for instance, was in a group of four who acted as emissaries of the Pope to promote arms control. And recently, my economist colleague Partha Dasgupta, along with Ram Ramanathan, a climate scientist – two lapsed Hindus! – achieved great leverage by laying the groundwork for the Papal encyclical on climate and environment.

There’s no gainsaying the Catholic Church’s global reach – nor its long-term perspective, nor its concern for the world’s poor. The Encyclical emphasised our responsibility to the developing world, and to future generations. In the lead-up to the Paris conference it had a substantial and timely influence on voters and leaders in Latin America, Africa and East Asia (even perhaps in the US Republican Party).

Science is a universal culture, spanning all nations and faiths. So scientists confront fewer impediments to straddling political divides. The Pugwash Conferences did this in the Cold War – and the governing board of Sesame, a physics project in Jordan, gets Israelis and Iranians around the same table today.

Of course, most of these challenges are global. Coping with potential shortages of food, water and resources – and the transition to low-carbon energy – can’t be effected by each nation separately. Nor can threat reduction. For instance, whether or not a pandemic gets global grip may hinge on how quickly a Vietnamese poultry farmer can report any strange sickness. Indeed, a key issue is whether nations need to give up more sovereignty to new organisations along the lines of the IAEA, WHO and so forth – and whether national academies, The World Academy of Sciences, and similar bodies should get more involved.

Universities are among the most international of our institutions, and they have a special role. Academics are privileged to have influence over successive generations of students. Indeed, younger people, who expect to survive most of the century, are more anxious about long-term issues, and more prepared to support ‘effective altruism’ and other causes.

We should use universities' convening power to gather experts together to address the world's problems. That’s why some of us in Cambridge (with an international advisory group) have set up the Centre for the Study of Existential Risk, with a focus on the more extreme ‘low probability/high consequence’ threats that might confront us. They surely deserve expert analysis in order to assess which can be dismissed firmly as science fiction, and which should be on the ‘risk register’; to consider how to enhance resilience against the more credible ones; and to warn against technological developments that could run out of control. Even if we reduced these risks by only a tiny percentage, the stakes are so high that we’ll have earned our keep. A wise mantra is that ‘the unfamiliar is not the same as the improbable’.

I think scientists should all be prepared to divert some of their efforts towards public policy, and engage with individuals from government, business, and NGOs. There is in the US, incidentally, one distinctive format for such engagement that has no real parallel here. This is the JASON group. It was founded in the 1960s with support from the Pentagon. It involves top-rank academic scientists – in the early days they were mainly physicists, but the group now embraces other fields. They’re bankrolled by the Defense Department, but it’s a matter of principle that they choose their own new members. Some – Dick Garwin and Freeman Dyson, for instance – have been members since the 1960s. The JASONs spend about six weeks together in the summer, with other meetings during the year. It’s a serious commitment. The sociology and ‘chemistry’ of such a group hasn’t been fully replicated anywhere else.

Perhaps we should try to do so in the UK, not for the military but in civilian areas – the remit of DEFRA, for instance, or the Department for Transport. The challenge is to assemble a group of really top-rank scientists who enjoy cross-disciplinary discourse and tossing ideas around. It won’t ‘take off’ unless they dedicate substantial time to it – and unless the group addresses the kind of problems that play to their strengths.

So to sum up, I think we can truly be techno-optimists. The innovations that will drive economic advance – information technology, biotech and nanotech – can boost the developing as well as the developed world, but there’s a depressing gap between what we could do and what actually happens. Will richer countries recognise that it's in their own interest for the developing world fully to share the benefits of globalisation? Can nations sustain effective but non-repressive governance in the face of threats from small groups with high-tech expertise? And – above all – can our institutions prioritise projects that are long-term by political standards, even if a mere instant in the history of our planet?

We’re all on this crowded world together. Our responsibility – to our children, to the poorest, and to our stewardship of life’s diversity – surely demands that we don’t leave a depleted and hazardous world. I give the last word to the eloquent biologist Peter Medawar:

“The bells that toll for mankind are [...] like the bells of Alpine cattle. They are attached to our own necks, and it must be our fault if they do not make a tuneful and melodious sound.”

 

For more information on Lord Rees' IPR lecture, please see our writeup here.

 

The World in 2050 and Beyond: Part 2 - Technological Errors and Terrors

📥  research, technology, terrorism

Lord Rees of Ludlow is Astronomer Royal at the University of Cambridge's Institute of Astronomy, and founder of the Centre for the Study of Existential Risk. This blog post, the second in a three-part series, is based on a lecture he gave at the IPR on 9 February. Read the first part here.

I think we should be evangelists for new technologies – without them the world can’t provide food, and sustainable energy, for an expanding and more demanding population. But we need wisely-directed technology. Indeed, many are anxious that it’s advancing so fast that we may not properly cope with it – and that we’ll have a bumpy ride through this century.


Let me expand on these concerns.

Our world increasingly depends on elaborate networks: electric-power grids, air traffic control, international finance, globally-dispersed manufacturing, and so forth. Unless these networks are highly resilient, their benefits could be outweighed by catastrophic (albeit rare) breakdowns – real-world analogues of what happened in 2008 to the financial system. Our cities would be paralysed without electricity. Supermarket shelves would be empty within days if supply chains were disrupted. Air travel could spread a pandemic worldwide within a week, causing the gravest havoc in the shambolic megacities of the developing world. And social media can spread panic and rumour, and economic contagion, literally at the speed of light.

To guard against the downsides of such an interconnected world plainly requires international collaboration. For instance, whether or not a pandemic gets global grip may hinge on how quickly a Vietnamese poultry farmer can report any strange sickness.

Advances in microbiology – diagnostics, vaccines and antibiotics – offer prospects of containing pandemics. But the same research has controversial aspects. For instance, in 2012, groups in Wisconsin and in Holland showed that it was surprisingly easy to make the influenza virus both more virulent and transmissible – to some, this was a scary portent of things to come. In 2014 the US federal government decided to cease funding these so-called ‘gain of function’ experiments.

The new CRISPR-cas technique for gene-editing is hugely promising, but there are ethical concerns raised by Chinese experiments on human embryos and by possible unintended consequences of ‘gene drive’ programmes.

Back in the early days of recombinant DNA research, a group of biologists met in Asilomar, California, and agreed guidelines on what experiments should and shouldn’t be done. This seemingly encouraging precedent has triggered several meetings to discuss recent developments in the same spirit. But today, 40 years after Asilomar, the research community is far more broadly international, and more influenced by commercial pressures. I’d worry that whatever regulations are imposed, on prudential or ethical grounds, can’t be enforced worldwide – any more than the drug laws can, or the tax laws. Whatever can be done will be done by someone, somewhere.

And that’s a nightmare. Whereas an atomic bomb can’t be built without large scale special-purpose facilities, biotech involves small-scale dual-use equipment. Indeed, biohacking is burgeoning even as a hobby and competitive game.

We know all too well that technical expertise doesn’t guarantee balanced rationality. The global village will have its village idiots and they’ll have global range. The rising empowerment of tech-savvy groups (or even individuals), by bio as well as cyber technology will pose an intractable challenge to governments and aggravate the tension between freedom, privacy and security.

Concerns about bioerror and bioterror are relatively near-term – within 10 or 15 years. What about 2050 and beyond?

The smartphone, the web and their ancillaries are already crucial to our networked lives. But they would have seemed magic even 20 years ago. So, looking several decades ahead, we must keep our minds open – or at least ajar – to transformative advances that may now seem science fiction.

On the bio front, the great physicist Freeman Dyson conjectures a time when children will be able to design and create new organisms just as routinely as his generation played with chemistry sets. If it becomes possible to ‘play God on a kitchen table’ (as it were), our ecology (and even our species) may not long survive unscathed.

And what about another transformative technology: robotics and artificial intelligence (AI)?

There have been exciting advances in what’s called generalised machine learning: DeepMind (a small London company now bought up by Google) has just achieved a remarkable feat – its computer has beaten the world champion in a game of Go. Meanwhile, Carnegie Mellon University has developed a machine that can bluff and calculate as well as the best human players of poker.

Of course it’s 20 years since IBM's 'Deep Blue' beat Kasparov, the world chess champion. But Deep Blue was programmed in detail by expert players. In contrast, the machines that play Go and poker gained expertise by absorbing huge numbers of games and playing against themselves. Their designers don’t themselves know how the machines make seemingly insightful decisions.

The speed of computers allows them to succeed by ‘brute force’ methods. They learn to identify dogs, cats and human faces by ‘crunching’ through millions of images – not the way babies learn. They learn to translate by reading millions of pages of (for example) multilingual European Union documents (they never get bored!).

But advances are patchy. Robots are still clumsier than a child in moving pieces on a real chessboard. They can’t tie your shoelaces or cut old people’s toenails. But sensor technology, speech recognition, information searches and so forth are advancing apace.

They won’t just take over manual work (indeed plumbing and gardening will be among the hardest jobs to automate), but routine legal work (conveyancing and suchlike), medical diagnostics and even surgery.

Can robots cope with emergencies? For instance, if an obstruction suddenly appears on a crowded highway, can Google’s driverless car discriminate whether it’s a paper bag, a dog or a child? The likely answer is that its judgement will never be perfect, but will be better than the average driver – machine errors will occur, but not as often as human error. But when accidents do occur, they will create a legal minefield. Who should be held responsible – the ‘driver’, the owner, or the designer?

The big social and economic question is this: will this ‘second machine age’ be like earlier disruptive technologies – the car, for instance – and create as many jobs as it destroys? Or is it really different this time?

The money ‘earned’ by robots could generate huge wealth for an elite. But to preserve a healthy society will require massive redistribution to ensure that everyone has at least a ‘living wage’. A further challenge will be to create and upgrade public service jobs where the human element is crucial – carers for young and old, custodians, gardeners in public parks and so on – jobs which are now undervalued, but in huge demand.

But let’s look further ahead.

If robots could observe and interpret their environment as adeptly as we do, they would truly be perceived as intelligent beings, to which (or to whom) we can relate. Such machines pervade popular culture – in movies like Her, Transcendence and Ex Machina.

Do we have obligations towards them? We worry if our fellow-humans, and even animals, can’t fulfil their natural potential. Should we feel guilty if our robots are under-employed or bored?

What if a machine developed a mind of its own? Would it stay docile, or ‘go rogue’? If it could infiltrate the internet – and the internet of things – it could manipulate the rest of the world. It may have goals utterly orthogonal to human wishes, or even treat humans as an encumbrance.

Some AI pundits take this seriously, and think the field already needs guidelines – just as biotech does. But others regard these concerns as premature, and worry less about artificial intelligence than about real stupidity.

Be that as it may, it’s likely that society will be transformed by autonomous robots, even though the jury’s out on whether they’ll be ‘idiot savants’ or display superhuman capabilities.

There’s disagreement about the route towards human-level intelligence. Some think we should emulate nature, and reverse-engineer the human brain. Others say that’s as misguided as designing a flying machine by copying how birds flap their wings. And philosophers debate whether “consciousness” is special to the wet, organic brains of humans, apes and dogs – so that robots, even if their intellects seem superhuman, will still lack self-awareness or inner life.

Ray Kurzweil, now working at Google, argues that once machines have surpassed human capabilities, they could themselves design and assemble a new generation of even more powerful ones – an intelligence explosion. He thinks that humans could transcend biology by merging with computers. In old-style spiritualist parlance, they would 'go over to the other side'.

Kurzweil is a prominent proponent of this so-called ‘singularity’. But he’s worried that it may not happen in his lifetime. So he wants his body frozen until this nirvana is reached. I was once interviewed by a group of 'cryonic' enthusiasts – based in California – called the 'society for the abolition of involuntary death'. They will freeze your body, so that when immortality’s on offer you can be resurrected or your brain downloaded.

I told them I'd rather end my days in an English churchyard than a Californian refrigerator. They derided me as a 'deathist' – really old fashioned.

I was surprised to find that three academics in this country had gone in for cryonics. Two had paid the full whack; the third had taken the cut-price option of wanting just his head frozen. I was glad they were from Oxford, not from Cambridge – or Bath.

But of course, research on ageing is being seriously prioritised. Will the benefits be incremental? Or is ageing a ‘disease’ that can be cured? Dramatic life-extension would plainly be a real wild card in population projections, with huge social ramifications. But it may happen, along with human enhancement in other forms.

And now a digression into my special interest – space. This is where robots surely have a future.

During this century the whole solar system will be explored by flotillas of miniaturised probes – far more advanced than ESA’s Rosetta, or the NASA probe that transmitted amazing pictures from Pluto, which is 10,000 times further away than the Moon. These two instruments were designed and built 15 years ago. Think how much better we could do today. And later this century giant robotic fabricators may build vast lightweight structures floating in space (gossamer-thin radio reflectors or solar energy collectors, for instance) using raw materials mined from the Moon or asteroids.

Robotic advances will erode the practical case for human spaceflight. Nonetheless, I hope people will follow the robots into deep space, though it will be as risk-seeking adventurers rather than for practical goals. The most promising developments are spearheaded by private companies. SpaceX, led by Elon Musk, who also makes Tesla electric cars, has launched unmanned payloads and docked with the Space Station – and has recently achieved a soft recovery of the rocket’s first stage, rendering it reusable. Musk hopes soon to offer orbital flights to paying customers.

Wealthy adventurers are already signing up for a week-long trip round the far side of the Moon – voyaging further from Earth than anyone has been before (but avoiding the greater challenge of a Moon landing and blast-off). I’m told they’ve sold a ticket for the second flight – but not for the first.

We should surely acclaim these private enterprise efforts in space; they can tolerate higher risks than a western government could impose on publicly-funded bodies, and thereby cut costs compared to NASA or ESA. But they should be promoted as adventures or extreme sports – the phrase ‘space tourism’ should be avoided. It lulls people into unrealistic confidence.

By 2100 courageous pioneers in the mould of (say) the British adventurer Sir Ranulph Fiennes – or Felix Baumgartner, who broke the sound barrier in freefall from a high-altitude balloon – may have established ‘bases’ independent from the Earth, on Mars, or maybe on asteroids. Musk himself (aged 45) says he wants to die on Mars – but not on impact.

But don’t ever expect mass emigration from Earth. Nowhere in our solar system offers an environment even as clement as the Antarctic or the top of Everest. It’s a dangerous delusion to think that space offers an escape from Earth's problems. There’s no ‘Planet B’.

Indeed, space is an inherently hostile environment for humans. For that reason, even though we may wish to regulate genetic and cyborg technology on Earth, we should surely wish the space pioneers good luck in using all such techniques to adapt to alien conditions. This might be the first step towards divergence into a new species: the beginning of the post-human era. And it would also ensure that advanced life would survive, even if the worst conceivable catastrophe befell our planet.

As an astronomer I’m sometimes asked: ‘does contemplation of huge expanses of space and time affect your everyday life?’ Well, having spent much of my life among astronomers, I have to tell you that they’re not especially serene, and fret as much as anyone about what happens next week or tomorrow. But they do bring one special perspective – an awareness of the far future. Let me explain.

The stupendous timespans of the evolutionary past are now part of common culture (outside ‘fundamentalist’ circles, at any rate). But most people still tend to regard humans as the culmination of the evolutionary tree. That hardly seems credible to an astronomer. Our Sun formed 4.5 billion years ago, but it's got 6 billion more before the fuel runs out, and the expanding universe will continue – perhaps forever. To quote Woody Allen, eternity is very long, especially towards the end. So we may not even be at the half-way stage of evolution.

It may take just decades to develop human-level AI – or it may take centuries. Be that as it may, it’s but an instant compared to the cosmic future stretching ahead.

There must be chemical and metabolic limits to the size and processing power of ‘wet’ organic brains. Maybe we’re close to these already. But fewer limits constrain electronic computers (still less, perhaps, quantum computers); for these, the potential for further development could be as dramatic as the evolution from pre-Cambrian organisms to humans. So, by any definition of ‘thinking’, the amount and intensity that’s done by organic human-type brains will be utterly swamped by the future cogitations of AI.

Moreover, the Earth’s environment may suit us ‘organics’ – but interplanetary and interstellar space may be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological ‘brains’ may develop greater powers than humans can even imagine.

I’ve no time to speculate further beyond the flaky fringe – perhaps a good thing! So let me conclude by focusing back more closely on the here and now.

For more information on Lord Rees' IPR lecture, please see our writeup here.

 

Who will shape the future of the data society?

📥  big data, data science, future, open data, technology

Dr Jonathan Gray is Prize Fellow at the IPR

The contemporary world is held together by a vast and overlapping fabric of information systems. These information systems do not only tell us things about the world around us. They also play a central role in organising many different aspects of our lives. They are not only instruments of knowledge, but also engines of change. But what kind of change will they bring?


Contemporary data infrastructures are the result of hundreds of years of work and thought. In charting the development of these infrastructures we can learn about the rise and fall not only of the different methods, technologies and standards implicated in the making of data, but also about the articulation of different kinds of social, political, economic and cultural worlds: different kinds of “data worlds”.

Beyond the rows and columns of data tables, the development of data infrastructures tells tales of the emergence of the world economy and global institutions; different ways of classifying populations; different ways of managing finances and evaluating performance; different programmes to reform and restructure public institutions; and how all kinds of issues and concerns are rendered into quantitative portraits in relation to which progress can be charted – from gender equality to child mortality, biodiversity to broadband access, unemployment to urban ecology.

The transnational network that assembled last week in Madrid for the International Open Data Conference has the opportunity to play a significant role in shaping the future of these data worlds. Many of those who were present have made huge contributions towards an agenda of opening up datasets and developing capacities to use them. Thanks to these efforts there is now global momentum around open data amongst international organisations, national governments, local administrations and civil society groups – which will have an enduring impact on how data is made public.

Perhaps, around a decade after the first stirrings of interest in what we now know as “open data”, it is time to have a broader conversation around not only the opening up and use of datasets, but also the making of data infrastructures: of what issues are rendered into data and how, and the kinds of dynamics of collective life that these infrastructures give rise to. How might we increase public deliberation around the calibration and direction of these engines of change?

Anyone involved with the creation of official data will be well aware that this is not a trivial proposition. Not least because of the huge effort and expense that can be incurred in everything from developing standards and commissioning IT systems to organising consultation processes and running the social, technical and administrative systems required to create and maintain even the smallest and simplest of datasets. Reshaping data worlds can be slow and painstaking work. But unless we put in place processes to ensure alignment between data infrastructures and the concerns of their various publics, we risk sustaining systems which are at best disconnected from, and at worst damaging towards, those whom they are intended to benefit.

What might such social shaping of data infrastructures look like? Luckily there is no shortage of recent examples – from civil society groups campaigning for changes in existing information systems (such as advocacy around the UK’s company register), to cases of citizen and civil society data leading to changes in official data collection practices, to the emergence of new tools and methods to work with, challenge and articulate alternatives to official data. Official data can also be augmented by “born digital” data derived from a variety of different platforms, sources and devices which can be creatively repurposed in the service of studying and securing progress around different issues.

While there is a great deal of experimentation with data infrastructures “in the wild”, how might institutions learn from these initiatives in order to make public data infrastructures more responsive to their publics? How can we open up new spaces for participation and deliberation around official information systems at the same time as building on the processes and standards which have developed over decades to ensure the quality, integrity and comparability of official data? How might participatory design methods be applied to involve different publics in the making of public data? How might official data be layered with other “born digital” data sources to develop a richer picture around issues that matter? How do we develop the social, technical and methodological capacities required to enable more people to take part not just in using datasets, but also reshaping data worlds?

Addressing these questions will be crucial to the development of a new phase of the open data movement – from the opening up of datasets to the opening up of data infrastructures. Public institutions may find they have not only new users, but new potential contributors and collaborators as the sites where public data is made begin to multiply and extend outside of the public sector – raising new issues and challenges related to the design, governance and political economics of public information systems.

The development of new institutional processes, policies and practices to increase democratic engagement around data infrastructures may be more time consuming than some of the comparatively simpler steps that institutions can take to open up their datasets. But further work in this area is vital to secure progress on a wide range of issues – from tackling tax base erosion to tracking progress towards commitments made at the recent Paris climate negotiations.

As a modest contribution to advancing research and practice around these issues, a new initiative called the Public Data Lab is forming to convene researchers, institutions and civil society groups with an interest in the making of data infrastructures, as well as the development of capacities that are required for more people to not only take part in the data society, but also to more meaningfully participate in shaping its future.

This piece originally appeared on the IODC16 website.

Emily Rempel: 'The Problem of Public Engagement, Public Policy and Public Data'

📥  big data, data science, open data, technology

Why does public engagement with new technology matter? More to the point, why does public engagement with data matter? The collection and use of data isn’t new: in England, the state has been collecting data since at least the Domesday Book. People have been counting up things and making decisions based on those things for a long time. So why have perceived public concerns with things like data privacy and consent now, in 2016, become an issue? The answer lies in what that data can do.

Data analysis is no longer just about summing and recording. People (and machines) can create complex algorithms that seek to predict everything from flu trends to the likelihood of an individual committing a crime. These new kinds of data projects are broadly defined as ‘data science’, i.e. the use and combination of data in new ways. This science can leverage pre-existing ‘big data’ sets containing trillions of rows of data, such as national hospital and store purchase records, to propose, inform and evaluate policy.

To engage different publics on ‘data science’, the Government Digital Service commissioned the Public Dialogue on Data Science Ethics. For the past few months, I have had the opportunity to observe these public consultations. The dialogues involved asking members of the UK public for their views on data science, asking specifically what lines government shouldn’t cross in its use and combination of data.

Engagement exercises like this one sit on a spectrum defined by the intensity of active public involvement they entail (see the figure below). At one end lies simple knowledge translation from government to citizen; at the other, full public participation and deliberation [3]. This typology also traces how engagement has changed in practice: over the past few decades there has been a distinct shift from telling publics about risk to asking them. In practice, however, engagement activities often blur this line. Arguably, information provision must take place in consultation exercises, as it is hard to imagine people caring about something they know nothing about.

A more nuanced trend is the change in the purpose of engagement. Essentially, why should we do engagement at all? Knowledge translation is strongly tied to securing some acceptable degree of trust, for example to improve public acceptance of a technology. More recently the purpose has shifted to a combination of building trust and building robustness, where robustness involves publics shaping and improving new technologies and regulatory policies. This ‘purpose spectrum’ can tell us whether engagement will result in noticeable change.

[Figure: the spectrum of public engagement, from knowledge translation to full participation and deliberation]

The Public Dialogue on Data Science Ethics included both trust and robustness objectives. The dialogue’s aim was to reform the government’s Data Ethics Guidelines [4], which positions it as a deliberative exercise: one where public voices are given more influence over government activity. Each session included exercises based on identifying types of data and examples of how data science could be used in policy. The dialogue organisers used these exercises to translate complex and diffuse concepts, like machine learning, into accessible discussion prompts. They had to push participants to think outside the box about types of data, for example mobile phone GPS records and commercial ‘smart meter’ readings. This led to in-depth debates about the usefulness of data science in policy, which will inform changes to the ethical guidelines.

The dialogue sits comfortably on the right side of both the ‘activity’ and ‘purpose’ spectrums. People are not given full deliberative power over data science, but tangible change and impact from the dialogue are anticipated. However, this hides a major constraint in public engagement exercises: the way technology risks are described shapes, or frames, the discussion of them. Engagement activities necessarily involve information provision (telling) and consultation (asking). But ‘telling’ creates artificial discussion boundaries. If you present a major risk of data use as bias from inaccurate data, people’s discussion will inevitably revolve around that issue. The kinds of risks discussed included re-identification of individuals who were thought to be anonymous, or lack of privacy in data handling procedures. At a more basic level, most publics would not immediately understand what ‘data science’ means; their frame of reference for discussion emerges from the definition given to them. It is challenging to probe for in-depth risk assessments when people have limited examples to draw from. By defining specific topics and examples of data science, engagement exercises are unavoidably restricted to certain views and presentations of data. The concerns and benefits people highlight are a reflection of the information they have been given. The dialogue may aim to be on the right side of the purpose spectrum, but achieving that aim is difficult.

Organisers and publics alike are constrained by the limitations of public engagement. Engagement exercises tend to reflect a propensity towards restricted public influence, which means the kinds of questions organisers can ask are also restricted. They cannot probe policies and directives over which publics have no influence to begin with, for example whether government should or should not work towards a model of ‘data-driven policy’. Greater public influence would require significant changes in technology risk governance and decision-making. It would require unravelling the constraint of balancing information provision with public engagement. How this could be done, either practically or theoretically, is up for debate.

This brings us back to the idea of purpose. If there is such a limited role for public voices, why bother? Arguably, for the time being, what matters is the aim. The Public Dialogue on Data Science Ethics aimed to include, and did include, public opinions in the future of data science in policy. This lends credence to the idea that new kinds of data are a public matter and that people deserve a voice, even if, for now, it is limited to a specific set of discussions. As we go forward with engaging publics in data science, we can move beyond managing concern. Conversations can be broader and more inclusive.

Emily Rempel is an interdisciplinary PhD student in the Department of Psychology and the Institute for Policy Research at the University of Bath. She is working with the Cabinet Office’s Government Digital Service on the Public Dialogue on Data Science Ethics.

[1] http://www.domesdaybook.co.uk/

[2] http://blogs.bath.ac.uk/iprblog/2016/02/12/emily-rempel-on-the-machines-are-all-around-us-an-introduction-to-the-uk-governments-public-dialogue-on-data-ethics/

[3] Rowe, G., & Frewer, L. J. (2005). A Typology of Public Engagement Mechanisms. Science, Technology, & Human Values, 30(2), 251–290. http://doi.org/10.1177/0162243904271724

[4] https://data.blog.gov.uk/2015/12/08/data-science-ethics/