The World in 2050 and Beyond: Part 2 - Technological Errors and Terrors

Posted in: Evidence and policymaking, Science and research policy, Security and defence

Lord Rees of Ludlow is Astronomer Royal at the University of Cambridge's Institute of Astronomy, and founder of the Centre for the Study of Existential Risk. This blog post, the second in a three-part series, is based on a lecture he gave at the IPR on 9 February. Read the first part here.

I think we should be evangelists for new technologies – without them the world can’t provide food, and sustainable energy, for an expanding and more demanding population. But we need wisely-directed technology. Indeed, many are anxious that it’s advancing so fast that we may not properly cope with it – and that we’ll have a bumpy ride through this century.

Let me expand on these concerns.

Our world increasingly depends on elaborate networks: electric-power grids, air traffic control, international finance, globally-dispersed manufacturing, and so forth. Unless these networks are highly resilient, their benefits could be outweighed by catastrophic (albeit rare) breakdowns – real-world analogues of what happened in 2008 to the financial system. Our cities would be paralysed without electricity. Supermarket shelves would be empty within days if supply chains were disrupted. Air travel could spread a pandemic worldwide within a week, causing the gravest havoc in the shambolic megacities of the developing world. And social media can spread panic and rumour, and economic contagion, literally at the speed of light.

To guard against the downsides of such an interconnected world plainly requires international collaboration. For instance, whether or not a pandemic gets a global grip may hinge on how quickly a Vietnamese poultry farmer can report any strange sickness.

Advances in microbiology – diagnostics, vaccines and antibiotics – offer prospects of containing pandemics. But the same research has controversial aspects. For instance, in 2012, groups in Wisconsin and in Holland showed that it was surprisingly easy to make the influenza virus both more virulent and more transmissible – to some, this was a scary portent of things to come. In 2014 the US federal government decided to cease funding these so-called ‘gain of function’ experiments.

The new CRISPR-cas technique for gene-editing is hugely promising, but there are ethical concerns raised by Chinese experiments on human embryos and by possible unintended consequences of ‘gene drive’ programmes.

Back in the early days of recombinant DNA research, a group of biologists met in Asilomar, California, and agreed guidelines on what experiments should and shouldn’t be done. This seemingly encouraging precedent has triggered several meetings to discuss recent developments in the same spirit. But today, 40 years after Asilomar, the research community is far more broadly international, and more influenced by commercial pressures. I’d worry that whatever regulations are imposed, on prudential or ethical grounds, can’t be enforced worldwide – any more than the drug laws can, or the tax laws. Whatever can be done will be done by someone, somewhere.

And that’s a nightmare. Whereas an atomic bomb can’t be built without large scale special-purpose facilities, biotech involves small-scale dual-use equipment. Indeed, biohacking is burgeoning even as a hobby and competitive game.

We know all too well that technical expertise doesn’t guarantee balanced rationality. The global village will have its village idiots, and they’ll have global range. The rising empowerment of tech-savvy groups (or even individuals), by bio as well as cyber technology, will pose an intractable challenge to governments and aggravate the tension between freedom, privacy and security.

Concerns about bioerror and bioterror are relatively near-term – within 10 or 15 years. What about 2050 and beyond?

The smartphone, the web and their ancillaries are already crucial to our networked lives. But they would have seemed magic even 20 years ago. So, looking several decades ahead, we must keep our minds open – or at least ajar – to transformative advances that may now seem science fiction.

On the bio front, the great physicist Freeman Dyson conjectures a time when children will be able to design and create new organisms just as routinely as his generation played with chemistry sets. If it becomes possible to ‘play God on a kitchen table’ (as it were), our ecology (and even our species) may not long survive unscathed.

And what about another transformative technology: robotics and artificial intelligence (AI)?

There have been exciting advances in what’s called generalised machine learning: DeepMind (a small London company now bought up by Google) has just achieved a remarkable feat – its computer has beaten the world champion in a game of Go. Meanwhile, Carnegie Mellon University has developed a machine that can bluff and calculate as well as the best human players of poker.

Of course it’s 20 years since IBM's 'Deep Blue' beat Kasparov, the world chess champion. But Deep Blue was programmed in detail by expert players. In contrast, the machines that play Go and poker gained expertise by absorbing huge numbers of games and playing against themselves. Their designers don’t themselves know how the machines make seemingly insightful decisions.

The speed of computers allows them to succeed by ‘brute force’ methods. They learn to identify dogs, cats and human faces by ‘crunching’ through millions of images – not the way babies learn. They learn to translate by reading millions of pages of (for example) multilingual European Union documents (they never get bored!).

But advances are patchy. Robots are still clumsier than a child in moving pieces on a real chessboard. They can’t tie your shoelaces or cut old people’s toenails. But sensor technology, speech recognition, information searches and so forth are advancing apace.

They won’t just take over manual work (indeed plumbing and gardening will be among the hardest jobs to automate), but routine legal work (conveyancing and suchlike), medical diagnostics and even surgery.

Can robots cope with emergencies? For instance, if an obstruction suddenly appears on a crowded highway, can Google’s driverless car discriminate whether it’s a paper bag, a dog or a child? The likely answer is that its judgement will never be perfect, but will be better than the average driver’s – machine errors will occur, but less often than human ones. Yet when accidents do occur, they will create a legal minefield. Who should be held responsible – the ‘driver’, the owner, or the designer?

The big social and economic question is this: will this ‘second machine age’ be like earlier disruptive technologies – the car, for instance – and create as many jobs as it destroys? Or is it really different this time?

The money ‘earned’ by robots could generate huge wealth for an elite. But to preserve a healthy society will require massive redistribution to ensure that everyone has at least a ‘living wage’. A further challenge will be to create and upgrade public service jobs where the human element is crucial – carers for young and old, custodians, gardeners in public parks and so on – jobs which are now undervalued, but in huge demand.

But let’s look further ahead.

If robots could observe and interpret their environment as adeptly as we do, they would truly be perceived as intelligent beings, to which (or to whom) we can relate. Such machines pervade popular culture – in movies like Her, Transcendence and Ex Machina.

Do we have obligations towards them? We worry if our fellow-humans, and even animals, can’t fulfil their natural potential. Should we feel guilty if our robots are under-employed or bored?

What if a machine developed a mind of its own? Would it stay docile, or ‘go rogue’? If it could infiltrate the internet – and the internet of things – it could manipulate the rest of the world. It may have goals utterly orthogonal to human wishes, or even treat humans as an encumbrance.

Some AI pundits take this seriously, and think the field already needs guidelines – just as biotech does. But others regard these concerns as premature, and worry less about artificial intelligence than about real stupidity.

Be that as it may, it’s likely that society will be transformed by autonomous robots, even though the jury’s out on whether they’ll be ‘idiot savants’ or display superhuman capabilities.

There’s disagreement about the route towards human-level intelligence. Some think we should emulate nature, and reverse-engineer the human brain. Others say that’s as misguided as designing a flying machine by copying how birds flap their wings. And philosophers debate whether “consciousness” is special to the wet, organic brains of humans, apes and dogs – so that robots, even if their intellects seem superhuman, will still lack self-awareness or inner life.

Ray Kurzweil, now working at Google, argues that once machines have surpassed human capabilities, they could themselves design and assemble a new generation of even more powerful ones – an intelligence explosion. He thinks that humans could transcend biology by merging with computers. In old-style spiritualist parlance, they would 'go over to the other side'.

Kurzweil is a prominent proponent of this so-called ‘singularity’. But he’s worried that it may not happen in his lifetime. So he wants his body frozen until this nirvana is reached. I was once interviewed by a group of 'cryonic' enthusiasts – based in California – called the 'society for the abolition of involuntary death'. They will freeze your body, so that when immortality’s on offer you can be resurrected or your brain downloaded.

I told them I'd rather end my days in an English churchyard than a Californian refrigerator. They derided me as a 'deathist' – really old fashioned.

I was surprised to find that three academics in this country had gone in for cryonics. Two had paid the full whack; the third had taken the cut-price option of having just his head frozen. I was glad they were from Oxford, not from Cambridge – or Bath.

But of course, research on ageing is being seriously prioritised. Will the benefits be incremental? Or is ageing a ‘disease’ that can be cured? Dramatic life-extension would plainly be a real wild card in population projections, with huge social ramifications. But it may happen, along with human enhancement in other forms.

And now a digression into my special interest – space. This is where robots surely have a future.

During this century the whole solar system will be explored by flotillas of miniaturised probes – far more advanced than ESA’s Rosetta, or NASA’s New Horizons probe that transmitted amazing pictures from Pluto, which is 10,000 times further away than the moon. These two instruments were designed and built 15 years ago. Think how much better we could do today. And later this century giant robotic fabricators may build vast lightweight structures floating in space (gossamer-thin radio reflectors or solar energy collectors, for instance) using raw materials mined from the Moon or asteroids.

Robotic advances will erode the practical case for human spaceflight. Nonetheless, I hope people will follow the robots into deep space, though it will be as risk-seeking adventurers rather than for practical goals. The most promising developments are spearheaded by private companies. SpaceX, led by Elon Musk, who also makes Tesla electric cars, has launched unmanned payloads and docked with the Space Station – and has recently achieved a soft recovery of the rocket’s first stage, rendering it reusable. Musk hopes soon to offer orbital flights to paying customers.

Wealthy adventurers are already signing up for a week-long trip round the far side of the Moon – voyaging further from Earth than anyone has been before (but avoiding the greater challenge of a Moon landing and blast-off). I’m told they’ve sold a ticket for the second flight – but not for the first.

We should surely acclaim these private enterprise efforts in space; they can tolerate higher risks than a western government could impose on publicly-funded bodies, and thereby cut costs compared to NASA or ESA. But they should be promoted as adventures or extreme sports – the phrase ‘space tourism’ should be avoided. It lulls people into unrealistic confidence.

By 2100 courageous pioneers in the mould of (say) the British adventurer Sir Ranulph Fiennes – or Felix Baumgartner, who broke the sound barrier in freefall from a high-altitude balloon – may have established ‘bases’ independent from the Earth, on Mars, or maybe on asteroids. Musk himself (aged 45) says he wants to die on Mars – but not on impact.

But don’t ever expect mass emigration from Earth. Nowhere in our solar system offers an environment even as clement as the Antarctic or the top of Everest. It’s a dangerous delusion to think that space offers an escape from Earth's problems. There’s no ‘Planet B’.

Indeed, space is an inherently hostile environment for humans. For that reason, even though we may wish to regulate genetic and cyborg technology on Earth, we should surely wish the space pioneers good luck in using all such techniques to adapt to alien conditions. This might be the first step towards divergence into a new species: the beginning of the post-human era. And it would also ensure that advanced life would survive, even if the worst conceivable catastrophe befell our planet.

As an astronomer I’m sometimes asked: ‘does contemplation of huge expanses of space and time affect your everyday life?’ Well, having spent much of my life among astronomers, I have to tell you that they’re not especially serene, and fret as much as anyone about what happens next week or tomorrow. But they do bring one special perspective – an awareness of the far future. Let me explain.

The stupendous timespans of the evolutionary past are now part of common culture (outside ‘fundamentalist’ circles, at any rate). But most people still tend to regard humans as the culmination of the evolutionary tree. That hardly seems credible to an astronomer. Our Sun formed 4.5 billion years ago, but it's got 6 billion more before the fuel runs out, and the expanding universe will continue – perhaps forever. To quote Woody Allen, eternity is very long, especially towards the end. So we may not even be at the half-way stage of evolution.

It may take just decades to develop human-level AI – or it may take centuries. Be that as it may, it’s but an instant compared to the cosmic future stretching ahead.

There must be chemical and metabolic limits to the size and processing power of ‘wet’ organic brains. Maybe we’re close to these already. But fewer limits constrain electronic computers (still less, perhaps, quantum computers); for these, the potential for further development could be as dramatic as the evolution from pre-Cambrian organisms to humans. So, by any definition of ‘thinking’, the amount and intensity that’s done by organic human-type brains will be utterly swamped by the future cogitations of AI.

Moreover, the Earth’s environment may suit us ‘organics’ – but interplanetary and interstellar space may be the preferred arena where robotic fabricators will have the grandest scope for construction, and where non-biological ‘brains’ may develop greater powers than humans can even imagine.

I’ve no time to speculate further beyond the flaky fringe – perhaps a good thing! So let me conclude by focusing back more closely on the here and now.

For more information on Lord Rees' IPR lecture, please see our writeup here.