Big Questions: Is AI changing the way we think?

Posted in: BA2, Research

When will machines be smarter than us? It’s the most common question students ask me. My answer is “They already are”. Smart machines can already do things that humans could never do; a spreadsheet can calculate a table of numbers in a fraction of a second, for example. There’s a lot of hype around artificial intelligence. The tech industry in particular is always looking for something new, even when it’s not new, so in a sense AI is just a new label for technology.
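To make the spreadsheet point concrete, here is a minimal sketch (plain Python, my own illustration rather than anything from the article, with the table size chosen arbitrarily) that times a calculation over a million rows – arithmetic no human could do by hand in any reasonable time:

```python
import random
import time

# Build a "table" of a million rows with two columns of random numbers.
table = [(random.random(), random.random()) for _ in range(1_000_000)]

start = time.perf_counter()
# The kind of formula a spreadsheet evaluates: multiply the columns, total the result.
total = sum(a * b for a, b in table)
elapsed = time.perf_counter() - start

print(f"Summed {len(table):,} row products in {elapsed:.3f} seconds")
```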

That said, at the beginning of this century, computer scientists made a big breakthrough in being able to build much deeper neural networks, with many more layers between input and output. Because we’re building faster and faster computers, people argue that we’re effectively building intelligence, and that soon there’ll be no limit to what this intelligence can do. It plays to Western fears about creating something that then destroys its creator – a theme running from ancient Greek mythology, where Pygmalion fell in love with one of his sculptures and it came to life, through Frankenstein to the Terminator.
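To give a sense of what “deeper, with more layers” means in practice, here is a minimal sketch (using PyTorch, my choice of library; the article names no tools, and the layer sizes are arbitrary) contrasting an early-style shallow network with a deeper stack of intermediate layers:

```python
import torch.nn as nn

# A shallow network of the kind common before the deep-learning breakthrough:
# a single hidden layer between input and output.
shallow = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),
    nn.Linear(64, 10),
)

# A deeper network simply stacks more hidden layers between input and output.
hidden = []
for _ in range(8):
    hidden += [nn.Linear(64, 64), nn.ReLU()]

deep = nn.Sequential(
    nn.Linear(784, 64), nn.ReLU(),
    *hidden,
    nn.Linear(64, 10),
)

print(shallow)
print(deep)
```

Writing the extra layers down is trivial; the breakthrough was that faster hardware made networks like this practical to train at scale.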

This doesn’t mean we shouldn’t be worried about AI, but we need to reframe the question: instead of focussing on what this technology can do, we should be focussing on what we should do with it. That’s why I ask my students to think about the ethical implications of whatever they go on to build. For example, the government is moving its services online, but that doesn’t mean everyone can access those services. We need to ask: is this move going to discriminate against people, or lock some of them out of benefits they might otherwise have had?

For my PhD I built a robot and conducted experiments to observe how people interacted with it. What I learned was that if the machine nature of a robot is overt, humans are able to see it as a mechanism and so better understand its purpose. For example, what’s the economic transaction that’s going on? Who’s benefiting from this robot? How much of the benefit being derived is ours? Making technology transparent makes it less likely that people will be deceived by it.

I think there are dangers in continually trying to optimise life because we learn when we search and explore for ourselves. For instance, when tech companies replace search functions with recommendation engines, we start to defer to the technology, which could then weaken our own decision-making capabilities.

AI also has huge implications for human-to-human contact. Take sex robots, for example: as we build smarter machines there will be a tendency to start treating them like people, but conversely this could mean we start to treat people like machines. Or take self-scanning supermarket checkouts: they might be more efficient, but for people who live alone, talking to another human at the checkout might be the only contact they get in a day. So it’s important to explore the implications of optimising human contact to the point where relationships are replaced by transactions.

Guiding principles around AI already exist, but we really need enforceable standards of professional behaviour, like the norms we have in other areas, such as genetic engineering. And that’s a new attitude in this field, because technology is all about being disruptive. To put it simply: just because you can do something, it doesn’t mean you should.

Dr Rob Wortham, MEng Electrical & Electronic Engineering 1986, PhD Computer Science 2018


