WILL ARTIFICIAL INTELLIGENCE OUTSMART US?
Intelligence is central to what it means to be human. Everything that civilisation has to offer is a product of human intelligence.
DNA passes the blueprints of life between generations. Ever more complex life forms take in information from sensors such as eyes and ears, process it in brains or other systems to figure out how to act, and then act on the world, for example by sending signals to muscles. At some point during our 13.8 billion years of cosmic history, something beautiful happened. This information processing got so intelligent that life forms became conscious. Our universe has now awoken, becoming aware of itself. I regard it as a triumph that we, who are ourselves mere stardust, have come to such a detailed understanding of the universe in which we live.
I think there is no significant difference between how the brain of an earthworm works and how a computer computes. I also believe that evolution implies there can be no qualitative difference between the brain of an earthworm and that of a human. It therefore follows that computers can, in principle, emulate human intelligence, or even better it. It’s clearly possible for something to acquire higher intelligence than its ancestors: we evolved to be smarter than our ape-like ancestors, and Einstein was smarter than his parents.
If computers continue to obey Moore’s Law, doubling their speed and memory capacity every eighteen months, they are likely to overtake humans in intelligence at some point in the next hundred years. When an artificial intelligence (AI) becomes better than humans at AI design, so that it can recursively improve itself without human help, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails. When that happens, we will need to ensure that the computers have goals aligned with ours. It’s tempting to dismiss the notion of highly intelligent machines as mere science fiction, but this would be a mistake, and potentially our worst mistake ever.
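To put the stated doubling rate in perspective, here is a minimal sketch of the arithmetic, assuming only the eighteen-month doubling period mentioned above; the function name and the chosen horizons are illustrative, not from the text.

```python
# Compounding of Moore's Law as stated above: capacity doubles every
# eighteen months (1.5 years). Purely illustrative arithmetic.

def capacity_multiplier(years: float, doubling_period: float = 1.5) -> float:
    """How many times speed/memory has multiplied after `years`."""
    return 2 ** (years / doubling_period)

for years in (10, 50, 100):
    print(f"after {years:>3} years: ~{capacity_multiplier(years):.3g}x")
# after  10 years: ~102x
# after  50 years: ~1.08e+10x
# after 100 years: ~1.17e+20x
```

Over a century, about sixty-seven doublings multiply capacity by roughly 10^20, which is the scale of growth behind the claim that machines could eventually overtake human intelligence.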
For the last twenty years or so, AI has been focused on the problems surrounding the construction of intelligent agents, systems that perceive and act in a particular environment. In this context, intelligence is related to statistical and economic notions of rationality—that is, colloquially, the ability to make good decisions, plans or inferences. As a result of this recent work, there has been a large degree of integration and cross-fertilisation among AI, machine-learning, statistics, control theory, neuroscience and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks, such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion and question-answering systems.
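As a minimal sketch of the agent abstraction described above (perceive the environment, choose the action that scores best under some measure of utility, then act), the toy below is illustrative only; the class names, the numeric environment and the utility function are assumptions made for the example, not a standard AI framework.

```python
# Toy perceive-decide-act loop. A "rational" agent here is one that picks
# the action with the highest utility, echoing the decision-theoretic
# notion of rationality mentioned above. Everything is illustrative.

class Environment:
    """Trivial world: a single integer state the agent can nudge."""
    def __init__(self, target: int = 7):
        self.target = target   # what "doing well" means in this toy world
        self.state = 0

    def percept(self) -> int:
        return self.state

    def apply(self, action: int) -> None:
        self.state += action

class Agent:
    """Picks whichever action brings the perceived state closest to target."""
    def __init__(self, actions=(-1, 0, 1)):
        self.actions = actions

    def utility(self, state: int, action: int, target: int) -> float:
        return -abs(state + action - target)   # closer is better

    def act(self, percept: int, target: int) -> int:
        return max(self.actions, key=lambda a: self.utility(percept, a, target))

env, agent = Environment(), Agent()
for _ in range(10):
    env.apply(agent.act(env.percept(), env.target))
print("final state:", env.state)   # reaches the target, 7, then holds
```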
As development in these areas and others moves from laboratory research to economically valuable technologies, a virtuous cycle evolves, whereby even small improvements in performance are worth large sums of money, prompting further and greater investments in research. There is now a broad consensus that AI research is progressing steadily and that its impact on society is likely to increase. The potential benefits are huge; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide. The eradication of disease and poverty is possible. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls. Success in creating AI would be the biggest event in human history.
Unfortunately, it might also be the last, unless we learn how to avoid the risks. Used as a toolkit, AI can augment our existing intelligence to open up advances in every area of science and society. However, it will also bring dangers. While primitive forms of artificial intelligence developed so far have proved very useful, I fear the consequences of creating something that can match or surpass humans. The concern is that AI would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded. And in the future AI could develop a will of its own, a will that is in conflict with ours. Others believe that humans can command the rate of technology for a decently long time, and that the potential of AI to solve many of the world’s problems will be realised. Although I am well known as an optimist regarding the human race, I am not so sure.
In the near term, for example, world militaries are considering starting an arms race in autonomous weapon systems that can choose and eliminate their own targets. While the UN is debating a treaty banning such weapons, autonomous-weapons proponents usually forget to ask the most important question. What is the likely end-point of an arms race and is that desirable for the human race? Do we really want cheap AI weapons to become the Kalashnikovs of tomorrow, sold to criminals and terrorists on the black market? Given concerns about our ability to maintain long-term control of ever more advanced AI systems, should we arm them and turn over our defence to them? In 2010, computerised trading systems created the stock-market Flash Crash; what would a computer-triggered crash look like in the defence arena? The best time to stop the autonomous-weapons arms race is now.
In the medium term, AI may automate our jobs, bringing both great prosperity and equality. Looking further ahead, there are no fundamental limits to what can be achieved. There is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains. An explosive transition is possible, although it may play out differently than in the movies. As mathematician Irving Good realised in 1965, machines with superhuman intelligence could repeatedly improve their design even further, in what science-fiction writer Vernor Vinge called a technological singularity. One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders and potentially subduing us with weapons we cannot even understand. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.
In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. We should plan ahead. If a superior alien civilisation sent us a text message saying, “We’ll arrive in a few decades,” would we just reply, “OK, call us when you get here, we’ll leave the lights on”? Probably not, but this is more or less what has happened with AI. Little serious research has been devoted to these issues outside a few small non-profit institutes.
Fortunately, this is now changing. Technology pioneers Bill Gates, Steve Wozniak and Elon Musk have echoed my concerns, and a healthy culture of risk assessment and awareness of societal implications is beginning to take root in the AI community. In January 2015, I, along with Elon Musk and many AI experts, signed an open letter on artificial intelligence, calling for serious research into its impact on society. In the past, Elon Musk has warned that superhuman artificial intelligence could provide incalculable benefits but, if deployed incautiously, could have an adverse effect on the human race. He and I sit on the scientific advisory board for the Future of Life Institute, an organisation working to mitigate existential risks facing humanity, which drafted the open letter. The letter called for concrete research on how we could prevent potential problems while also reaping the potential benefits AI offers us, and was designed to get AI researchers and developers to pay more attention to AI safety. In addition, for policymakers and the general public the letter was meant to be informative but not alarmist. We think it is very important that everybody knows that AI researchers are seriously thinking about these concerns and ethical issues. For example, AI has the potential to eradicate disease and poverty, but researchers must work to create AI that can be controlled.
In October 2016, I also opened a new centre in Cambridge, which will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research. The Leverhulme Centre for the Future of Intelligence is a multi-disciplinary institute, dedicated to researching the future of intelligence as crucial to the future of our civilisation and our species. We spend a great deal of time studying history, which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence. We are aware of the potential dangers, but perhaps with the tools of this new technological revolution we will even be able to undo some of the damage done to the natural world by industrialisation.
Recent developments in the advancement of AI include a call by the European Parliament for drafting a set of regulations to govern the creation of robots and AI. Somewhat surprisingly, this includes a form of electronic personhood, to ensure the rights and responsibilities for the most capable and advanced AI. A European Parliament spokesman has commented that, as a growing number of areas in our daily lives are increasingly affected by robots, we need to ensure that robots are, and will remain, in the service of humans. A report presented to the Parliament declares that the world is on the cusp of a new industrial robot revolution. It examines whether or not providing legal rights for robots as electronic persons, on a par with the legal definition of corporate personhood, would be permissible. But it stresses that at all times researchers and designers should ensure all robotic design incorporates a kill switch.
This didn’t help the scientists on board the spaceship with Hal, the malfunctioning robotic computer in Stanley Kubrick’s 2001: A Space Odyssey, but that was fiction. We deal with fact. Lorna Brazell, a consultant at the multinational law firm Osborne Clarke, says in the report that we don’t give whales and gorillas personhood, so there is no need to jump at robotic personhood. But the wariness is there. The report acknowledges the possibility that within a few decades AI could surpass human intellectual capacity and challenge the human–robot relationship.
By 2025, there will be about thirty mega-cities, each with more than ten million inhabitants. With all those people clamouring for goods and services to be delivered whenever they want them, can technology help us keep pace with our craving for instant commerce? Robots will definitely speed up the online retail process. But to revolutionise shopping they need to be fast enough to allow same-day delivery on every order.
Opportunities for interacting with the world, without having to be physically present, are increasing rapidly. As you can imagine, I find that appealing, not least because city life for all of us is so busy. How many times have you wished you had a double who could share your workload? Creating realistic digital surrogates of ourselves is an ambitious dream, but the latest technology suggests that it may not be as far-fetched an idea as it sounds.
When I was younger, the rise of technology pointed to a future where we would all enjoy more leisure time. But in fact the more we can do, the busier we become. Our cities are already full of machines that extend our capabilities, but what if we could be in two places at once? We’re used to automated voices on phone systems and public announcements. Now inventor Daniel Kraft is investigating how we can replicate ourselves visually. The question is, how convincing can an avatar be?
Interactive tutors could prove useful for massive open online courses (MOOCs) and for entertainment. It could be really exciting—digital actors that would be forever young and able to perform otherwise impossible feats. Our future idols might not even be real.
How we connect with the digital world is key to the progress we’ll make in the future. In the smartest cities, the smartest homes will be equipped with devices that are so intuitive they’ll be almost effortless to interact with.
When the typewriter was invented, it liberated the way we interact with machines. Nearly 150 years later, touch screens have unlocked new ways to communicate with the digital world. Recent AI landmarks, such as self-driving cars, or a computer winning at the game of Go, are signs of what is to come. Enormous levels of investment are pouring into this technology, which already forms a major part of our lives. In the coming decades it will permeate every aspect of our society, intelligently supporting and advising us in many areas including healthcare, work, education and science. The achievements we have seen so far will surely pale against what the coming decades will bring, and we cannot predict what we might achieve when our own minds are amplified by AI.
Perhaps with the tools of this new technological revolution we can make human life better. For instance, researchers are developing AI that would help reverse paralysis in people with spinal-cord injuries. Using silicon chip implants and wireless electronic interfaces between the brain and the body, the technology would allow people to control their body movements with their thoughts.
I believe the future of communication is brain–computer interfaces. There are two ways: electrodes on the skull and implants. The first is like looking through frosted glass; the second is better but risks infection. If we can connect a human brain to the internet, it will have all of Wikipedia as its resource.
The world has been changing even faster as people, devices and information are increasingly connected to each other. Computational power is growing and quantum computing is quickly being realised. This will revolutionise artificial intelligence with exponentially faster speeds. It will advance encryption. Quantum computers will change everything, even human biology. There is already one technique to edit DNA precisely, called CRISPR. The basis of this genome-editing technology is a bacterial defence system. It can accurately target and edit stretches of genetic code. The hope behind genetic manipulation is that modifying genes would allow scientists to treat genetic causes of disease by correcting gene mutations. There are, however, less noble possibilities for manipulating DNA. How far we can go with genetic engineering will become an increasingly urgent question. We can’t see the possibilities of curing motor neurone diseases—like my ALS—without also glimpsing its dangers.
Intelligence is characterised as the ability to adapt to change. Human intelligence is the result of generations of natural selection of those with the ability to adapt to changed circumstances. We must not fear change. We need to make it work to our advantage.
We all have a role to play in making sure that we, and the next generation, have not just the opportunity but the determination to engage fully with the study of science at an early level, so that we can go on to fulfil our potential and create a better world for the whole human race. We need to take learning beyond a theoretical discussion of how AI should be and to make sure we plan for how it can be. We all have the potential to push the boundaries of what is accepted, or expected, and to think big. We stand on the threshold of a brave new world. It is an exciting, if precarious, place to be, and we are the pioneers.
When we invented fire, we messed up repeatedly, then invented the fire extinguisher. With more powerful technologies such as nuclear weapons, synthetic biology and strong artificial intelligence, we should instead plan ahead and aim to get things right the first time, because it may be the only chance we will get. Our future is a race between the growing power of our technology and the wisdom with which we use it. Let’s make sure that wisdom wins.
Why are we so worried about artificial intelligence? Surely humans are always able to pull the plug?
People asked a computer, “Is there a God?” And the computer said, “There is now,” and fused the plug.