The long-term future of AI
In 1965, I. J. Good's article Speculations Concerning the First Ultraintelligent Machine included the following remark:
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."
For most of the history of AI, this issue has been ignored. Indeed,
Good himself continues, "It is curious that this point is made so
seldom outside of science fiction." As the capabilities of AI systems
improve, however, and as the transition of AI into broad areas of
human life leads to huge increases in research investment, it is
inevitable that the field will have to begin to take itself
seriously. The field has operated for over 50 years on one simple assumption: the more intelligent, the better.
To this must be conjoined an overriding concern for the benefit of humanity. The argument is very simple:
- AI is likely to succeed.
- Unconstrained success brings huge risks and huge benefits.
- What can we do now to improve the chances of reaping the benefits and avoiding the risks?
Some organizations are already considering these questions, including the
Future of Humanity Institute at Oxford,
the Centre for the Study of Existential Risk at Cambridge,
the Machine Intelligence Research Institute in Berkeley,
and the Future of Life Institute at Harvard/MIT.
I serve on the Advisory Boards of CSER, FLI, and MIRI.
Just as nuclear fusion researchers regard the containment of fusion
reactions as one of the primary problems of their field, it seems
inevitable that issues of control
and safety will become central to AI as the field matures. The
research questions are beginning to be formulated and range from
highly technical (foundational issues of rationality and utility,
provable properties of agents, etc.) to broadly philosophical.
Media, publications, etc.
- Stuart Russell, The Future of AI: What if We Succeed?, panel at IJCAI 13, Beijing, August 9, 2013.
- Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek, "Transcending Complacency on Superintelligent Machines," Huffington Post, April 19, 2014.
- Stuart Russell, Transcendence: An AI Researcher Enjoys Watching His Own Execution, Huffington Post, April 29, 2014.
- Workshop on the Future of Artificial Intelligence held at AAMAS 14, Paris, May 6, 2014.
- Interview on the subject of the movie Transcendence with Stuart Russell and Christof Koch, on NPR Science Friday, May 9, 2014.
- Interview on the long-term future of AI with Stuart Russell, on Canadian Broadcasting Corporation's Spark with Nora Young, May 31, 2014. [transcript]
- Stuart Russell, Of Myths and Moonshine, contribution to the conversation on The Myth of AI on edge.org.
- Stuart Russell and more than 7000 others, Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter, January, 2015.
- Value Alignment, Berkeley IdeasLab Debate Presentation at the World Economic Forum, Davos, January 21, 2015.
- Panel discussion live on NHK TV (Japan), World Economic Forum, Davos, January 22, 2015.
- Interview on Hub Culture TV, World Economic Forum, Davos, January 23, 2015.
- Our Fear of Artificial Intelligence, by Paul Ford, MIT Technology Review, February 1, 2015.
- Stuart Russell, Will they make us better people?, contribution to the 2015 Annual Question on edge.org.
- Invasion of the Friendly Movie Robots, by Don Steinberg, Wall Street Journal, February 26, 2015.
- The Future of Artificial Intelligence, with Stuart Russell, Eric Horvitz, and Max Tegmark, on NPR Science Friday, April 10, 2015.
- Concerns of an Artificial Intelligence Pioneer, by Natalie Wolchover, Quanta Magazine, April 21, 2015.
- How smart is today's artificial intelligence?, PBS NewsHour, May 8, 2015.
- Will your job get outsourced to a robot?, PBS NewsHour, May 20, 2015.
- Stuart Russell, The Long-Term Future of (Artificial) Intelligence, video of talk at the Centre for the Study of Existential Risk (Cambridge), May 15, 2015.
- Professor Stuart Russell's talk at the Centre for the Study of Existential Risk (Cambridge), by Calum Chace, May 15, 2015.
- The Good, The Bad and The Robot: Experts Are Trying to Make Machines Be 'Moral', by Coby McDonald, California Magazine, June 7, 2015.
- How Smart Should We Allow Robots to Get?, Science Friday, June 9, 2015.
- The ethics of AI: how to stop your robot cooking your cat, by John Havens, The Guardian, June 23, 2015.
- On AMC's 'Humans,' Wrong Approach to Robots May Be Just What Real Humans Need, by Hilary Brueck, Forbes Magazine, June 28, 2015.
- Are Super Intelligent Computers Really A Threat to Humanity?, panel discussion at the Information Technology and Innovation Foundation, Washington, DC, June 30, 2015. Subsequent media coverage:
- What the debacle of climate change can teach us about the dangers of artificial intelligence, by Matt McFarland, Washington Post, July 1, 2015.
- The Terminator question: Scientists downplay the risks of superintelligent computers, by Yuan Gu, PC World, July 1, 2015.
- Robot apocalypse unlikely, but researchers need to understand AI risks, by Grant Gross, IDG News Service, July 1, 2015.
- Should We Fear "Terminator"-Style Robot Uprisings? A Washington Think Tank Discusses, by Graham Vyse, InsideSources, July 1, 2015.
- How Do We Stop Artificial Intelligence from Overpowering Humans?, by Hallie Golden, NextGov, July 1, 2015.
- Which movies get artificial intelligence right?, by David Shultz, Science, July 17, 2015.
- Fears of an AI pioneer, by John Bohannon, Science, Vol. 349, no. 6245, July 17, 2015, p. 252.