The long-term future of AI
In 1965, I. J. Good's article Speculations Concerning the First Ultraintelligent Machine included the following remark:
"Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control."
For most of the history of AI, this issue has been ignored. Indeed,
Good himself continues, "It is curious that this point is made so
seldom outside of science fiction." As the capabilities of AI systems
improve, however, and as the transition of AI into broad areas of
human life leads to huge increases in research investment, it is
inevitable that the field will have to begin to take itself
seriously. The field has operated for over 50 years on one simple assumption: the more intelligent, the better.
To this must be conjoined an overriding concern for the benefit of humanity. The argument is very simple:
- AI is likely to succeed.
- Unconstrained success brings huge risks and huge benefits.
- What can we do now to improve the chances of reaping the benefits and avoiding the risks?
Some organizations are already considering these questions, including the
Future of Humanity Institute at Oxford,
the Centre for the Study of Existential Risk at Cambridge,
the Machine Intelligence Research Institute in Berkeley,
and the Future of Life Institute at Harvard/MIT.
I serve on the Advisory Boards of CSER and FLI.
Just as nuclear fusion researchers consider the problem
of containment of fusion reactions as one of the primary
problems of their field, it seems inevitable that issues of control
and safety will become central to AI as the field matures. The
research questions are beginning to be formulated and range from
highly technical (foundational issues of rationality and utility,
provable properties of agents, etc.) to broadly philosophical.
Media, publications, etc.
- Stuart Russell, The Future of AI: What if We Succeed?, panel at IJCAI 13, Beijing, August 9, 2013.
- Stephen Hawking, Stuart Russell, Max Tegmark, and Frank Wilczek,
"Transcending Complacency on Superintelligent Machines."
Huffington Post, April 19, 2014.
- Stuart Russell, Transcendence: An AI Researcher Enjoys Watching His Own Execution, Huffington Post, April 29, 2014.
- Workshop on the Future of Artificial Intelligence held at AAMAS 14, Paris, May 6, 2014.
- Interview on the subject of the movie Transcendence with Stuart Russell, Christof Koch, on NPR Science Friday, May 9, 2014.
- Interview on the long-term future of AI with Stuart Russell, on Canadian Broadcasting Corporation's Spark with Nora Young, May 31, 2014. [transcript]
- Stuart Russell, Of Myths and Moonshine, contribution to the conversation on The Myth of AI on edge.org.
- Value Alignment, Berkeley IdeasLab Debate Presentation at the World Economic Forum, Davos, January 21, 2015.
- Panel discussion live on NHK TV (Japan), World Economic Forum, Davos, January 22, 2015.
- Interview on Hub Culture TV, World Economic Forum, Davos, January 23, 2015.
- Our Fear of Artificial Intelligence, by Paul Ford, MIT Technology Review, February 1, 2015.
- Stuart Russell, Will they make us better people?, contribution to the Annual Question, 2015 on edge.org.
- Appearance on the ED Show, MSNBC, discussing autonomous weapons, February 18, 2015.
- Invasion of the Friendly Movie Robots, by Don Steinberg, Wall Street Journal, February 26, 2015.
- The Future of Artificial Intelligence, with Stuart Russell, Eric Horvitz, Max Tegmark, on NPR Science Friday, April 10, 2015.