Chapter Nine

9 Safe Superintelligence

This chapter covers

  • What is intelligence?
  • Epicurus, Occam, Bayes, and Solomonoff
  • AIXI as the theoretical ceiling of Legg’s universal intelligence measure
  • The superintelligence canon (Good, Vinge, Kurzweil, Goertzel, Pennachin, Yudkowsky, and Bostrom)
  • Mistaking definitions and benchmarks for explanations
  • How Wittgenstein and Turing reframe intelligence as public tests

In 2008, Shane Legg’s doctoral dissertation, Machine Super Intelligence, offered an explicit definition of intelligence, proposed a way to measure it, and asked what a “superintelligence” would be in that formal sense. Legg defined intelligence as “an agent’s ability to achieve goals in a wide range of environments” and argued that, in principle, this capacity can be summarized by a single scalar. To construct his measure, Legg drew on Epicurus (keep every hypothesis consistent with the data), Occam (prefer the simplest), Bayes (update beliefs as evidence arrives), and Solomonoff (formalize simplicity as description length). Combining them yields a “universal prior” and, from it, a universal intelligence measure: the expected reward an ideal agent can accumulate across all computable environments, with simpler environments weighted more heavily.
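Written out, the measure is compact. The sketch below follows the notation of Legg and Hutter’s published formulation: E denotes the set of computable environments, K(μ) the Kolmogorov complexity of environment μ, and V_μ^π the expected cumulative reward of agent π interacting with μ.

\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}

The weight 2^{-K(μ)} is Occam’s razor made arithmetic: an environment whose shortest program is one bit longer counts half as much, while summing over all of E honors the “wide range of environments” clause. A superintelligence, in this formal sense, is an agent whose Υ approaches the theoretical maximum, the role AIXI plays in section 9.1.3.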

9.1 Machine Super Intelligence

9.1.1 Defining Intelligence

9.1.2 Universal Intelligence Measure

9.1.3 AIXI

9.1.4 Environment Taxonomy

9.1.5 Agents and Limits

9.2 AI Safety

9.2.1 Intelligence Explosion

9.2.2 Technological Singularity

9.2.3 AGI (as a Field) Is Born

9.2.4 Superintelligence

9.2.5 AI Foom

9.2.6 Impact

9.3 Explaining Too Little and Promising Too Much

9.3.1 Legg’s Functionalism

9.3.2 A Hint of Behaviorism

9.3.3 Meaning in Use

9.3.4 Turing’s Test

9.3.5 Definitions Masquerade as Explanations

9.3.6 Public Tests

9.3.7 Dirty Hands, Clear(er) Claims