19 The future of AI

 

This chapter covers

  • The limitations of deep learning
  • The nature of intelligence
  • What’s missing from current approaches
  • What the future might look like

To use a tool appropriately, you should not only understand what it can do but also be aware of what it can’t do. I’m going to present an overview of some key limitations of deep learning. Then, I’ll offer some speculative thoughts about the future evolution of AI and what it would take to get to human-level general intelligence. This should be especially interesting to you if you’d like to get into fundamental research.

19.1 The limitations of deep learning

There are infinitely many things you can do with deep learning, but deep learning can't do everything. So where does it fall short?

19.1.1 Deep learning models struggle to adapt to novelty

Deep learning models are big parametric curves fitted to large datasets. That's the source of their power: they're easy to train, and they scale well in terms of both model size and dataset size. But it's also the source of significant weaknesses, because curve fitting has inherent limitations.
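To make this concrete, here's a minimal, illustrative sketch (not from this book's codebase) of the core limitation: a parametric curve fitted to data interpolates well inside its training range but can break down badly outside it. We fit a polynomial, the simplest kind of "big parametric curve," to samples of the sine function and then evaluate it far from the training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: y = sin(x) sampled only on [-pi, pi]
x_train = rng.uniform(-np.pi, np.pi, size=200)
y_train = np.sin(x_train)

# Fit a degree-7 polynomial -- a small "parametric curve"
coeffs = np.polyfit(x_train, y_train, deg=7)

# Inside the training range, the fit is excellent...
x_in = np.linspace(-np.pi, np.pi, 100)
err_in = np.max(np.abs(np.polyval(coeffs, x_in) - np.sin(x_in)))

# ...but just outside it, predictions diverge wildly:
# the polynomial blows up while sin stays bounded in [-1, 1]
x_out = np.linspace(2 * np.pi, 3 * np.pi, 100)
err_out = np.max(np.abs(np.polyval(coeffs, x_out) - np.sin(x_out)))

print(f"max error inside training range:  {err_in:.4f}")
print(f"max error outside training range: {err_out:.1f}")
```

The curve never learned the *program* "sine is periodic"; it only memorized the shape of the data it saw. Deep networks are vastly larger curves, but the same failure mode applies when the test distribution departs from the training distribution.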

19.1.2 Deep learning models are highly sensitive to phrasing and other distractors

19.1.3 Deep learning models struggle to learn generalizable programs

19.1.4 The risk of anthropomorphizing machine-learning models

19.2 Scale isn’t all you need

19.2.1 Automatons vs. intelligent agents

19.2.2 Local generalization vs. extreme generalization

19.2.3 The purpose of intelligence

19.2.4 Climbing the spectrum of generalization

19.3 How to build intelligence

19.3.1 The kaleidoscope hypothesis

19.3.2 The essence of intelligence: abstraction acquisition and recombination

19.3.3 The importance of setting the right target

19.3.4 A new target: on-the-fly adaptation

19.3.5 ARC Prize

19.3.6 The test-time adaptation era

19.3.7 ARC-AGI 2

19.4 The missing ingredients: search and symbols