Epilogue: The Missing Pieces

Sutskever’s List promises to explain 90% of what matters, and it delivers on that promise with unusual clarity. It reconstructs the path from feature engineering to large-scale supervised learning, and from there to the self-supervised breakthroughs that made ChatGPT possible. Yet the remaining 10% is no afterthought: it encompasses major traditions in AI that advanced in parallel with language-model pretraining, including deep reinforcement learning, self-supervised vision, multimodal grounding, diffusion models, and the emergence of scientific foundation models such as AlphaFold. Just as importantly, the list stops at 2022, leaving the subsequent phase of AI development outside its scope. This epilogue therefore addresses not only the missing 10% but also the missing 100%: everything the list leaves out entirely.

Deep Reinforcement Learning

Masked Language Models

Self-Supervised Vision

The Missing 10%

The Missing 100%

Multimodal Foundations

Diffusion Models

AI for Science

Knowledge Distillation

Alignment

Tools and Knowledge

Mixture of Experts (MoE)

The Multimodal Jump

Reasoning Models